The ethical paradox: Regulation keeping pace with technology – Med-Tech Innovation
Fiona Maini, principal, global compliance and strategy at Medidata, a Dassault Systèmes company, writes about the regulatory issues surrounding artificial intelligence (AI) as its presence continues to grow in medtech and healthcare.
Artificial intelligence (AI) is a broad term, but it is often defined as “software that can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with.” AI has wide applicability across the healthcare industry, from drug repurposing, imaging, and diagnostics to supporting healthcare providers with workflow management and using applications to assess a patient’s symptoms.
The pandemic has shed additional light on the full potential of AI in helping the healthcare industry speed up the delivery of drugs. With the use of AI, researchers were, among other things, able to run algorithms to determine which drugs and treatments could be repurposed to treat patients with COVID-19. But relentless technological innovation in healthcare brings with it regulatory challenges. What can be done to address the risks around the use of AI? To what extent can AI be regulated to ensure it is used for the greater good?
Ethical AI: considerations
While AI can be used for the greater good, it does pose some ethical challenges if not governed properly. The healthcare industry is guided by a set of core ethical principles, set in place and recognised decades ago, including guidelines like the Declaration of Helsinki. The declaration has played a central role in setting ethical boundaries in healthcare to ensure the industry is keeping patients’ best interests at heart.
It’s no surprise that, with AI’s rising use and applicability, a similar ethics-led approach has been adopted for the technology. Researchers from Harvard Law School and the Massachusetts Institute of Technology Media Lab published a paper on adversarial attacks on medical machine learning, including techniques that can corrupt otherwise-reliable systems. For example, it is possible to manipulate AI diagnostics by altering pixels on a scan so that it indicates an illness or tumour that isn’t there, which clearly goes against the principles adopted by the wider industry. Such attacks need to be monitored and properly legislated against.
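The pixel-level manipulation described above is a form of adversarial perturbation. As a rough illustration only (not the researchers’ actual method), the sketch below mounts an FGSM-style attack on an invented toy linear “diagnostic” model; the model, weights, and scan are all hypothetical, and real attacks target far more complex deep networks.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical linear classifier over a 64-pixel "scan":
# score > 0 is read as "tumour present".
w = rng.normal(size=64)   # one weight per pixel
b = -2.0                  # bias chosen so a clean scan scores negative

def score(image):
    return float(image @ w + b)

clean_scan = np.zeros(64)     # a benign scan: score = b < 0
assert score(clean_scan) < 0  # correctly classified as healthy

# FGSM-style step: nudge each pixel by a small epsilon in the direction
# that increases the score. For a linear model, that direction is sign(w).
epsilon = 0.1
adversarial_scan = clean_scan + epsilon * np.sign(w)

# Each pixel changes by at most epsilon (imperceptible on a real image)...
assert np.max(np.abs(adversarial_scan - clean_scan)) <= epsilon
# ...yet the classification flips to "tumour present".
print(score(adversarial_scan) > 0)  # → True
```

The point of the sketch is the asymmetry it demonstrates: a perturbation that is tiny per pixel can still move the model’s output across its decision boundary, which is why such attacks are hard to spot by eye.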
Regulating AI to ensure it remains ethical
To address ethical concerns raised by rapidly evolving technological advancements, regulators have launched several initiatives. For instance, the European Commission appointed members to a newly formed AI High-Level Expert Group, with the purpose of providing advice on the Commission’s AI strategy. The group has delivered Ethics Guidelines for Trustworthy AI, which argue for a human-centric approach to AI and outline guiding principles and seven core requirements AI systems need to meet to be considered trustworthy. These core requirements include human agency and oversight, privacy and data governance, and technical robustness and safety, amongst others.
These ethical considerations also fed into the publication of a draft proposal by the European Commission titled ‘The Artificial Intelligence Act’, the world’s first legal framework for the use of AI, which lays out a set of regulations to ensure the ethical use of AI within the EU. The proposal takes a four-tiered approach to the regulation of AI, with different rules applied depending on the category into which a product or system falls. The tiers cover AI systems that pose ‘minimal or no risk’, ‘limited risk’, ‘high risk’, and ‘unacceptable risk’. ‘Minimal or no risk’ includes AI systems like spam filters, while ‘unacceptable risk’ AI systems will be banned due to their threat to the safety, livelihoods, and rights of people.
In parallel, the US Food and Drug Administration (FDA) released an action plan in 2021 on AI and machine learning, which outlines how regulatory oversight can help deliver safe and effective software functionality to improve the quality of care that patients receive.
Leading from the front: a united industry
Regulators have moved swiftly to address ethical concerns raised by the use of AI in healthcare, through the review and implementation of guidelines and regulations. Whilst progress has been made, the pace of technological development continues to outstrip the implementation of such regulations. Close collaboration and dialogue between all industry stakeholders and regulatory bodies is imperative to close this gap and mitigate the misuse of data.
Fiona Maini will be participating in a panel session: Addressing the Challenges Presented by the Current Regulatory Device Environment, on Day One of Med-Tech Innovation Expo on 8th-9th June at the NEC, Birmingham. For more information visit www.med-techexpo.com.