Effective governance: Charting the future of responsible healthcare AI innovation
The power and promise of healthcare artificial intelligence (AI) in virtually every clinical dimension seems limitless. Preventing and detecting illness, improving diagnostic accuracy, facilitating treatment planning, accelerating research and discovery, enhancing patient engagement and experience, streamlining and automating clinical and administrative workstreams to optimize workforce management and reduce provider burnout, and empowering public health surveillance and population health management are but a few examples of AI’s potential in the healthcare space.
However, this power and promise comes in a Pandora’s box of novel, multi-dimensional enterprise risks (legal, regulatory, financial, operational, ethical and reputational) that go above and beyond those associated with other technology innovations. Many AI solutions, particularly dynamic and autonomous machine-learning algorithms, remain in the development and testing stages and have an as-yet unproven track record of safety or measurable return on investment. This inherent technological uncertainty is exacerbated by the lack of an adequate healthcare legal and regulatory scheme for responsibly managing AI’s unique liability and compliance challenges, and regulators will have difficulty adapting the current framework to the pace of AI innovation.
The governance imperative
Healthcare boards simply do not have the luxury of waiting for greater certainty in AI technology or the law. They must act now to position their organizations to maximize AI’s potential for transforming healthcare and be prepared to face and manage these risks head-on, at the front-end of any AI innovation initiative and throughout the AI’s life cycle. For the foreseeable future, this will require an unusually active level of board engagement.
The most important step that a board can take now is to create a disciplined yet flexible framework for exercising governance oversight. The sooner boards take this foundational step, the better they will position themselves and their organizations to make well-informed and prudent decisions in the effort to harness the transformative potential of healthcare AI.
Taking this step will require boards and senior leaders first to establish and maintain their “AI literacy.” While healthcare leadership need not have a deep and all-encompassing knowledge of the underlying technologies that drive AI innovation, they will need at least a high-level understanding of its broad spectrum of functionalities, sophistication and associated risks in order to make responsible decisions.
Developing a “home-grown” governance oversight framework
As noted above, the current healthcare legal and regulatory scheme lacks the direction and focus that boards, their management teams and their advisors are accustomed to relying on when managing compliance risks. Therefore, while a board’s existing corporate compliance program will provide a good starting point, leadership will need to turn to other resources to construct a “home-grown” governance oversight framework tailored to its organization’s particular needs. Such a framework will enable the board to effectively manage AI technology’s new and different liability and compliance challenges.
For example, various domestic and international governmental bodies, including the Food and Drug Administration (FDA), the Office of the President of the United States, the Office of Management and Budget, the Department of Health and Human Services and the World Health Organization, have published guidance addressing laws, policies and ethical principles that provide useful resources for boards.
Recognizing the unique nature and transformative potential of AI technology, the FDA has been blazing new regulatory trails in its efforts to adapt its medical device regulatory scheme to the unique nature and rapid pace of AI and machine-learning (ML) technology innovation. In January 2021, for example, the FDA released the Artificial Intelligence and Machine Learning (AI/ML) Software as a Medical Device Action Plan, setting forth an oversight framework that seeks to balance AI’s “ability to learn from real-world use and experience, and its capability to improve its performance” with the importance of ensuring that such solutions “will deliver safe and effective software functionality that improves the quality of care that patients receive.” The FDA tailors its regulatory oversight to the intended use of the AI solution (e.g., patient care and research versus healthcare operations) and to where a particular solution falls on the safety-risk spectrum.
In October 2021, the FDA joined forces with Health Canada and the United Kingdom’s Medicines and Healthcare Products Regulatory Agency (MHRA) to identify 10 guiding principles as a foundation for the development of safe, effective and high-quality AI/ML-enabled medical devices.
In December 2021, the FTC signaled plans to consider rulemaking on privacy and artificial intelligence in order to “curb lax security practices, limit privacy abuses, and ensure that algorithmic decision-making does not result in unlawful discrimination.” The FTC’s 2020 and 2021 guidance also emphasizes the importance of transparency, explainability and fairness in the use of AI in the consumer context.
In April 2021, the European Commission unveiled its long-awaited proposal, Laying down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts. Like the FDA, the Commission takes a risk-based approach to balancing the benefits and the risks of AI systems throughout their entire life cycle.
The U.K. just announced plans to pilot a new AI Standards Hub to help shape global AI standards. The Alan Turing Institute will lead the pilot, with the support of the British Standards Institution and the National Physical Laboratory. The Hub’s roles will include improving the governance of AI, complementing pro-innovation regulation, and unlocking the economic potential of these technologies to boost investment and employment now that the UK has left the European Union.
The World Health Organization’s publication, Ethics and Governance of Artificial Intelligence for Health, provides a broad overview of laws, policies and ethical principles that boards can draw on when considering AI applications for the delivery of healthcare services, research and development, and systems management. The WHO has also recommended a governance framework focused on issues such as consent, data protection and sharing, specific private-sector and public-sector interests, and the development of policy and legislation.
Additional guidance and resources are available from other international governmental bodies, from organizations with a track record of successful AI innovation and development, and from industry watchdogs, trade associations, standards-setting organizations, private-sector collaborations and public-private partnerships.
Finally, internal and external legal counsel together bring a wealth of expertise and experience with complex healthcare regulatory schemes, which will be invaluable for navigating the complexities and uncertainties of the evolving AI regulatory framework while striking the appropriate balance between opportunities and risks.
Key ingredients for an effective “home-grown” governance oversight framework
Recent AI innovation guidance and real-world experience point to a set of key ingredients for an effective healthcare AI innovation governance oversight framework.
The future of healthcare AI is as promising as it is precarious. Responsibly managing the potential benefits and risks is of paramount importance throughout an AI system’s entire life cycle. While the regulatory scheme is evolving at a slower pace than the technology itself, various domestic and international governmental bodies have issued helpful guidance and are making meaningful progress toward formal rulemaking, and other public- and private-sector sources have published guidance for navigating this difficult terrain. While there is no prescribed pathway or one-size-fits-all solution, boards can draw meaningful direction from these resources to develop their own frameworks for responsibly and successfully managing AI’s benefits and risks. There is no time like the present for healthcare boards to do so.