AI Regulation: Threat to Innovation or Timely Intervention?

The European Union is the first major power to sound the regulatory klaxon in an attempt to govern the explosion in artificial intelligence-based technology. While some may see this as a threat to a potentially transformative area of innovation, such intervention is timely before it becomes impossible to cage the AI “beast.”

In proposals published in April, the EU outlined that it would ban “unacceptable” uses of AI, which it defines as “AI systems considered a clear threat to the safety, livelihoods and rights of people.” And, while the EU has taken the lead, it will not be long before others follow.

Indeed, beyond AI, there is a general trend toward closer scrutiny of the technology sector with President Joe Biden installing two advocates for regulation within his administration—Lina Khan, just approved by a Senate panel to be an FTC commissioner, and Tim Wu, on the National Economic Council—and the U.K. planning to introduce a new code of practice for technology companies in a bid to curb the domination of tech giants. Even the Pope is getting involved, with The Rome Call for AI Ethics paper published in 2020.

This carries serious consequences for those at the sharp end of AI innovation, not least the corporate conglomerates dominating the U.S. tech scene, which currently have their eyes on global market opportunities. New and emerging technology does not respect geographical boundaries, and once discovered, it is swiftly adopted and adapted in myriad applications across multiple jurisdictions—such is the demand for the latest “tech fix.” But technology innovators could be stopped in their tracks by a more restrictive regulatory environment.

Taking Stock of AI and Bias

However, is a moment to pause and take stock necessarily a bad thing? Recent developments have signaled a need for greater regulation of the ethical application of AI-based technologies, specifically around the integrity of the data behind decision-making algorithms. In short, AI can be biased, and this has proven to have significant and damaging consequences, reinforcing social inequalities and compromising civil rights.

In the 2020 Netflix documentary, Coded Bias, MIT Media Lab researcher Joy Buolamwini powerfully demonstrates her accidental discovery of the way AI systems can amplify racism (as well as other forms of discrimination), showing how facial recognition technology could not register her face until she put on a white mask. If AI is being built in a way that effectively whitewashes the world or perceives men to be superior, what are the consequences when it is helping us to determine who gets a job, is admitted to a university, or qualifies for medical care?

Buolamwini has since gone on to form the Algorithmic Justice League, which is “building a movement to shift the AI ecosystem towards equitable and accountable AI.” And there is an ever-louder rallying call coming from within the technology community (including those at the very top of big tech, like Google) for citizens to wake up and engage with this issue before we are driven blindly into the future, shackled to the prejudices of the past.

In recent years, there has been evidence of self-regulation, with IBM’s AI ethics board and Google announcing just this month that it will double its AI ethics research staff over the coming years. But while this intent among tech companies to embed guiding principles that ensure AI is used for good (rather than bad) is welcome, it is not something for technologists to determine on their own. They should, of course, be involved, and I would urge all those at the forefront of developing AI technology, no matter where they are based, to engage in this EU consultation and help to shape the emerging legislative framework, which will inevitably become the blueprint on which all future regulation is based.

But the discussion must be broader because AI is a truly global issue impacting all areas of society—from social influencers to philosophers, technologists to psychologists and sociologists, everyday consumers to industrialists—and everyone must help in defining the acceptable role of AI, rather than allowing technology to dictate to society.

Undoubtedly, AI will solve some of the world’s biggest problems in health, the environment, education, and mobility, but it will not deliver the future we want if we fail to consider what that future should look like. So, while on the surface regulation may appear restrictive, it is in fact an opportunity to determine the parameters within which this burgeoning area of tech can legitimately flourish.

This column does not necessarily reflect the opinion of The Bureau of National Affairs, Inc. or its owners.

Saiful Khan is a partner and patent attorney at European intellectual property law firm Potter Clarkson LLP. With over two decades of both in-house and private practice experience, he specializes in the field of electronics and software.