An Update on the Artificial Intelligence Act: Progress, Battlegrounds, and Next Steps

In April 2021, the European Commission published the draft Artificial Intelligence Act (AIA), intended to create the world’s first all-encompassing regulatory framework for AI. Given that AI development is still nascent, and that the law is grounded in the precautionary principle, the AIA, as currently drafted, is expected to slow AI innovation and adoption in the EU.

Because the law’s size and scope are enormous, it is slowly churning its way through the EU’s institutions. By now, some of the main themes among the proposed amendments have emerged, as have the biggest points of contention between the various political actors involved in drafting and eventually passing the law.

The Council of Ministers, which represents the EU’s member states, has proposed a number of tweaks to make the AIA a tad less onerous. The European Parliament is politically divided, so the divergent positions of its protagonists are a leading indicator of the main fissures over the law’s purpose and content. Going forward, the key debates over what the final version of the law will look like will likely revolve around the definition of AI, which activities and sectors are classified as “high risk,” and the use of facial recognition technologies.

After the Commission proposes a law, it moves into the hands of the Council of Ministers and the European Parliament, which, in parallel, study the draft and begin a process of proposing and eventually voting on amendments. For the AIA, the Council moved faster than Parliament, with the Slovenian presidency producing the first set of compromises in November 2021, before the French presidency produced another two compromises this year. Whilst these are not official positions of the Council and do not amount to a formal consensus among member states, they indicate the direction of travel member states will likely pursue. Most of the Council’s proposals reflect a desire to make the AIA more workable and rein in some of the problematic requirements for “high-risk” AI in the original draft. For example:

The original AIA includes an obligation for AI systems to provide “interpretable” outputs, which is not always technically possible. The Council suggests replacing this with a more sensible mandate for users to be able to “understand and use the system appropriately.” Similarly, the Council recommends that human oversight of AI systems be required only where proportional and reasonable.

Most of these amendments would take some of the sting out of the Commission’s originally proposed obligations for “high-risk” AI, ensuring that the requirements the law places on such systems are realistic and workable. However, not all of the amendments are helpful. For example, the French compromise text calls for data minimization in AI by obliging systems to limit the collection of personal information to what is strictly necessary. Such a requirement is at odds with many forms of AI research and development and constitutes a direct interference with the design choices of AI developers.

On the parliamentary front, two reports on the AIA recently came out, one by the European Parliament’s legal affairs committee and another by Parliament’s industry committee. What is particularly interesting about these reports is that they contain concrete amendments that stake out some of the key areas where the most intense legislative battles will likely be fought over the rest of the year.

First, how should the AIA define AI? Both reports amend the definition to bring it in line with the OECD’s definition of AI, a welcome move since the OECD’s definition is narrower than the EU’s and would thus make the law more focused: instead of regulating most software, the two parliamentary committees recommend the AIA regulate “machine-based systems” that act with a degree of autonomy. Moreover, the reports suggest limiting the list of AI methods the AIA covers to machine learning and optimization, rather than all modern forms of software engineering. Those who still back the Commission’s original definition are now forced to reveal their hand: they see the AIA as a general-purpose “software law,” a position that manifestly contradicts the Commission’s stated purpose for the Act, which is to regulate AI exclusively. Even those who oppose definitional changes accept the premise that the AIA should not become a general “algorithm act.” However, the rapporteur for Parliament’s internal markets committee has stated: “We have pressures to try to reduce the scope of the definition. But I don’t think we will move very much from where it is now.” It thus seems that the AIA’s scope will become a major point of disagreement going forward, with Parliament likely refusing to budge on the current definition of AI.

Second, how should the AIA operationalize the “high-risk” definition? The Commission designed the AIA to place more requirements on systems deemed “high risk,” which raises the question of where the threshold for “high-risk” systems ought to lie. The Commission’s approach is to throw entire sectors of the economy under suspicion by labelling them “high risk” if they use AI, which would drive up the cost of the AIA far beyond the conveniently conservative analyses put out by Commission-friendly institutions. The legal affairs committee proposes a far more sensible approach: the AIA should apply only to systems that are both deployed in “high-risk” sectors and whose intended purpose makes significant harm likely (i.e., if an AI system is deployed in such sectors “in such a manner that significant harm is likely to arise”). This makes sense because the aim of the AIA is to regulate potentially harmful uses of AI, not to single out certain economic areas for greater regulatory burdens if they want to avail themselves of AI. Additionally, the report of the legal affairs committee does away with the Commission’s prohibition on “subliminal techniques beyond a person’s consciousness” in a “manner that causes psychological harm.” This is an unworkable idea that could end up covering many forms of marketing, advertising, and user-interface design, all of which seek to influence consumer choice; the subjectivity of terms like “psychological harm” compounds the problem. The legal affairs committee replaces this part of the AIA with a ban on AI systems that aim “to significantly and materially distort a person’s behaviour or directly cause that person harm.”

Third, while Parliament is resolutely opposed to remote facial recognition in public spaces (even with assurances that it would not be used to monitor people without a warrant or a compelling public safety purpose), a majority of member states, with outliers such as Germany, want to keep or expand exemptions for law enforcement. How this debate will play out remains to be seen.

So what’s next for the AIA? The remaining parliamentary committee report, jointly authored by the internal markets and civil liberties committees, is expected to be published in April. The deadline for amendments to the AIA is May. In an ideal world, Parliament would hammer out a compromise with a view to holding a final vote in November to agree its position. However, given the sheer number of parliamentary committees involved, this timeline looks ambitious. An important question is how much room for compromise exists on creating more business-friendly rules in the AIA. As for the Council, although the French presidency (which runs until the end of June) deems AI a priority, its initial aim of presenting a formal compromise text by March already looks doubtful. Arriving at a consensus position among EU member states will likely take longer, given the divergence of views on regulating AI. As such, it is highly unlikely that the AIA will be passed before 2023. After it is adopted, another two years will pass before the AIA’s requirements take effect, which means businesses should expect to have to comply with the law only after 2025.