EU Council sets path for innovation measures in AI Act’s negotiations


EU countries have taken sides on the measures in support of innovation in the upcoming AI rulebook.

The AI Act is a landmark proposal to regulate Artificial Intelligence based on its potential to cause harm. The draft law is in the last phase of the legislative process, the so-called trilogues between the EU Council, Parliament and Commission.

The next trilogue is scheduled for 18 July, when EU policymakers are set to agree on some of the most controversial parts of the text, notably the innovation measures, where the positions of the co-legislators are complementary rather than conflicting.

The Spanish Presidency of the EU Council of Ministers, which will represent member states in the negotiations, circulated an options paper on this part of the text, seen by EURACTIV, that was discussed last Wednesday (5 July) at the Telecom Working Party, a technical body of the Council.

“The Presidency would like to give a preview to the delegations of the options concerning articles on sandboxes and innovation (51-55), before they are going to be presented to Coreper [Committee of Permanent Representatives] as part of the revised mandate,” the text reads.

Regulatory sandboxes

Sandboxes are controlled environments where companies can experiment with new AI applications under the supervision of a competent authority, which in turn gets to learn more about the technology.

Whilst the initial version of the AI Act included the possibility for national authorities to establish a sandbox, the European Parliament made this provision mandatory to ensure that even companies in smaller member states have access to one.

The EU Council kept this measure voluntary, and eight countries asked on Wednesday to stick to this approach.

However, four countries agreed to accept the MEPs’ formula, which introduces the principle that sandboxes could be set up jointly with other member states. Four more national governments supported this approach, with the additional possibility for countries to join sandboxes at the EU level.

Presumption of conformity

The Parliament’s position also entails granting AI developers that exit a sandbox a presumption of conformity for their systems, as an incentive to participate.

However, the Spanish Presidency notes that this approach might mean that the supervisory authorities not in charge of the sandbox lose control over the compliance process and that the role of certified auditors, the notified bodies, is ignored.

“Additionally, it could have a negative impact on competition, situating companies participating in sandbox in a different position with regard to those that don’t,” the paper continues.

In this case, too, EU countries were somewhat split. Nine member states pushed to keep the Council text, whilst only one asked to accept the Parliament’s.

However, five national governments suggested accepting the parliamentarians’ text, subject to the compulsory inclusion of the sandbox results in the declaration of conformity for AI systems at high risk of causing harm, which the relevant authorities and vetted auditors would then consider.

Two more countries supported this last option while also including notified bodies in the sandbox process, which the Spanish Presidency stressed would increase the costs of establishing a sandbox.

Real-world testing

The Council’s position included testing in real-world conditions, meaning that AI providers would get to test their models outside of the laboratory (and indeed outside of the sandbox) for more realistic experiments.

This real-world testing would have to follow safe test procedures and be subject to the authorisation of the relevant market surveillance authority. However, MEPs fear that real-world testing risks causing harm to people even under these conditions and did not include this possibility.

Seven countries backed the Council text in this case, while only one supported the Parliament’s approach. However, four countries were open to limiting real-world testing within the regulatory sandbox.

Secondary legislation

The final topic EU countries discussed was the type of legal instrument the European Commission should use to determine the modalities and conditions under which regulatory sandboxes should operate, as well as the specific conditions for SMEs.

The overwhelming majority of member states pushed for an implementing act. This secondary legislation only needs to go through a committee of national representatives, as opposed to a delegated act, requested by the EU Parliament, in which MEPs have more of a say.

[Edited by Nathalie Weatherald]
