EU’s AI Regulatory Sandboxes Need Fixing

The EU’s Artificial Intelligence (AI) Act is on track to become the world’s first and most sweeping regulatory framework for AI. But fears remain that overzealous regulation and high compliance costs will unduly deter innovators and have a chilling effect on the EU’s AI ecosystem. Ideally, policymakers should revise the AI Act to make it significantly less restrictive. Absent that, the AI Act proposes an “AI regulatory sandbox” that allows companies to test AI systems for a limited time before placing them on the market. Unfortunately, as drafted, the sandbox is more of a sandbag—weighing down firms with more regulatory complexity while offering them little respite from existing rules. To fix this, policymakers should revise the AI regulatory sandbox to encourage more regulatory experimentation, give firms equal access regardless of size, and allow foreign companies to participate.

Articles 53-55 of the AI Act propose using regulatory sandboxes to create “a controlled environment that facilitates the development, testing, and validation of innovative AI systems for a limited time before their placement on the market.” The AI Act envisions these sandboxes operating at the national level under common rules, supervised by the European Data Protection Supervisor, Member States, or both. The AI Act does not say exactly what these sandboxes would permit, nor does it require them, so some Member States may decide not to create them.

The concept of regulatory sandboxes is not new: 13 Member States already use sandboxes across banking, insurance, and securities markets. Policymakers use sandboxes to sidestep outdated regulations, understand novel technologies and their applications, and inform regulatory adjustments. Sandboxes are no Wild West where regulators suspend all laws and fundamental rights. Instead, they are environments where regulators derogate from specific rules so that firms can experiment, and make mistakes, with new technologies and business models that do not fit neatly within existing regulatory frameworks, while regulators maintain oversight and accountability. Firms benefit from the opportunity to bring a product to market more quickly, test their business model with consumers, and work cooperatively with regulators to ensure sufficient consumer protection.

Unfortunately, there is no sign that the AI Act will allow participants to benefit from regulatory flexibility. Indeed, the AI Act explicitly notes that regulators may not offer liability protection. Article 54 contemplates allowing “further processing of personal data,” but the European Data Protection Supervisor, which, as noted earlier, would have a governance role over the AI regulatory sandboxes, has already objected and demanded that all processing comply with the GDPR. So what benefit does the AI regulatory sandbox offer companies? Apparently little: Recital 71 warns that sandbox participants will be subject to the additional burden of “strict regulatory oversight.”

The AI Act needs a strong experimentation clause to ensure AI sandboxes work. Experimentation clauses are provisions in regulatory frameworks that allow authorities to exercise a degree of discretion when enforcing or implementing rules. Experimentation clauses provide the legal basis for regulatory flexibility in sandboxes and, as the Council recommended in 2020, are especially well suited to disruptive innovative technologies, products, services, or approaches. For instance, in 2021, Germany’s Passenger Transportation Act was amended to include an experimentation clause allowing the government to derogate from its provisions in order to trial new types or modes of transport. This leeway has led to successful tests of new types of transport: AI-driven parcel delivery vehicles, automated shuttle pods, and other forms of autonomous public transport.

In the AI Act, an experimentation clause could allow the enforcement and implementation authorities to act on a case-by-case basis and, by providing a degree of flexibility, create the right incentives for innovators to take part. The clause could read: “In order to allow for the testing of new AI systems, the Member State competent authorities may, upon the request of the provider or user, derogate from certain provisions of this Act or the GDPR for the period for which the AI system operates in the sandbox.” This clause would allow authorities, for example, to waive conformity assessment requirements.

AI regulatory sandboxes should be available to all businesses, regardless of their size. Unfortunately, the AI Act unfairly provides “small-scale providers and start-ups with priority access” to the AI regulatory sandboxes. If the goal is to maximize the EU’s AI ecosystem, it should not matter whether participants are large or small businesses. By prioritizing small businesses’ access to the AI regulatory sandboxes, the AI Act would limit AI innovation in large businesses, hurt consumer welfare, and undermine the EU’s AI competitiveness.

AI regulatory sandboxes should also be open to non-EU companies. Allowing foreign companies to participate will help achieve the EU’s goal of leading the global adoption of AI by ensuring its businesses and citizens have access to best-in-class AI applications. Unfortunately, the proposal appears to be moving in the wrong direction: draft parliamentary amendments require that data from the sandbox shall not be “transferred to a third country outside the Union,” making it significantly harder for non-EU companies to participate. If the EU wants to set global standards, it should clarify in the AI Act that non-EU companies can test their systems in its regulatory sandboxes before entering the EU market.

AI regulatory sandboxes can boost businesses and help the EU embrace a more nimble regulatory environment for emerging technologies. Including AI regulatory sandboxes is one of the most promising aspects of the AI Act. If done right, AI sandboxes can show the world that the EU is—contra its reputation—open for business.
