The European Union’s AI Act, a risk-based plan for regulating applications of artificial intelligence, has passed what looks to be the final big hurdle standing in the way of adoption after Member State representatives today voted to confirm the final text of the draft law.
The development follows the political agreement reached in December, clinched after marathon ‘final’ three-way talks between EU co-legislators that stretched over several days. After that, work began on turning positions agreed on scrappy negotiation sheets into a final compromise text for lawmakers’ approval, culminating in today’s Coreper vote affirming the draft rules.
The planned regulation sets out a list of prohibited uses of AI (aka unacceptable risk), such as using AI for social scoring; brings in governance rules for high-risk uses (where AI apps might harm health, safety, fundamental rights, the environment, democracy and the rule of law) and for the most powerful general purpose/foundational models deemed to pose “systemic risk”; and applies transparency requirements to apps like AI chatbots. But ‘low risk’ applications of AI will not be in scope of the law.
The vote affirming the final text will prompt a huge sigh of relief across much of Brussels. Ongoing opposition to the risk-based AI regulation, led by France and fuelled by a desire to avoid legal limits standing in the way of blitzscaling homegrown generative AI startups like Mistral AI into national champions that might challenge the rise of US AI giants, had raised the prospect of the regulation being derailed, even at this late stage.
In the event, all 27 ambassadors of EU Member States gave the text their unanimous backing.
Had the vote failed, there was a risk of the whole regulation foundering, with limited time for any renegotiation given the looming European elections and the end of the current Commission’s mandate later this year.
In terms of adopting the draft law, the baton now passes back to the European Parliament, where lawmakers, in committee and plenary, will also get a final vote on the compromise text. But given that the biggest backlash came from a handful of Member States (Germany and Italy were also linked to doubts about the AI Act putting obligations on so-called foundation models), these upcoming votes look academic. And the EU’s flagship AI Act should be adopted as law in the coming months.
Once adopted, the Act will enter into force 20 days after publication in the EU’s Official Journal. There will then be a tiered implementation period before the new rules apply to in-scope apps and AI models, with six months’ grace before the regulation’s list of banned uses of AI starts applying (likely around fall).
The phased entry into force also allows a year before the rules on foundational models (aka general purpose AIs) apply, so not until 2025. The bulk of the remaining rules won’t apply until two years after the law’s publication.
The Commission has already moved to set up an AI Office that will oversee the compliance of a subset of more powerful foundational models deemed to pose systemic risk. It also recently announced a package of measures intended to boost the prospects of homegrown AI developers, including retooling the bloc’s network of supercomputers to support generative AI model training.