European Commission President Ursula von der Leyen lauded the agreement as a “global first,” positioning the EU as a trailblazer in AI regulation by setting clear rules for the technology’s use.
The EU AI Act, slated to create an extensive legal framework for AI systems across the EU, focuses on safety, respect for fundamental rights, and the encouragement of AI investment and innovation in Europe. The majority of the Act’s provisions will apply two years after its entry into force.
Key Features and Debates
The EU AI Act adopts a risk-based approach, categorizing AI systems into four classes: unacceptable risk, high risk, limited risk, and minimal or no risk. Practices deemed to pose an unacceptable risk, including biometric categorization, the building of facial recognition databases, emotion recognition in workplaces, and certain applications of predictive policing, are banned outright, while high-risk systems are subject to stringent obligations.
During trilogue negotiations, contentious issues included the list of prohibited and high-risk AI systems, exceptions for biometric identification systems, and the regulation of general-purpose AI models. Negotiators also voiced concerns that excessive regulation could hinder innovation and harm European companies.
Scope of Application and Risk-Based Approach
While the final text is pending publication, the EU AI Act’s scope is expected to align with the OECD’s approach, likely applying to providers and deployers of AI systems. Exemptions include AI systems exclusively for military or defense purposes and those used solely for research and innovation.
The risk-based approach classifies AI systems based on their potential harm, with high-risk systems facing comprehensive compliance obligations, including risk mitigation, data governance, documentation, human oversight, transparency, and cybersecurity. Fundamental rights impact assessments and complaint mechanisms for affected citizens are integral parts of the agreement.
Safeguards for General-Purpose AI Models
Debates on the regulation of general-purpose AI models led to a tiered approach, distinguishing between horizontal obligations for all models and additional obligations for models with systemic risk. Transparency requirements, compliance with copyright law, and specific obligations for systemic risk models are highlighted in the compromise.
Enforcement Framework and Penalties
Enforcement will primarily fall to competent national market surveillance authorities, with a new European AI Office overseeing coordination at the European level. Fines for violations will vary based on the AI system, company size, and severity of the infringement, with proportionate caps for smaller companies and startups.
The Future
The EU AI Act is set to be officially adopted by the EU Parliament and Council and will enter into force shortly thereafter, with a two-year grace period before most obligations apply. The Act’s prohibitions will apply after six months, and obligations for general-purpose AI models become effective after 12 months. The AI Pact, launched by the European Commission, allows developers to commit voluntarily to key provisions ahead of these deadlines.
As the EU positions itself as a leader in responsible AI development, the effectiveness of the EU AI Act will be closely monitored and compared to approaches in other leading AI nations and international efforts to set AI guardrails.