Dec 10, 2023

A First Pass at AI Regulation in the EU

Last week, the EU reached agreement on the AI Act, the most significant attempt at regulating advanced artificial intelligence systems thus far.

Though we await the finalized text of the act, the draft discussed over the last several days clearly states its goal: to ensure AI systems are safe and respect the fundamental rights of citizens.

Based on reporting from The New York Times and The Guardian, the AI Act outlines the following rules and guidelines:

  • A risk-based approach with stricter rules for high-risk AI systems that could cause harm. Lower-risk systems (which include deepfake and generative AI models) face lighter requirements.
  • Bans on certain "unacceptable" uses of AI, including emotion recognition in schools or social scoring.
  • Requirements for high-risk systems to undergo rigorous assessments before market access.
  • Exceptions allowing law enforcement use of real-time biometric identification in public spaces, with necessary safeguards.
  • New rules introduced for general purpose AI systems and "high impact" models.
  • An AI Office created to oversee new/existing models and enforce rules.
  • Penalties for violations, with lesser penalties for smaller enterprises.
  • Fundamental rights impact assessments required before deploying high-risk AI systems.
  • Measures to support innovation like regulatory sandboxes for testing systems. 

Preventing Catastrophe

The main goal of the AI Act is clearly to prevent irreversible and catastrophic consequences stemming from the unchecked development and use of high-risk AI systems that could harm individuals on a mass scale. This includes (but is by no means limited to) autonomous weaponized drones with no human in the loop to prevent direct military actions, the bulk collection and abuse of citizens' biometric data, and the deployment of systems with a high propensity for bias that could, in turn, widen existing gaps in society.

The Reality Defender team and I believe these issues are critical to address immediately — not just by the EU, but by all nations and intergovernmental organizations. By quickly implementing these hard guardrails to prevent unthinkable events from happening, we can shift our fears and focus away from doomsday-type scenarios and onto other equally pressing matters in the world of AI.

“Limited Risk”

At the same time, generative AI-related issues are seen by the EU as "Limited Risk" problems compared to the scenarios mentioned above. While deepfakes cannot, say, launch missiles autonomously, they can sow discord to the point where the disinformation they spread may have unforeseen and comparably dire consequences.

This is why we take issue with labeling these risks as "limited." We welcome the proposed rules for governing these systems, which include requiring that creations be labeled as AI-generated, preventing the creation of illegal content, and summarizing training data to avoid the use of copyrighted material. That said, by downplaying systems that can influence individuals, entities, and governments — and thus potentially create just as much havoc on a societal level as "High Risk" systems — we believe EU regulators have greatly underestimated the risks that generative AI systems and models pose. We hope leaders explore these matters with more scrutiny in the near future.

The AI Act is a welcome step in the right direction at a time when any step toward regulating AI systems seemed far from certain. Though we implore EU regulators to more closely examine regulatory frameworks for generative AI systems — as we do with all regulatory bodies and governments — we hope to see other governments follow suit soon, using this act as a guide for their own enforceable legal frameworks and making the world safer from the many perils of artificial intelligence.
