
EU AI Act Risk Levels: A Quick Guide to Compliance by System Type

The EU AI Act Explained: A Summary of Prohibited, High, Limited, and Minimal Risk AI Systems

Written by Sean Gibson
Updated this week

Please be advised that this summary table is for informational purposes only and does not constitute legal advice. It is not a substitute for consulting the official text of the European Union's Artificial Intelligence Act (AI Act) and its annexes, which are the only legally binding source. The classifications, specific obligations, exceptions, and definitions outlined in the official Act are subject to change and interpretation by EU institutions and national authorities. Users must independently consult the definitive, up-to-date text of the EU AI Act to determine their full legal and technical compliance obligations.


EU AI Act Risk Categories: A Summary

The Act defines four risk categories. For each category below, we give a description with examples of prohibited or regulated uses, followed by the key obligation that applies.

Prohibited Uses

AI systems deemed to pose an unacceptable risk to fundamental rights and EU values. Examples include:

- Social scoring systems: government-operated comprehensive social credit systems.

- Real-time biometric identification in public spaces: live facial recognition for law enforcement, with limited exceptions.

- Manipulative AI systems: systems that use subliminal or deceptive techniques to distort people's behavior and cause harm.

- AI for emotion recognition: systems that infer emotions in workplace and educational settings.

- Biometric categorization systems: categorization based on sensitive characteristics such as race or religion.

- Predictive policing for individuals: assessing an individual's risk of criminal behavior.

- Untargeted biometric data scraping: harvesting facial images from the internet or CCTV to build recognition databases.

- AI exploiting vulnerabilities: systems targeting specific groups based on age, disability, or social/economic circumstances.

Key Obligation: Complete ban.

High-Risk AI Systems

AI systems that create significant potential harm to health, safety, or fundamental rights. These are categorized into Safety Components (Annex I) and Specific Use Cases (Annex III). Examples include:

- Safety Components: medical devices (e.g., surgical robots), automotive AI (e.g., autonomous driving), aviation AI.

- Biometric Systems: biometric identification for access control and border management.

- Critical Infrastructure: AI in energy grid management and traffic control systems.

- Education/Training: student assessment AI, educational admission systems.

- Employment/HR: recruitment AI (CV screening), performance evaluation AI.

- Essential Services Access: credit scoring systems, insurance underwriting AI, social benefit systems.

- Law Enforcement: criminal risk assessment, predictive policing (crime hotspot prediction).

Key Obligation: Comprehensive compliance, comprising:

- Risk management system

- Data governance

- Technical documentation

- Logging/traceability (a minimal logging sketch follows this section)

- Conformity assessment

- Human oversight

- Robustness, accuracy, and security
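
For teams working toward the logging/traceability obligation, here is a minimal, illustrative Python sketch. The Act requires high-risk systems to record events automatically, but it does not prescribe any particular format or library; the function name, record fields, and the "credit-scorer-1.4.2" identifier below are all hypothetical.

```python
import json
import logging
from datetime import datetime, timezone

# Hypothetical structured logger for a high-risk system's decisions.
logger = logging.getLogger("ai_decision_log")
logger.setLevel(logging.INFO)
logger.addHandler(logging.StreamHandler())

def log_decision(model_version: str, input_ref: str, output: str,
                 reviewer: str | None = None) -> None:
    """Record one model decision with a timestamp and traceable references."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,  # which model produced the output
        "input_ref": input_ref,          # pointer to the stored input data
        "output": output,                # the decision or score produced
        "human_reviewer": reviewer,      # supports the human-oversight duty
    }
    logger.info(json.dumps(record))

# Example call with made-up identifiers:
log_decision("credit-scorer-1.4.2", "application/78213", "score=0.62")
```

Keeping each record self-describing (timestamp, model version, input reference) is what makes individual decisions traceable after the fact; whether you write to stdout, a database, or an append-only store is an implementation choice the Act leaves open.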

Limited Risk AI Systems

AI systems that present specific risks (e.g., manipulation or lack of transparency) but are not high-risk. Examples include:

- Conversational AI: chatbots and virtual assistants.

- Content Generation: AI-generated images (deepfakes) and text.

Key Obligation: Transparency obligations. Users must be informed that they are interacting with AI, and AI-generated content must be disclosed (a minimal disclosure sketch follows this section).
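
One common way to meet the chatbot disclosure obligation is to prepend a notice to the first response in a session. The Python sketch below is a hypothetical illustration; the notice wording and the generate_answer stub are placeholders, not text mandated by the Act.

```python
AI_DISCLOSURE = "You are chatting with an automated AI assistant, not a human."

def generate_answer(user_message: str) -> str:
    """Placeholder for the actual model call."""
    return f"Echo: {user_message}"

def reply(user_message: str, first_turn: bool) -> str:
    """Prefix the first response in a session with an AI disclosure."""
    answer = generate_answer(user_message)
    return f"{AI_DISCLOSURE}\n\n{answer}" if first_turn else answer

print(reply("What are your opening hours?", first_turn=True))
```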

Minimal Risk AI Systems

AI systems that pose little to no risk to fundamental rights or safety. Examples include:

- Business Applications: spam filters, inventory management AI.

- Entertainment/Gaming: video game AI (non-conversational), content recommendation.

- Productivity Tools: grammar checkers, translation services, search algorithms.

- Technical Applications: network optimization, weather prediction models.

Key Obligation: No specific obligations; such systems can be developed and deployed freely, subject to existing law.
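
As a first-pass triage aid, the categories above can be encoded as a simple lookup, as in the hypothetical Python sketch below. This only illustrates the shape of the classification; the use-case keys are invented, and a real determination turns on the Act's definitions and annexes, not on string matching.

```python
from enum import Enum

class RiskCategory(Enum):
    PROHIBITED = "prohibited"  # complete ban
    HIGH = "high"              # comprehensive compliance
    LIMITED = "limited"        # transparency obligations
    MINIMAL = "minimal"        # no specific obligations

# Invented keys for a first-pass triage; legal review makes the final call.
TRIAGE_TABLE = {
    "social_scoring": RiskCategory.PROHIBITED,
    "cv_screening": RiskCategory.HIGH,
    "credit_scoring": RiskCategory.HIGH,
    "customer_chatbot": RiskCategory.LIMITED,
    "spam_filter": RiskCategory.MINIMAL,
}

def triage(use_case: str) -> RiskCategory | None:
    """Return a provisional category, or None when unmapped."""
    return TRIAGE_TABLE.get(use_case)

assert triage("cv_screening") is RiskCategory.HIGH
```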
