Section 01 — Overview
The EU AI Act Risk Pyramid
The world's first comprehensive AI regulation (Regulation (EU) 2024/1689) takes a risk-based approach: the higher the potential harm, the stricter the rules.
At a glance: 8 banned practices · 8 high-risk categories · 10²⁵ FLOPs GPAI systemic-risk threshold · fines up to 7% of global turnover.
The EU AI Act classifies every AI system into one of four risk levels, with different obligations at each level: unacceptable risk (banned outright), high risk (strict compliance requirements), limited risk (transparency duties), and minimal risk (no mandatory obligations).
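As a rough mental model, the pyramid maps each tier to a headline obligation. The sketch below is illustrative; the example tier assignments are not legal determinations.

```python
from enum import Enum

class RiskTier(Enum):
    """The four tiers of the EU AI Act's risk pyramid."""
    UNACCEPTABLE = "banned outright (Article 5)"
    HIGH = "strict pre-market compliance (Annex I & III)"
    LIMITED = "transparency obligations"
    MINIMAL = "no mandatory obligations (voluntary codes)"

# Illustrative examples only; real classification requires legal analysis.
EXAMPLES = {
    "social scoring system": RiskTier.UNACCEPTABLE,
    "CV-screening tool for recruitment": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}

for system, tier in EXAMPLES.items():
    print(f"{system}: {tier.name} -> {tier.value}")
```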
Who Does It Apply To?
Any organisation placing an AI system on the EU market or putting it into service in the EU — regardless of where the organisation is based. Extraterritorial reach: US, UK, and other non-EU companies are affected if their AI touches EU users.
Provider vs Deployer
Provider: develops or places AI on the market. Carries the heaviest obligations. Deployer: uses AI in a professional context. Has obligations around monitoring, human oversight, and transparency to users.
Why It Matters
The EU has 450M consumers. Companies ignoring the AI Act risk fines up to €35M or 7% global turnover — and being barred from the EU market. It's also shaping global AI governance norms via the "Brussels Effect."
What AI systems are excluded from the Act?
The Act does not apply to: AI systems used exclusively for military, national security, or defence purposes; AI used for purely personal non-professional activities; AI used for scientific R&D (with conditions); and open-source models (partial exemptions for GPAI, but not for prohibited practices).
How does the Act define "AI system"?
Article 3(1): "a machine-based system designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments." This is intentionally broad — aligning with the OECD AI definition.
Section 02 — Article 5
Prohibited AI Practices
Eight categories of AI are banned outright, deemed too harmful to be permitted under any circumstances. These rules took effect on 2 February 2025.
Effective date: 2 February 2025. Violations carry the highest penalty tier: up to €35 million or 7% of global annual turnover. No grace period — these were the first provisions to apply.
1 · Subliminal & Manipulative Techniques
AI systems that deploy subliminal, manipulative, or deceptive techniques beyond a person's consciousness to significantly distort their behaviour in a way that causes or is likely to cause them or another person significant harm. Covers persuasive AI, dark patterns, and neuromarketing that bypasses rational agency.
2 · Exploitation of Vulnerabilities
AI that exploits vulnerabilities specific to particular groups — age (children, elderly), disability, or socioeconomic circumstances — to distort their behaviour in a way that causes or is likely to cause harm. Example: predatory lending algorithms targeting financially vulnerable people.
3 · Biometric Categorisation by Sensitive Attributes
AI systems that classify individuals based on biometric data to deduce or infer race, political opinions, religious or philosophical beliefs, trade union membership, or sexual orientation.
4 · Social Scoring
AI systems that evaluate or classify individuals or groups based on social behaviour or personality traits, where this leads to detrimental or unfavourable treatment in unrelated social contexts, or to treatment that is unjustified or disproportionate to the social behaviour. Earlier drafts limited the ban to public authorities; the final text covers private actors as well.
5 · Criminal Risk Profiling
AI used for risk assessments to predict the likelihood of an individual committing criminal offences, based solely on profiling or personality trait assessment. Law enforcement may still use risk assessment tools that rely on objective, verifiable, non-biometric data — the ban is on pure personality/profile-based prediction.
6 · Facial Recognition Database Scraping
Untargeted scraping of facial images from the internet or CCTV footage to build or expand facial recognition databases. This targets companies like Clearview AI, which scraped billions of images to build identification databases without consent; expanding existing databases is prohibited as well.
7 · Emotion Inference in Work & Education
AI systems that infer emotions of natural persons in the workplace or educational institutions — except where used for medical or safety reasons (e.g., detecting drowsiness in pilots). This bans "emotional surveillance" of employees and students via facial or voice analysis.
8 · Real-Time Remote Biometric ID in Public Spaces
Real-time remote biometric identification (e.g., live facial recognition in crowds) in public spaces for law enforcement — with narrow exceptions: searching for missing persons/victims of trafficking, preventing imminent terrorist threats, and identifying suspects of serious crimes (carrying a 3-year+ sentence). Requires prior judicial or administrative authorisation.
Section 03 — Annex I & III
High-Risk AI Systems
High-risk AI is permitted but must meet strict compliance requirements before market placement. There are two pathways to the classification: Annex I covers AI used as a safety component of products already regulated under EU product-safety law (machinery, medical devices, vehicles, toys); Annex III lists eight standalone use-case areas, shown below.
1 · Biometrics
Remote biometric identification (post-event, not real-time), biometric categorisation, emotion recognition, except the categories banned under Article 5.
2 · Critical Infrastructure
Safety components for management of critical infrastructure — electricity grids, water supply, transport networks, financial markets.
3 · Education & Training
AI determining access, admission, or assignment to educational/vocational institutions; evaluating learning outcomes; assessing exam performance.
4 · Employment & Workers
Recruitment, CV screening, job allocation, promotion/demotion decisions, performance monitoring, task allocation, contract termination.
5 · Essential Services
Creditworthiness assessment, insurance pricing, public benefit eligibility (social services, healthcare, housing, utilities access).
6 · Law Enforcement
Individual risk assessment (recidivism, criminal suspects), polygraph-like tools, crime analytics, evidence reliability evaluation, profiling.
7 · Migration & Border
Asylum claim assessment, visa application processing, border control surveillance, migration fraud detection, travel document verification.
8 · Justice & Democracy
AI assisting judicial decision-making, influencing outcomes of elections or referenda, targeting political advertising to manipulate voters.
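As a back-of-envelope screening aid, the Annex III list lends itself to a simple keyword helper. The trigger keywords below are invented for illustration; a match is only a prompt for proper legal review.

```python
# Annex III areas (abridged) with illustrative trigger keywords.
ANNEX_III = {
    "biometrics": ["biometric identification", "emotion recognition"],
    "critical infrastructure": ["electricity grid", "water supply", "traffic"],
    "education": ["admission", "exam scoring", "proctoring"],
    "employment": ["cv screening", "recruitment", "performance monitoring"],
    "essential services": ["creditworthiness", "insurance pricing", "benefits"],
    "law enforcement": ["recidivism", "polygraph", "evidence reliability"],
    "migration & border": ["visa", "asylum", "border control"],
    "justice & democracy": ["judicial decision", "election", "political ads"],
}

def flag_high_risk(description: str) -> list[str]:
    """Return Annex III areas whose keywords appear in the description."""
    text = description.lower()
    return [area for area, kws in ANNEX_III.items()
            if any(kw in text for kw in kws)]

print(flag_high_risk("CV screening model to rank job applicants"))
# -> ['employment']
```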
Section 04 — Chapter V
Transparency Obligations & GPAI Models
Two distinct regimes: (1) transparency obligations for limited-risk AI (chatbots, deepfakes), which apply with the Act's general application date in August 2026; (2) GPAI rules for foundation models like GPT-4, Claude, and Gemini, active from August 2025.
AI-Generated Content
AI-generated text, images, audio, and video must be labelled as AI-generated in a machine-readable format. Deepfakes must be disclosed as artificially generated or manipulated, with lighter disclosure duties for clearly artistic, satirical, or fictional works.
Chatbots & Conversational AI
Users interacting with AI systems must be informed they are talking to an AI, unless this is obvious from the context to a reasonably informed person. The obligation applies at the time of interaction; providers cannot simply assume users of entertainment chatbots "should know."
Recommendation Systems
AI systems generating personalised recommendations (content, products) must clearly disclose they are AI-driven when this is not obvious. Users must understand they're being targeted by automated personalisation.
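A minimal sketch of the machine-readable labelling idea follows. The schema and field names are hypothetical; in practice you would adopt an emerging provenance standard (e.g. C2PA-style manifests) rather than invent your own.

```python
import json
from datetime import datetime, timezone

def label_ai_content(payload: str, model_name: str) -> dict:
    """Wrap generated content with a machine-readable AI-generation marker.

    The field names here are illustrative, not a recognised standard.
    """
    return {
        "content": payload,
        "provenance": {
            "ai_generated": True,
            "generator": model_name,
            "generated_at": datetime.now(timezone.utc).isoformat(),
        },
    }

labelled = label_ai_content("A synthetic product description...", "example-model-v1")
print(json.dumps(labelled, indent=2))
```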
Section 05 — Article 99
Penalties & Enforcement
Three tiers of fines, the highest in any EU digital regulation. The tiers below show your organisation's maximum exposure based on global revenue.
Maximum Fines by Violation Type
🔴 Banned practice violation: 7% of global turnover or €35M, whichever is higher
🟠 Other compliance violation: 3% of global turnover or €15M, whichever is higher
🟡 Providing false information: 1% of global turnover or €7.5M, whichever is higher
⚠️ SMEs & startups pay the lower of the two amounts; large companies pay the higher.
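The arithmetic behind the page's original fine calculator is simple enough to sketch with the Article 99 figures above; the function and its tier names are illustrative.

```python
def max_fine(turnover_eur_m: float, tier: str, sme: bool = False) -> float:
    """Maximum exposure in EUR millions under Article 99.

    tier: 'banned' (7% / EUR 35M), 'compliance' (3% / EUR 15M),
          'false_info' (1% / EUR 7.5M).
    SMEs and startups pay the lower of the two amounts;
    everyone else pays the higher.
    """
    pct, cap = {"banned": (0.07, 35.0),
                "compliance": (0.03, 15.0),
                "false_info": (0.01, 7.5)}[tier]
    amounts = (pct * turnover_eur_m, cap)
    return min(amounts) if sme else max(amounts)

# A company with EUR 2bn global turnover violating a banned practice:
print(max_fine(2000, "banned"))           # 140.0 -> 7% exceeds EUR 35M
print(max_fine(100, "banned", sme=True))  # 7.0   -> SME pays the lower amount
```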
Enforcement Bodies
EU AI Office — Central enforcement for GPAI models. Investigates systemic risk violations. Located within the European Commission.
National Supervisory Authorities — Each member state designates national authorities for high-risk AI and other provisions. Cross-border cooperation required.
AI Board — Coordination body of national authorities. Issues guidelines, opinions, recommendations. No direct enforcement power.
Scientific Panel — Independent experts advising the AI Office on GPAI systemic risk evaluations.
Context vs GDPR
GDPR max fine: 4% global turnover. EU AI Act max: 7% — 75% higher. The Act signals that AI risks are considered more severe than data privacy risks. Companies already familiar with GDPR compliance programs can build on them but must add AI-specific risk management.
Section 06 — Practical Guide
Compliance Decision Guide
Work through the scoping questions below to determine what the EU AI Act requires of your organisation.
Are You In Scope?
Does your AI system interact with EU users or affect EU persons?
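A toy version of the decision tree follows. The role-to-steps mapping compresses the checklists later in this section; it is not the Act's own wording, and real scoping requires legal analysis.

```python
def obligations(in_eu_scope: bool, role: str, risk_tier: str) -> str:
    """Toy decision helper mirroring the guide's flow."""
    if not in_eu_scope:
        return "Outside the Act's scope (but note its extraterritorial reach)."
    if risk_tier == "banned":
        return "Prohibited practice: may not be placed on the EU market."
    steps = {
        "provider": "risk management, conformity assessment, CE marking, registration",
        "deployer": "intended use, human oversight, monitoring, logging, AI literacy",
        "gpai_provider": "technical docs, copyright policy, training-data summary",
    }
    return f"{role}: {steps.get(role, 'unknown role')}"

print(obligations(True, "provider", "high"))
```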
Key Compliance Steps by Role
Provider — Building or Placing AI on the Market
1. Classify your system using the risk tiers
2. For high-risk: implement risk management, data governance, logging, transparency documentation
3. Conduct conformity assessment (self-assessment or notified body)
4. Register high-risk systems in the EU database before placing them on the market (public-authority deployers must also register their use)
5. Affix CE marking where required
6. Establish post-market monitoring and incident reporting
7. Maintain technical documentation for 10 years after last placing on market (a record-keeping sketch follows this list)
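As referenced in step 7, a minimal record-keeping sketch. The field selection loosely follows the Annex IV technical-documentation headings; it is an organisational aid, not the official template.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class TechDocRecord:
    """Skeleton of a provider's technical-documentation record."""
    system_name: str
    intended_purpose: str
    risk_management_ref: str      # pointer to the risk management file
    data_governance_ref: str      # training/validation data description
    conformity_assessment_ref: str
    last_placed_on_market: date

    def retention_deadline(self) -> date:
        # Keep documentation for 10 years after the last placing on market.
        return self.last_placed_on_market.replace(
            year=self.last_placed_on_market.year + 10
        )

rec = TechDocRecord("cv-screener-v2", "rank job applications",
                    "docs/risk.md", "docs/data.md", "docs/ce.md",
                    date(2026, 9, 1))
print(rec.retention_deadline())  # 2036-09-01
```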
Deployer — Using AI in a Professional Context
1. Use high-risk AI only for intended purpose per provider instructions
2. Assign human oversight responsibilities to competent staff
3. Monitor system performance; report serious incidents to provider/market surveillance
4. For HR and public authority uses: conduct data protection impact assessment (DPIA)
5. Inform workers when AI affects employment decisions
6. Maintain logs of system operation where technically possible (a minimal logging sketch follows this list)
7. Ensure AI literacy of staff using or overseeing AI systems
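As referenced in step 6, a minimal structured operations log. The event schema is illustrative; the Act requires logs adequate to trace the system's functioning, not this exact format.

```python
import json
import logging
from datetime import datetime, timezone

logger = logging.getLogger("ai_system_ops")
logging.basicConfig(filename="ai_ops.log", level=logging.INFO)

def log_decision(system_id: str, input_ref: str, output: str,
                 human_reviewer: str | None = None) -> None:
    """Append one decision event to the operations log."""
    event = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "system": system_id,
        "input_ref": input_ref,   # a reference, not raw personal data
        "output": output,
        "human_oversight": human_reviewer,
    }
    logger.info(json.dumps(event))

log_decision("cv-screener-v2", "application:8421", "shortlisted", "hr.lead")
```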
GPAI Provider — Foundation Model Obligations
1. Prepare technical documentation per Annex XI
2. Implement copyright compliance policy (including takedown mechanisms)
3. Publish sufficiently detailed training data summary
4. Provide downstream API/integration providers with adequate information
5. If systemic risk: conduct model evaluation + adversarial testing before release (a compute-threshold check follows this list)
6. Establish incident reporting to EU AI Office
7. Participate in (or implement equivalent of) AI Office Code of Practice
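As referenced in step 5, a back-of-envelope check against the 10²⁵-FLOP systemic-risk presumption, using the common 6 × parameters × tokens training-compute heuristic. This is a rule of thumb, not the Act's measurement methodology.

```python
def training_flops(n_params: float, n_tokens: float) -> float:
    """Estimate training compute: ~6 FLOPs per parameter per token."""
    return 6 * n_params * n_tokens

SYSTEMIC_RISK_THRESHOLD = 1e25  # FLOPs: the Act's GPAI presumption

# Hypothetical model: 70B parameters trained on 15T tokens.
flops = training_flops(70e9, 15e12)
print(f"{flops:.2e} FLOPs -> systemic risk presumed: "
      f"{flops >= SYSTEMIC_RISK_THRESHOLD}")
# 6.30e+24 FLOPs -> below the 1e25 threshold
```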
Section 07 — Implementation
Implementation Timeline
The Act phases in over 36 months. Each milestone below lists what becomes active and what organisations need to have in place.
1 August 2024 — Entry into Force
Regulation (EU) 2024/1689 published in the Official Journal (12 July 2024) and enters into force 20 days later. The Act is now law — no provisions apply yet (other than governance setup), but the clock is running on all compliance timelines.
Already Active (Feb 2025)
All 8 prohibited AI practices are banned. AI literacy obligations apply — providers and deployers must ensure staff have sufficient AI knowledge. No more grace period for banned systems.
Coming Aug 2025
GPAI model rules take effect. AI Office operational. Codes of Practice for GPAI providers. National supervisory authorities must be designated. Notified body accreditation processes begin.
Coming Aug 2026–2027
Full high-risk AI compliance (Annex III): Aug 2026. High-risk AI in Annex I products (safety-component): Aug 2027. EU database for high-risk systems. Post-market monitoring requirements active.
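The phase-in dates lend themselves to a small lookup answering "what applies today?". The dates come from the Act's published application schedule; the helper itself is illustrative, and the simplification proposal discussed below may shift the later deadlines.

```python
from datetime import date

MILESTONES = [
    (date(2024, 8, 1), "Entry into force"),
    (date(2025, 2, 2), "Prohibited practices + AI literacy apply"),
    (date(2025, 8, 2), "GPAI rules; AI Office operational"),
    (date(2026, 8, 2), "General application incl. Annex III high-risk"),
    (date(2027, 8, 2), "Annex I embedded high-risk systems"),
]

def active_provisions(today: date) -> list[str]:
    """Return the milestones already in force on a given date."""
    return [label for d, label in MILESTONES if d <= today]

print(active_provisions(date(2025, 9, 1)))
# -> entry into force, prohibitions/literacy, GPAI rules
```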
The Simplification Proposal (2025): The European Commission proposed amendments to reduce the compliance burden — extending some high-risk timelines to 16 months (from 24), simplifying SME requirements, and reducing documentation fragmentation. These amendments are pending Council/Parliament approval and may shift some deadlines.
What about the EU AI Liability Directive?
Separate from the AI Act, the EU AI Liability Directive (still in progress) will introduce civil liability rules for AI-caused harm — allowing victims to sue for damages. It creates a disclosure obligation (courts can order AI providers to reveal technical documentation) and a rebuttable presumption of causality (if a provider fails to comply with the AI Act, there's a presumption their non-compliance caused the harm). Together, the AI Act + Liability Directive create both regulatory and civil law incentives for compliance.
How does this compare to other AI regulations?
USA: No federal AI law yet — executive orders and voluntary frameworks (NIST AI RMF). Sector-specific guidance from FTC, SEC, EEOC. More permissive approach.
UK: "Pro-innovation" approach — existing regulators apply AI principles to their sectors. No omnibus AI Act. May diverge from EU post-Brexit.
China: Sector-specific regulations (algorithm recommendations, deepfakes, generative AI). State-centric approach with content controls.
Global impact: The "Brussels Effect" — companies serving EU customers often apply EU standards globally to avoid maintaining parallel compliance programs, effectively exporting EU norms worldwide.
UK: "Pro-innovation" approach — existing regulators apply AI principles to their sectors. No omnibus AI Act. May diverge from EU post-Brexit.
China: Sector-specific regulations (algorithm recommendations, deepfakes, generative AI). State-centric approach with content controls.
Global impact: The "Brussels Effect" — companies serving EU customers often apply EU standards globally to avoid maintaining parallel compliance programs, effectively exporting EU norms worldwide.