Regulation (EU) 2024/1689 (in force)

EU AI Act

The world's first comprehensive AI regulation. Entered into force August 1, 2024. Sets out a risk-based framework for AI systems placed on the EU market or used in the EU — with obligations on providers, deployers, importers, and distributors.

Regulation: EU 2024/1689
Status: In force since 1 Aug 2024
High-risk enforcement: 2 Aug 2026
Max penalty: €35M or 7% of global annual turnover

Structure

Section | Coverage | Articles
Chapter I | General Provisions | Art. 1–4
Chapter II | Prohibited AI Practices | Art. 5
Chapter III | High-Risk AI Systems | Art. 6–49
Chapter IV | Transparency Obligations | Art. 50
Chapter V | GPAI Models | Art. 51–56
Chapter VII | Governance | Art. 64–70
Chapter XII | Penalties | Art. 99–101

Art. 5

Prohibited AI Practices

Plain English

Article 5 bans certain AI practices outright: they cannot be sold, deployed, or used in the EU under any circumstances (with narrow law enforcement exceptions). The ban covers: subliminal or deceptive manipulation causing harm, exploiting vulnerable groups, social scoring, predicting criminal behaviour solely from profiling or personality traits, untargeted scraping of facial images, emotion recognition in workplaces and schools, biometric categorisation inferring sensitive attributes, and real-time remote biometric identification in public spaces by law enforcement.

Key obligations

1. Stop development or deployment of any prohibited practice immediately
2. Review your AI system against all Art. 5 subcategories
3. Seek qualified legal counsel if you believe a narrow exemption may apply
4. Document your classification decision
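
As a rough illustration, the Art. 5 screening in step 2 can be expressed as a set check. The practice labels below are informal paraphrases of the Art. 5 prohibitions, not the regulation's wording:

```python
# Illustrative screening check against the Art. 5 prohibitions.
# The labels are informal paraphrases, not the regulation's wording.
PROHIBITED_PRACTICES = {
    "subliminal_or_deceptive_manipulation_causing_harm",
    "exploiting_vulnerable_groups",
    "social_scoring",
    "crime_prediction_solely_from_profiling",
    "untargeted_facial_image_scraping",
    "emotion_recognition_in_workplace_or_education",
    "biometric_categorisation_of_sensitive_attributes",
    "realtime_remote_biometric_id_by_law_enforcement",
}

def violates_art5(system_practices: set[str]) -> bool:
    # Any overlap with the prohibited set means: stop deployment,
    # document the finding, and seek qualified legal counsel.
    return bool(system_practices & PROHIBITED_PRACTICES)
```

A real Art. 5 review is a legal analysis, not a string match; this only shows the shape of the checklist.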

Source

Official text from EUR-Lex — Regulation (EU) 2024/1689 (EU AI Act). This text is in the public domain.

Art. 6

Classification Rules for High-Risk AI Systems

Plain English

Article 6 sets the two pathways to being classified as 'high-risk'. Path 1 (Art. 6(1)): your AI is a safety component of a product in Annex I (machinery, medical devices, aviation, vehicles, etc.) that requires third-party conformity assessment. Path 2 (Art. 6(2)): your AI falls into one of the eight Annex III use-case categories (biometrics, critical infrastructure, education, employment, essential services, law enforcement, migration, justice).

Key obligations

1. Determine which pathway (Annex I or Annex III) applies to your system
2. If high-risk: comply with all Art. 9–17 requirements before market placement
3. Establish a quality management system (Art. 17)
4. Create Annex IV technical documentation (Art. 11)
5. Register in the EU AI database (Art. 49)


Art. 9

Risk Management System

Plain English

Article 9 requires all high-risk AI system providers to establish a continuous risk management process — not a one-time assessment. The risk management system must be documented and maintained throughout the system's lifecycle. It covers: (1) identifying known and foreseeable risks, including to health, safety, and fundamental rights; (2) estimating and evaluating risks from intended use AND reasonably foreseeable misuse; (3) incorporating data from post-market monitoring; (4) adopting targeted mitigation measures. The system must be updated when you learn new information about how the AI behaves in deployment. This is the foundation of your entire compliance programme.

Key obligations

1. Establish a documented risk management system before placing the AI system on the market
2. Identify and analyse all known and reasonably foreseeable risks (including misuse scenarios)
3. Assess risks to health, safety, and fundamental rights
4. Define and document risk mitigation measures for each identified risk
5. Run risk management as a continuous lifecycle process, updating it when risks change
6. Integrate Art. 10–15 requirements into your risk management process
7. Document all risk assessments, evaluations, and mitigation decisions
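
One way to make the lifecycle requirement concrete is a risk register whose entries carry fields like these. This is a sketch under assumed names: the Act prescribes the process, not this schema.

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical risk-register entry reflecting the Art. 9 elements:
# identified risk, affected interest, misuse-scenario flag, mitigation,
# residual risk, and a review date that keeps the process continuous.
@dataclass
class RiskEntry:
    risk_id: str
    description: str
    affected_interest: str         # "health", "safety", or "fundamental_rights"
    from_foreseeable_misuse: bool  # intended use vs. reasonably foreseeable misuse
    mitigation: str
    residual_risk: str             # e.g. "low" after mitigation measures
    next_review: date              # updated as post-market data arrives

register = [
    RiskEntry(
        risk_id="R-001",
        description="Lower accuracy for under-represented user groups",
        affected_interest="fundamental_rights",
        from_foreseeable_misuse=False,
        mitigation="Rebalance training data; add per-group evaluation gates",
        residual_risk="low",
        next_review=date(2026, 8, 2),
    ),
]
```

The `next_review` field is the point: each entry is revisited as post-market monitoring data arrives, rather than assessed once and filed away.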


Art. 10

Data and Data Governance

Plain English

Article 10 requires that training, validation, and testing data for high-risk AI systems meet specific quality standards. You need data governance practices covering: how data was collected, its origin and original purpose (crucial for GDPR alignment), all pre-processing steps (annotation, labelling, cleaning), assumptions about what the data measures, assessment of data quantity and suitability, examination for bias relevant to your deployment context, and identification of gaps. This article has significant overlap with GDPR — if the data includes personal data, GDPR's lawful basis, purpose limitation, and data minimisation principles apply on top.

Key obligations

1. Document the origin and collection method for all training, validation, and testing datasets
2. Record all data pre-processing steps: annotation, labelling, cleaning, aggregation
3. Examine all datasets for bias relevant to your intended geographical and contextual deployment
4. Document assumptions made about what the data measures and represents
5. Assess the suitability of datasets for the intended AI system purpose
6. Identify data gaps or shortcomings and document how they are addressed
7. Align data governance with GDPR obligations if personal data is involved
8. Review training data when the system is significantly updated
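
The documentation points above could be captured in a per-dataset "datasheet" record. The field names and values below are illustrative assumptions, not terms from the regulation:

```python
# Hypothetical datasheet for one training dataset, covering the
# Art. 10 documentation points (origin, purpose, pre-processing,
# assumptions, bias checks, gaps). Schema and values are invented.
dataset_record = {
    "name": "loan_applications_v3",
    "origin": "internal CRM export, 2019-2023",
    "original_purpose": "credit decisioning",  # matters for GDPR purpose limitation
    "contains_personal_data": True,            # triggers GDPR obligations on top
    "preprocessing": ["deduplication", "dual-annotator labelling", "PII masking"],
    "assumptions": ["income field reflects gross annual income"],
    "bias_examination": "approval-rate parity checked across age bands",
    "known_gaps": ["few samples from applicants under 21"],
    "gap_mitigation": "targeted additional data collection planned",
}
```

Keeping one such record per dataset also gives you most of the data section of the Annex IV technical documentation for free.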


Art. 13

Transparency and Provision of Information to Deployers

Plain English

Article 13 requires that high-risk AI systems be delivered with comprehensive 'instructions for use' that give deployers enough information to understand what the system does, its limitations, and how to use it properly. Think of it as an AI system manual — but with regulatory teeth. The instructions must cover: the system's capabilities and limitations, the purpose and intended use, the metrics and benchmarks it was tested against, known biases or performance variations, residual risks after mitigation, and what human oversight is required. This is distinct from Annex IV technical documentation — Art. 13 is outward-facing documentation for deployers.

Key obligations

1. Prepare comprehensive instructions for use before placing the system on the market
2. Include the provider's identity and contact details
3. Describe the AI system's intended purpose and use cases
4. Document capabilities, limitations, and performance metrics
5. Describe the conditions under which the system was tested and validated
6. Disclose residual risks that the deployer must manage
7. Specify what human oversight measures the deployer must implement
8. Explain any known performance variations across demographic or geographic groups
9. Update the instructions for use whenever the system is significantly changed
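
A completeness check over the required sections might look like this. The section names paraphrase the obligations above and are our own, not the regulation's:

```python
# Required sections of the instructions for use, as informal labels.
REQUIRED_SECTIONS = {
    "provider_identity_and_contact",
    "intended_purpose",
    "capabilities_and_limitations",
    "performance_metrics",
    "testing_and_validation_conditions",
    "residual_risks",
    "human_oversight_measures",
    "known_performance_variations",
}

def missing_sections(instructions: dict) -> set[str]:
    # Returns the sections still absent from a draft document.
    return REQUIRED_SECTIONS - instructions.keys()
```

Such a check is useful as a release gate: block market placement while `missing_sections` is non-empty.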


Art. 14

Human Oversight

Plain English

Article 14 requires that all high-risk AI systems be designed to be effectively overseen by humans. This is not a policy obligation — it's a design requirement. The system itself must enable oversight. Specifically: humans overseeing the system must be able to understand its capabilities and limits, recognise automation bias, correctly interpret outputs, override or reject any output, and physically stop the system if needed. This directly intersects with GDPR Art. 22 (right not to be subject to purely automated decisions). For providers, this means building appropriate interfaces, audit logs, and override mechanisms. For deployers, it means assigning qualified staff and giving them actual authority to override.

Key obligations

1. Design the AI system with human-machine interfaces that enable effective oversight
2. Ensure humans can fully understand the system's capabilities and limitations
3. Implement mechanisms for overriding or rejecting AI outputs
4. Provide a physical or virtual 'stop' capability
5. Train oversight staff on the system's operation and known limitations
6. Address automation bias risk in operator training materials
7. Document which decisions require human review and which can be fully automated
8. For deployers: assign qualified individuals with actual authority to override


Art. 51–56

General Purpose AI Models

Plain English

Articles 51–56 create a separate compliance track for General Purpose AI (GPAI) model providers — companies that train and release foundation models (LLMs, image generators, etc.). All GPAI providers need: technical documentation, copyright compliance policy, information for downstream providers. If your model has systemic risk (trained with >10²⁵ FLOPs, or AI Office designation), you also need: adversarial testing, incident reporting, cybersecurity measures. Deadline: August 2, 2025.

Key obligations

1. Prepare and maintain technical documentation before making the model available
2. Implement a copyright compliance policy (Art. 53)
3. Provide technical information to downstream AI system providers
4. Notify the Commission without delay if the model meets the systemic-risk threshold (Art. 52)
5. For systemic-risk models: conduct adversarial testing (red-teaming) and report serious incidents
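
The compute threshold mentioned above is simple arithmetic; a sketch (the function name is our own):

```python
# Art. 51 presumes systemic risk when cumulative training compute
# exceeds 10**25 floating-point operations.
SYSTEMIC_RISK_FLOP_THRESHOLD = 10**25

def presumed_systemic_risk(training_flops: float) -> bool:
    # Necessary but not the whole test: the AI Office can also
    # designate models below the threshold as systemic-risk.
    return training_flops > SYSTEMIC_RISK_FLOP_THRESHOLD
```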


Art. 17

Quality Management System

Plain English

Article 17 requires high-risk AI providers to establish a Quality Management System (QMS) — a documented set of policies, procedures, and processes covering every aspect of how you design, develop, test, and maintain the AI system. This is analogous to ISO 9001 quality management, but specifically adapted for AI. The QMS must be a living document that integrates your Art. 9 risk management, Art. 72 post-market monitoring, and Art. 73 incident reporting into a unified governance framework. If you already have ISO 9001 or ISO/IEC 42001 certification, you can leverage those frameworks — but you'll need to add AI-specific elements. Regulators will inspect your QMS during market surveillance.

Key obligations

1. Establish a written QMS covering all aspects listed in Art. 17(1)(a)–(j)
2. Document your regulatory compliance strategy, including your conformity assessment approach
3. Define design control and verification procedures for AI system development
4. Document quality control procedures for development and testing
5. Define the testing and validation procedures to run before, during, and after development
6. Document which technical standards apply and justify any deviations from harmonised standards
7. Establish data management procedures covering the full data lifecycle
8. Integrate your Art. 9 risk management system into the QMS
9. Set up post-market monitoring procedures (Art. 72) as part of the QMS
10. Document incident reporting procedures (Art. 73)
11. Keep the QMS updated throughout the system lifecycle


Art. 49

Registration in the EU AI Database

Plain English

Article 49 requires providers of high-risk Annex III AI systems to register in the EU AI Act's public database (set up and maintained by the European Commission). Deployers who are public authorities also have registration obligations for certain Annex III systems. Registration must happen before market placement or putting into service. The registration must include: provider identity and contact details, the system's intended purpose and use cases, the risk category and Annex III category, the conformity assessment procedure used, and a link to the EU declaration of conformity. This creates a public registry of high-risk AI systems in the EU, searchable by the public and by national regulators.

Key obligations

1. Register before placing the high-risk AI system on the EU market or putting it into service
2. Register the provider's name and contact details in the EU AI database
3. Register the system's intended purpose, Annex III category, and risk classification
4. Include the conformity assessment procedure used (Annex VI or VII)
5. Provide a link to the EU Declaration of Conformity
6. Public authorities deploying Annex III cat. 1, 6, 7, or 8 systems must also register as deployers
7. Keep the registration updated when the AI system is significantly modified
8. Annex III point 2 (critical infrastructure) systems are registered at national level rather than in the EU database
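
The registration fields listed above might be assembled into a record like this. The official database schema is defined in the regulation's annexes; the key names and values here are illustrative only:

```python
# Hypothetical Art. 49 registration record; keys are our own names,
# not the official database fields. All values are invented.
registration = {
    "provider_name": "Example AI GmbH",
    "provider_contact": "compliance@example.eu",
    "intended_purpose": "CV screening for recruitment",
    "annex_iii_category": 4,        # employment and workers management
    "risk_classification": "high-risk",
    "conformity_assessment": "Annex VI (internal control)",
    "declaration_of_conformity_url": "https://example.eu/doc/declaration",
}
```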


Art. 50

Transparency Obligations for Certain AI Systems

Plain English

Article 50 creates disclosure obligations for 'limited risk' AI. If your AI talks to people: tell them it's AI. If your AI generates images, video, audio, or text: label it as AI-generated in a machine-readable format. If it does emotion recognition or biometric categorisation: notify users. These are lighter obligations than the full high-risk regime, but they are mandatory and enforceable.

Key obligations

1. Chatbots and conversational AI must inform users that they are interacting with an AI system, unless this is obvious from the context
2. Synthetic content (deepfakes, AI-generated images, video, audio) must be marked in a machine-readable format
3. Emotion recognition systems must notify the persons exposed to them
4. Biometric categorisation systems must notify the persons exposed to them
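
For textual content, a machine-readable marker can be as simple as structured metadata attached to the payload. The Act does not prescribe a schema (for images and video, provenance standards such as C2PA are a likely practical route), so everything below is an illustrative assumption:

```python
import json

# Illustrative machine-readable "AI-generated" marker wrapped around a
# piece of synthetic content. Field names are our own invention.
def label_ai_generated(payload: str, generator: str) -> str:
    record = {
        "content": payload,
        "ai_generated": True,    # the disclosure itself
        "generator": generator,
    }
    return json.dumps(record)

labelled = label_ai_generated("A synthetic product description.", "example-model-v1")
```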


Art. 99–101

Penalties

Violation | Maximum fine
Prohibited AI practices (Art. 5) | €35 million or 7% of global annual turnover, whichever is higher
Other obligations under the Act | €15 million or 3% of global annual turnover, whichever is higher
Supplying incorrect information to authorities | €7.5 million or 1.5% of global annual turnover, whichever is higher

For SMEs and startups, the lower of the fixed amount or the percentage applies. Penalties are imposed by national authorities; the Commission, supported by the AI Office, enforces the GPAI obligations.
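
The cap rule above is simple arithmetic; a sketch (the function and parameter names are our own):

```python
# Art. 99 fine logic: the applicable cap is the HIGHER of the fixed
# amount and the turnover percentage, except for SMEs and startups,
# where the LOWER of the two applies.
def fine_cap(turnover_eur: float, fixed_eur: float, pct: float, is_sme: bool) -> float:
    percentage_based = turnover_eur * pct
    if is_sme:
        return min(fixed_eur, percentage_based)
    return max(fixed_eur, percentage_based)

# Prohibited-practice tier for a firm with €1bn global turnover:
# 7% of €1bn exceeds the €35M fixed amount, so the percentage governs.
cap = fine_cap(1_000_000_000, 35_000_000, 0.07, is_sme=False)
```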
