EU AI Act
The world's first comprehensive AI regulation. Entered into force August 1, 2024. Sets out a risk-based framework for AI systems placed on the EU market or used in the EU — with obligations on providers, deployers, importers, and distributors.
- Regulation: EU 2024/1689
- In force: 1 Aug 2024
- High-risk enforcement: 2 Aug 2026
- Max penalty: €35M / 7%
Structure
| Chapter | Coverage | Articles |
|---|---|---|
| Chapter I | General Provisions | Art. 1–4 |
| Chapter II | Prohibited AI Practices | Art. 5 |
| Chapter III | High-Risk AI Systems | Art. 6–49 |
| Chapter IV | Transparency Obligations | Art. 50 |
| Chapter V | GPAI Models | Art. 51–56 |
| Chapter VII | Governance | Art. 64–70 |
| Chapter XII | Penalties | Art. 99–101 |
Art. 5
Prohibited AI Practices
Plain English
Article 5 bans certain AI practices outright — they cannot be sold, deployed, or used in the EU under any circumstances (with narrow law enforcement exceptions). The bans cover: subliminal or deceptive manipulation causing harm, exploiting vulnerable groups, social scoring, predicting criminal behaviour based solely on profiling, untargeted facial image scraping, emotion recognition in workplaces and schools, biometric categorisation inferring sensitive attributes, and real-time remote biometric identification in public spaces by law enforcement.
Official Text (EUR-Lex)
1. The following AI practices shall be prohibited: (a) the placing on the market, the putting into service or the use of an AI system that deploys subliminal techniques beyond a person's consciousness or purposefully manipulative or deceptive techniques, with the objective, or the effect of materially distorting the behaviour of a person or a group of persons by appreciably impairing their ability to make an informed decision... (b) the placing on the market, the putting into service or the use of an AI system that exploits any of the vulnerabilities of a natural person or a specific group of persons due to their age, disability or a specific social or economic situation... (e) the placing on the market, the putting into service for this specific purpose, or the use of AI systems for 'real-time' remote biometric identification of natural persons in publicly accessible spaces for the purpose of law enforcement...
Key obligations
1. Stop development or deployment of any prohibited practice immediately
2. Review your AI system against all Art. 5 subcategories
3. Seek qualified legal counsel if you believe a narrow exemption may apply
4. Document your classification decision
Source
Official text from EUR-Lex — Regulation (EU) 2024/1689 (EU AI Act). This text is in the public domain.
Art. 6
Classification Rules for High-Risk AI Systems
Plain English
Article 6 sets the two pathways to being classified as 'high-risk'. Path 1 (Art. 6(1)): your AI is a safety component of a product in Annex I (machinery, medical devices, aviation, vehicles, etc.) that requires third-party conformity assessment. Path 2 (Art. 6(2)): your AI falls into one of the eight Annex III use-case categories (biometrics, critical infrastructure, education, employment, essential services, law enforcement, migration, justice). Art. 6(3) adds a narrow derogation: an Annex III system that does not pose a significant risk of harm may escape the high-risk label, but the provider must document that assessment.
Official Text (EUR-Lex)
1. Irrespective of whether an AI system is placed on the market or put into service independently from the products referred to in this paragraph, that AI system shall be considered to be high-risk where both of the following conditions are fulfilled: (a) the AI system is intended to be used as a safety component of a product, or the AI system is itself a product, covered by the Union harmonisation legislation listed in Annex I; (b) the product whose safety component pursuant to point (a) is the AI system, or the AI system itself as a product, is required to undergo a third-party conformity assessment with a view to the placing on the market or putting into service of that product pursuant to the Union harmonisation legislation listed in Annex I. 2. In addition to the high-risk AI systems referred to in paragraph 1, AI systems referred to in Annex III shall be considered to be high-risk.
Key obligations
1. Determine which pathway (Annex I or Annex III) applies to your system
2. If high-risk: comply with all Art. 9–17 requirements before market placement
3. Establish a quality management system (Art. 17)
4. Create Annex IV technical documentation (Art. 11)
5. Register in the EU AI database (Art. 49)
Art. 9
Risk Management System
Plain English
Article 9 requires all high-risk AI system providers to establish a continuous risk management process — not a one-time assessment. The risk management system must be documented and maintained throughout the system's lifecycle. It covers: (1) identifying known and foreseeable risks, including to health, safety, and fundamental rights; (2) estimating and evaluating risks from intended use AND reasonably foreseeable misuse; (3) incorporating data from post-market monitoring; (4) adopting targeted mitigation measures. The system must be updated when you learn new information about how the AI behaves in deployment. This is the foundation of your entire compliance programme.
Official Text (EUR-Lex)
1. A risk management system shall be established, implemented, documented and maintained in relation to high-risk AI systems. 2. The risk management system shall be understood as a continuous iterative process run throughout the entire lifecycle of a high-risk AI system, requiring regular systematic review and updating. It shall comprise the following steps: (a) identification and analysis of the known and the reasonably foreseeable risks that the high-risk AI system can pose to health, safety or fundamental rights when the AI system is used in accordance with its intended purpose; (b) estimation and evaluation of the risks that may emerge when the high-risk AI system is used in accordance with its intended purpose and under conditions of reasonably foreseeable misuse; (c) evaluation of other risks possibly arising, based on the analysis of data gathered from the post-market monitoring system; (d) adoption of appropriate and targeted risk management measures designed to address the risks identified. 4. The risk management measures referred to in paragraph 2, point (d) shall give due consideration to the effects and possible interactions resulting from the combined application of the requirements set out in Articles 10 to 15. They shall take into account the generally acknowledged state of the art, including as reflected in relevant harmonised standards or common specifications.
Key obligations
1. Establish a documented risk management system before placing the AI system on the market
2. Identify and analyse all known and reasonably foreseeable risks (including misuse scenarios)
3. Assess risks to health, safety, and fundamental rights
4. Define and document risk mitigation measures for each identified risk
5. Run risk management as a continuous lifecycle process — update when risks change
6. Integrate Art. 10–15 requirements into your risk management process
7. Document all risk assessments, evaluations, and mitigation decisions
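One concrete way to operationalise these obligations is a lifecycle risk register with periodic review. The schema below is an assumption for illustration — Art. 9 prescribes a process, not a data format — and the 180-day review cadence is an invented example, not a legal requirement.

```python
# Minimal risk-register sketch for an Art. 9 risk management system.
# All field names and the review interval are illustrative assumptions.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class RiskEntry:
    risk_id: str
    description: str      # known or reasonably foreseeable risk
    affects: list[str]    # e.g. ["health", "safety", "fundamental_rights"]
    source: str           # "intended_use", "foreseeable_misuse", "post_market"
    mitigation: str       # targeted measure adopted (Art. 9(2)(d))
    last_reviewed: date

@dataclass
class RiskRegister:
    entries: list[RiskEntry] = field(default_factory=list)

    def needs_review(self, today: date, max_age_days: int = 180) -> list[str]:
        """Flag entries overdue for the 'regular systematic review' Art. 9(2) requires."""
        return [e.risk_id for e in self.entries
                if (today - e.last_reviewed).days > max_age_days]

reg = RiskRegister([RiskEntry("R-001", "Biased ranking of job applicants",
                              ["fundamental_rights"], "intended_use",
                              "Per-group bias testing before each release",
                              date(2025, 1, 15))])
print(reg.needs_review(date(2025, 9, 1)))  # -> ['R-001'] (229 days old)
```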
Art. 10
Data and Data Governance
Plain English
Article 10 requires that training, validation, and testing data for high-risk AI systems meet specific quality standards. You need data governance practices covering: how data was collected, its origin and original purpose (crucial for GDPR alignment), all pre-processing steps (annotation, labelling, cleaning), assumptions about what the data measures, assessment of data quantity and suitability, examination for bias relevant to your deployment context, and identification of gaps. This article has significant overlap with GDPR — if the data includes personal data, GDPR's lawful basis, purpose limitation, and data minimisation principles apply on top.
Official Text (EUR-Lex)
2. High-risk AI systems which make use of techniques involving the training of models with data shall be developed on the basis of training, validation and testing data sets that meet the quality criteria referred to in paragraphs 3 and 4 wherever such data sets are used. 3. Training, validation and testing data sets shall be subject to appropriate data governance and management practices. Those practices shall concern in particular: (a) the relevant design choices; (b) data collection processes and the origin of data, and in the case of personal data, the original purpose of the data collection; (c) relevant data preparation processing operations, such as annotation, labelling, cleaning, enrichment and aggregation; (d) the formulation of relevant assumptions, notably with respect to the information that the data is supposed to measure and represent; (e) an assessment of the availability, quantity and suitability of the data sets that are needed; (f) examination in view of possible biases, including those that may emerge as a result of the intended geographical, contextual, behavioural or functional setting where the high-risk AI system is to be used; (g) the identification of any possible data gaps or shortcomings, and how those gaps and shortcomings are addressed.
Key obligations
1. Document the origin and collection method for all training, validation, and testing datasets
2. Record all data pre-processing steps: annotation, labelling, cleaning, aggregation
3. Examine all datasets for bias relevant to your intended geographical and contextual deployment
4. Document assumptions made about what data measures and represents
5. Assess the suitability of datasets for the intended AI system purpose
6. Identify data gaps or shortcomings and document how they are addressed
7. Align data governance with GDPR obligations if personal data is involved
8. Review training data when the system is significantly updated
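A lightweight completeness check against Art. 10(3)(a)–(g) can catch undocumented datasets early. The key names and record structure below are assumptions for this sketch; the Act lists the points to cover but mandates no particular format.

```python
# Sketch of an Art. 10(3) dataset documentation record. Keys mirror
# points (a)-(g); the dict structure itself is an illustrative assumption.

ART_10_3_POINTS = {
    "design_choices",         # (a)
    "collection_and_origin",  # (b) incl. original purpose for personal data
    "preparation_steps",      # (c) annotation, labelling, cleaning, ...
    "assumptions",            # (d) what the data is supposed to measure
    "suitability_assessment", # (e) availability, quantity, suitability
    "bias_examination",       # (f) incl. deployment-context bias
    "gaps_and_remedies",      # (g) shortcomings and how they are addressed
}

def missing_points(record: dict) -> set:
    """Return which Art. 10(3) points the record has not yet documented."""
    return {p for p in ART_10_3_POINTS if not record.get(p)}

record = {
    "collection_and_origin": "Job postings, 2020-2023; original purpose: analytics",
    "preparation_steps": "Deduplication, language filtering, manual labelling",
    "bias_examination": "Gender/age skew checked against labour-force statistics",
}
print(sorted(missing_points(record)))
```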
Art. 13
Transparency and Provision of Information to Deployers
Plain English
Article 13 requires that high-risk AI systems be delivered with comprehensive 'instructions for use' that give deployers enough information to understand what the system does, its limitations, and how to use it properly. Think of it as an AI system manual — but with regulatory teeth. The instructions must cover: the system's capabilities and limitations, the purpose and intended use, the metrics and benchmarks it was tested against, known biases or performance variations, residual risks after mitigation, and what human oversight is required. This is distinct from Annex IV technical documentation — Art. 13 is outward-facing documentation for deployers.
Official Text (EUR-Lex)
1. High-risk AI systems shall be designed and developed in such a way to ensure that their operation is sufficiently transparent to enable deployers to interpret the system's output and use it appropriately. An appropriate type and degree of transparency shall be ensured, with a view to achieving compliance with the relevant obligations of the provider and deployer set out in Chapter 3 of this Title. 2. High-risk AI systems shall be accompanied by instructions for use in an appropriate digital format or otherwise that include concise, complete, correct and clear information that is relevant, accessible and comprehensible to deployers, covering: the identity and contact details of the provider, the characteristics, capabilities and limitations of performance of the high-risk AI system, changes to the high-risk AI system and its performance, the data for which the system has been tested and validated, and the residual risk and countermeasures.
Key obligations
1. Prepare comprehensive instructions for use before placing the system on the market
2. Include the provider's identity and contact details
3. Describe the AI system's intended purpose and use cases
4. Document capabilities, limitations, and performance metrics
5. Describe the conditions under which the system was tested and validated
6. Disclose residual risks that the deployer must manage
7. Specify what human oversight measures the deployer must implement
8. Explain any known performance variations across demographic or geographic groups
9. Update the instructions for use when the system is significantly changed
Art. 14
Human Oversight
Plain English
Article 14 requires that all high-risk AI systems be designed to be effectively overseen by humans. This is not a policy obligation — it's a design requirement. The system itself must enable oversight. Specifically: humans overseeing the system must be able to understand its capabilities and limits, recognise automation bias, correctly interpret outputs, override or reject any output, and physically stop the system if needed. This directly intersects with GDPR Art. 22 (right not to be subject to purely automated decisions). For providers, this means building appropriate interfaces, audit logs, and override mechanisms. For deployers, it means assigning qualified staff and giving them actual authority to override.
Official Text (EUR-Lex)
1. High-risk AI systems shall be designed and developed in such a way, including with appropriate human-machine interface tools, that they can be effectively overseen by natural persons during the period in which the AI system is in use. 2. Human oversight shall aim at preventing or minimising the risks to health, safety or fundamental rights that may emerge when a high-risk AI system is used in accordance with its intended purpose or under conditions of reasonably foreseeable misuse. 4. The measures referred to in paragraph 3 shall enable the individuals to whom human oversight has been assigned to do the following, as appropriate and proportionate: (a) fully understand the capacities and limitations of the high-risk AI system and be able to duly monitor its operation; (b) remain aware of the possible tendency of automatically relying or over-relying on the output produced by a high-risk AI system ('automation bias'); (c) be able to correctly interpret the AI system's output, taking into account in particular the characteristics of the system and the interpretation tools and methods available; (d) be able to decide, in any particular situation, not to use the AI system or to override, ignore or reverse the output of the AI system; (e) be able to intervene on the operation of the high-risk AI system or interrupt the system through a 'stop' button or a similar procedure.
Key obligations
1. Design the AI system with human-machine interfaces that enable effective oversight
2. Ensure humans can fully understand the system's capabilities and limitations
3. Implement mechanisms for overriding or rejecting AI outputs
4. Provide a physical or virtual 'stop' capability
5. Train oversight staff on the system's operation and known limitations
6. Address automation bias in operator training materials
7. Document which decisions require human review and which may be fully automated
8. For deployers: assign qualified individuals with actual authority to override
Art. 51–56
General Purpose AI Models
Plain English
Articles 51–56 create a separate compliance track for General Purpose AI (GPAI) model providers — companies that train and release foundation models (LLMs, image generators, etc.). All GPAI providers need: technical documentation, a copyright compliance policy, and information for downstream providers. If your model has systemic risk (trained with more than 10²⁵ FLOPs, or designated by the Commission), you also need: adversarial testing, incident reporting, and cybersecurity measures. These obligations apply from 2 Aug 2025.
Official Text (EUR-Lex)
Article 51 — Classification of GPAI models as GPAI models with systemic risk

1. A GPAI model shall be classified as a GPAI model with systemic risk if it meets any of the following conditions: (a) it has high impact capabilities evaluated on the basis of appropriate technical tools and methodologies, including indicators and benchmarks; (b) based on a decision of the Commission, ex officio or following a qualified alert from the scientific panel, it has capabilities or an impact equivalent to those set out in point (a). 2. A GPAI model shall be presumed to have high impact capabilities pursuant to paragraph 1, point (a), where the cumulative amount of compute used for its training measured in floating point operations (FLOPs) is greater than 10²⁵.
Key obligations
1. Prepare and maintain technical documentation before making the model available
2. Implement a copyright compliance policy (Art. 53)
3. Provide technical information to downstream AI system providers
4. Publish a sufficiently detailed summary of the training content (Art. 53)
5. For systemic-risk models: conduct adversarial testing and report serious incidents (Art. 55)
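The 10²⁵ FLOP presumption in Art. 51(2) can be checked with back-of-envelope arithmetic. The `6 × parameters × tokens` compute estimate below is a common rule of thumb for dense transformer training, not something the Act prescribes, so treat the function as a rough screening sketch.

```python
# Sketch of the Art. 51(2) systemic-risk presumption: training compute
# above 10**25 FLOPs. The 6*N*D estimate is a conventional approximation
# for dense transformer training, used here as an assumption.

SYSTEMIC_RISK_FLOPS = 10**25  # Art. 51(2) threshold

def presumed_systemic_risk(params: float, training_tokens: float) -> bool:
    """Estimate training compute as ~6 * parameters * tokens and compare."""
    estimated_flops = 6 * params * training_tokens
    return estimated_flops > SYSTEMIC_RISK_FLOPS

# e.g. a 70e9-parameter model trained on 15e12 tokens:
# 6 * 70e9 * 15e12 = 6.3e24 FLOPs -> below the threshold
print(presumed_systemic_risk(70e9, 15e12))  # -> False
```

Even below the threshold, the Commission can still designate a model under Art. 51(1)(b), so this check is necessary but not sufficient.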
Art. 17
Quality Management System
Plain English
Article 17 requires high-risk AI providers to establish a Quality Management System (QMS) — a documented set of policies, procedures, and processes covering every aspect of how you design, develop, test, and maintain the AI system. This is analogous to ISO 9001 quality management, but specifically adapted for AI. The QMS must be a living document that integrates your Art. 9 risk management, Art. 72 post-market monitoring, and Art. 73 incident reporting into a unified governance framework. If you already have ISO 9001 or ISO/IEC 42001 certification, you can leverage those frameworks — but you'll need to add AI-specific elements. Regulators will inspect your QMS during market surveillance.
Official Text (EUR-Lex)
1. Providers of high-risk AI systems shall put in place a quality management system that ensures compliance with this Regulation. That system shall be documented in a systematic and orderly manner in the form of written policies, procedures and instructions, and shall cover at least the following aspects: (a) a strategy for regulatory compliance, including compliance with conformity assessment procedures and procedures for the management of modifications to the high-risk AI system; (b) techniques, procedures and systematic actions to be used for the design, design control and design verification of the high-risk AI system; (c) techniques, procedures and systematic actions to be used for the development, quality control and quality assurance of the high-risk AI system; (d) examination, test and validation procedures to be carried out before, during and after the development of the high-risk AI system, and the frequency with which they have to be carried out; (e) technical specifications, including standards, to be applied and, where the relevant harmonised standards are not applied in full, the means to be used to ensure that the high-risk AI system complies with the requirements set out in Chapter 2 of this Title; (f) systems and procedures for data management, including data collection, data analysis, data labelling, data storage, data filtration, data mining, data aggregation, data retention and any other operation regarding the data that is performed before and for the purpose of the placing on the market or the putting into service of high-risk AI systems; (g) the risk management system referred to in Article 9; (h) the setting-up, implementation and maintenance of a post-market monitoring system, in accordance with Article 72; (i) procedures related to the reporting of serious incidents in accordance with Article 73; (j) the handling of communication with national competent authorities, competent authorities, notified bodies, other operators, customers or other interested parties.
Key obligations
1. Establish a written QMS covering all aspects listed in Art. 17(1)(a)–(j)
2. Document your regulatory compliance strategy, including your conformity assessment approach
3. Define design control and verification procedures for AI system development
4. Document quality control procedures for development and testing
5. Define pre-, during-, and post-development testing and validation schedules
6. Document which technical standards apply and justify any deviations from harmonised standards
7. Establish data management procedures covering the full data lifecycle
8. Integrate your Art. 9 risk management system into the QMS
9. Set up post-market monitoring procedures (Art. 72) as part of the QMS
10. Document incident reporting procedures (Art. 73)
11. Keep the QMS updated throughout the system lifecycle
Art. 49
Registration in the EU AI Database
Plain English
Article 49 requires providers of high-risk Annex III AI systems to register in the EU AI Act's public database (set up and maintained by the European Commission). Deployers who are public authorities also have registration obligations for certain Annex III systems. Registration must happen before market placement or putting into service. The registration must include: provider identity and contact details, the system's intended purpose and use cases, the risk category and Annex III category, the conformity assessment procedure used, and a link to the EU declaration of conformity. This creates a public registry of high-risk AI systems in the EU — searchable by the public and by national regulators.
Official Text (EUR-Lex)
1. Before placing on the market or putting into service a high-risk AI system referred to in Annex III, with the exception of the high-risk AI systems referred to in Annex III, point 2, the provider shall register themselves and their system in the EU database referred to in Article 71. 2. Before putting into service or using a high-risk AI system referred to in Annex III, point 1, 6, 7 or 8, deployers that are public authorities, Union institutions, bodies, offices and agencies, shall register themselves and the high-risk AI systems they use in the EU database referred to in Article 71. 3. For high-risk AI systems referred to in Annex III, point 2, providers shall register themselves and their system in the EU database referred to in Article 71 only where those systems are intended to be used by public authorities.
Key obligations
1. Register before placing the high-risk AI system on the EU market or putting it into service
2. Register the provider's name and contact details in the EU AI database
3. Register the system's intended purpose, Annex III category, and risk classification
4. Include the conformity assessment procedure used (Annex VI or VII)
5. Provide a link to the EU Declaration of Conformity
6. Public authorities deploying Annex III point 1, 6, 7, or 8 systems must also register as deployers
7. Keep the registration updated when the AI system is significantly modified
8. Point 2 (critical infrastructure) systems: register only if intended for public authority use
Art. 50
Transparency Obligations for Certain AI Systems
Plain English
Article 50 creates disclosure obligations for 'limited risk' AI. If your AI talks to people: tell them it's AI. If your AI generates images, video, audio, or text: label it as AI-generated in a machine-readable format. If it does emotion recognition or biometric categorisation: notify users. These are lighter obligations than the full high-risk regime, but they are mandatory and enforceable.
Official Text (EUR-Lex)
1. Providers shall ensure that AI systems intended to interact directly with natural persons are designed and developed in such a way that the natural persons concerned are informed that they are interacting with an AI system, unless this is obvious from the circumstances and the context of use. This obligation shall not apply to AI systems authorised by law to detect, prevent, investigate or prosecute criminal offences... 2. Providers of AI systems, including GPAI model providers, generating synthetic audio, image, video or text content, shall ensure that the outputs of the AI system are marked in a machine-readable format and detectable as artificially generated or manipulated.
Key obligations
1. Chatbots and conversational AI must disclose their AI nature no later than the first interaction, unless this is obvious from the context
2. Synthetic content (deepfakes, AI-generated images/video/audio) must carry a machine-readable label
3. Emotion recognition systems must notify affected persons
4. Biometric categorisation systems must notify affected persons
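The shape of a machine-readable marker under Art. 50(2) can be sketched as a signed-over metadata record. The JSON fields below are assumptions invented for this illustration; in practice, provenance standards such as C2PA are the likely route to compliant marking, and the Act itself specifies no format.

```python
# Sketch of a machine-readable "AI-generated" marker as a JSON record.
# Illustration only: field names are assumptions, and real deployments
# would likely use a provenance standard (e.g. C2PA) rather than this.
import json

def ai_content_label(generator: str, model: str, content_sha256: str) -> str:
    """Build a machine-readable label asserting the content is AI-generated."""
    return json.dumps({
        "ai_generated": True,              # Art. 50(2): detectable as artificial
        "generator": generator,
        "model": model,
        "content_sha256": content_sha256,  # bind the label to the exact content
    }, sort_keys=True)

label = ai_content_label("ExampleCo", "imagegen-1", "ab12...")
print(json.loads(label)["ai_generated"])  # -> True
```

Binding the label to a content hash illustrates the "detectable" requirement: a marker that can be silently separated from the content it describes is of little use.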
Art. 99–101
Penalties
| Violation | Maximum fine |
|---|---|
| Prohibited AI practices (Art. 5) | €35 million or 7% of global annual turnover, whichever is higher |
| Other violations (e.g. Art. 9–49) | €15 million or 3% of global annual turnover, whichever is higher |
| Supplying incorrect information | €7.5 million or 1.5% of global annual turnover, whichever is higher |
For SMEs and startups, the lower of the fixed amount or the percentage applies. Fines are imposed by national market surveillance authorities; the Commission's AI Office enforces the GPAI obligations directly.
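The ceiling arithmetic above reduces to a max/min over two candidates. The tier names below are labels invented for this sketch; the amounts and percentages come from the table.

```python
# Sketch of the Art. 99 fine ceilings: the higher of the fixed amount or
# the turnover percentage, except for SMEs/startups, who get the lower.
# Tier names are illustrative labels, not terms from the Act.

TIERS = {  # (fixed EUR cap, share of global annual turnover)
    "prohibited_practice":   (35_000_000, 0.07),   # Art. 5 violations
    "other_violation":       (15_000_000, 0.03),   # e.g. Art. 9-49 obligations
    "incorrect_information": (7_500_000, 0.015),
}

def max_fine(tier: str, annual_turnover_eur: float, is_sme: bool = False) -> float:
    fixed, pct = TIERS[tier]
    pick = min if is_sme else max   # SMEs/startups face the lower ceiling
    return pick(fixed, pct * annual_turnover_eur)

# Large firm, EUR 1bn turnover, prohibited practice: 7% = EUR 70m > EUR 35m
print(max_fine("prohibited_practice", 1_000_000_000))  # -> 70000000.0
```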
Classify your AI system now
Use our free Risk Classifier to determine which obligations apply to you.
Start Risk Classification →