
EU AI Act Definitions Glossary

Authoritative definitions from Article 3 of the EU AI Act (Regulation 2024/1689), with plain-English explanations and practical notes for compliance professionals. 26 terms.

AI system

Art. 3(1)

A machine-based system designed to operate with varying levels of autonomy, that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.

Practical note

This is the core definition. The key elements are: machine-based, operates with some level of autonomy, infers from its inputs how to generate outputs (predictions, content, recommendations, decisions), and those outputs can influence physical or virtual environments. Purely rule-based systems with no inference or learning may fall outside this definition.

General Purpose AI (GPAI) model

Art. 3(63)

An AI model, including where such an AI model is trained with a large amount of data using self-supervision at scale, that displays significant generality and is capable of competently performing a wide range of distinct tasks regardless of the way the model is placed on the market.

Practical note

Covers foundation models (LLMs, image generators, multimodal models). The key criteria are: trained on large data, self-supervised learning, significant generality, wide range of tasks. Note: a GPAI model is not the same as a GPAI system.

High-risk AI system

Art. 6 + Annex III

An AI system classified as high-risk under Article 6(1) (safety component of an Annex I product) or Article 6(2) (falling into an Annex III use-case category). High-risk AI systems are subject to the full set of obligations under Chapter III of the AI Act.

Practical note

The two pathways are: (1) the system is a safety component of a product covered by Annex I legislation that requires third-party conformity assessment, or (2) it falls into one of the eight Annex III categories covering biometrics, critical infrastructure, education, employment, essential services, law enforcement, migration, and justice. A sketch of this two-pathway test follows below.
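
For teams building screening tooling, the pathway logic can be expressed as a small decision function. This is an illustrative sketch only: the enum values below paraphrase the Annex III categories rather than quote them, and real classification must also account for the exemptions in Art. 6(3), which require legal analysis.

    # Illustrative sketch of the Art. 6 two-pathway test. Category names are
    # paraphrases of Annex III, not official identifiers, and the Art. 6(3)
    # exemptions are deliberately ignored here.
    from enum import Enum, auto
    from typing import Optional

    class AnnexIIICategory(Enum):
        BIOMETRICS = auto()
        CRITICAL_INFRASTRUCTURE = auto()
        EDUCATION = auto()
        EMPLOYMENT = auto()
        ESSENTIAL_SERVICES = auto()
        LAW_ENFORCEMENT = auto()
        MIGRATION = auto()
        JUSTICE = auto()

    def is_high_risk(annex_i_safety_component: bool,
                     third_party_assessment_required: bool,
                     annex_iii_category: Optional[AnnexIIICategory]) -> bool:
        # Pathway 1, Art. 6(1): safety component of an Annex I product that
        # itself requires third-party conformity assessment.
        if annex_i_safety_component and third_party_assessment_required:
            return True
        # Pathway 2, Art. 6(2): the system falls within an Annex III use case.
        return annex_iii_category is not None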

Provider

Art. 3(3)

A natural or legal person, public authority, agency or other body that develops an AI system or a general-purpose AI model or that has an AI system or a general-purpose AI model developed and places it on the market or puts it into service under its own name or trademark, whether for payment or free of charge.

Practical note

Providers face the heaviest obligations under the AI Act. You are a provider if you develop and sell/distribute an AI system, even if development was outsourced. The key tests: does it carry your name/trademark, and did you place it on the market?

Deployer

Art. 3(4)

A natural or legal person, public authority, agency or other body using an AI system under its authority except where the AI system is used in the course of a personal non-professional activity.

Practical note

Deployers use AI systems provided by others in the context of their professional activity. They have their own set of obligations under Art. 26, particularly around using systems in accordance with the provider's instructions and implementing human oversight, plus, where Art. 27 applies, fundamental rights impact assessments.

Importer

Art. 3(6)

A natural or legal person located or established in the Union that places on the market an AI system that bears the name or trademark of a natural or legal person established in a third country.

Practical note

If you are an EU company bringing a non-EU AI system to market under the foreign provider's name, you are an importer. Importers must verify the provider's AI Act compliance before placing the system on the EU market.

Distributor

Art. 3(7)

A natural or legal person in the supply chain, other than the provider or the importer, that makes an AI system available on the Union market.

Practical note

Distributors face lighter obligations than providers/importers but must check CE marking and declarations of conformity before distribution, and cannot distribute non-compliant systems.

Conformity assessment

Art. 3(20)

The process of demonstrating whether the requirements set out in Chapter III, Section 2 of this Regulation relating to a high-risk AI system have been fulfilled.

Practical note

For most Annex III high-risk AI systems, this is a self-assessment (internal control under Annex VI). For biometric identification systems and some Annex I products, third-party notified bodies are required. All assessments result in a Declaration of Conformity and CE marking.

Placing on the market

Art. 3(9)

The first making available of an AI system or a general-purpose AI model on the Union market.

Practical note

This triggers compliance obligations. Note that making an AI system available outside the EU but accessible inside (e.g., via the internet to EU users) may also qualify as placing it on the market under the extraterritorial scope of Art. 2.

Systemic risk

Art. 3(65)

A risk that is specific to the high-impact capabilities of general-purpose AI models, having a significant impact on the Union market due to their reach, or due to actual or reasonably foreseeable negative effects on public health, safety, public security, fundamental rights, or the society as a whole, that can be propagated at scale across the value chain.

Practical note

Systemic risk is the basis for enhanced GPAI obligations under Art. 55. It is presumed to exist when cumulative training compute exceeds 10²⁵ FLOPs (Art. 51(2)), and the Commission can also designate models as presenting systemic risk on other grounds. A rough arithmetic check follows below.
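
As a back-of-the-envelope check: a commonly used heuristic estimates dense-transformer training compute as roughly 6 × parameters × training tokens. That heuristic is a community convention, not something the Act prescribes, so treat the sketch below as indicative only.

    # Rough check against the Art. 51(2) presumption threshold. The 6 * N * D
    # approximation is a community heuristic for dense transformers, not an
    # AI Act rule.
    SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25

    def estimate_training_flops(n_parameters: float, n_tokens: float) -> float:
        return 6.0 * n_parameters * n_tokens

    flops = estimate_training_flops(n_parameters=70e9, n_tokens=15e12)
    print(f"~{flops:.1e} FLOPs; presumed systemic risk: "
          f"{flops > SYSTEMIC_RISK_THRESHOLD_FLOPS}")
    # ~6.3e+24 FLOPs; presumed systemic risk: False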

Reasonably foreseeable misuse

Art. 3(13)

The use of an AI system in a way that is not in accordance with its intended purpose, but which may result from reasonably foreseeable human behaviour or interaction with other systems, including other AI systems.

Practical note

Providers must account for foreseeable misuse in their risk management system (Art. 9) and technical documentation. Courts and regulators will assess whether foreseeable misuse was adequately considered.

Serious incident

Art. 3(49)

An incident or malfunctioning of an AI system that directly or indirectly leads to any of the following: (a) the death of a person, or serious harm to a person's health; (b) a serious and irreversible disruption of the management or operation of critical infrastructure; (c) the infringement of obligations under Union law intended to protect fundamental rights; (d) serious harm to property or the environment.

Practical note

Serious incidents involving high-risk AI systems must be reported to market surveillance authorities under Art. 73. For GPAI systemic risk models, serious incidents must be reported to the AI Office under Art. 55.

Technical documentation

Art. 11 + Annex IV

Documentation covering all aspects of the AI system required to demonstrate compliance with the requirements of this Regulation, kept by the provider throughout the lifetime of the AI system.

Practical note

The nine sections of Annex IV cover: general description; elements of the AI system and its development process; monitoring, functioning and control information; appropriateness of performance metrics; the risk management system; description of changes over the lifecycle; harmonised standards or other solutions applied; the EU declaration of conformity; and the post-market performance evaluation plan. Use the Document Generator to create your Annex IV file. A checklist sketch follows below.
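
A compliance team might track coverage of these sections with a simple checklist structure. A minimal sketch, with field names that paraphrase Annex IV rather than quote its headings:

    # Checklist sketch for the nine Annex IV sections. Field names are
    # paraphrases, not official headings.
    from dataclasses import dataclass, fields
    from typing import Optional

    @dataclass
    class AnnexIVFile:
        general_description: Optional[str] = None
        elements_and_development_process: Optional[str] = None
        monitoring_functioning_control: Optional[str] = None
        performance_metrics: Optional[str] = None
        risk_management_summary: Optional[str] = None
        lifecycle_changes: Optional[str] = None
        standards_and_solutions_applied: Optional[str] = None
        eu_declaration_of_conformity: Optional[str] = None
        post_market_evaluation_plan: Optional[str] = None

        def missing_sections(self) -> list:
            # Report any section still left to draft.
            return [f.name for f in fields(self) if getattr(self, f.name) is None]

Calling missing_sections() on a partially completed file returns the sections still to be drafted, which makes gaps visible before a conformity assessment.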

Quality management system (QMS)

Art. 17

A documented system required of high-risk AI system providers covering: compliance strategies, design and development procedures, risk management, data management, post-market monitoring, incident reporting, and staff competence.

Practical note

Art. 17(1) sets out 13 mandatory elements for the QMS, in points (a) to (m). Companies with an existing ISO 9001 quality management system can build on it, but must extend it to cover the AI-specific requirements.

Post-market monitoring

Art. 72

A proactive system established and documented by providers of high-risk AI systems to collect and review experience gained from using AI systems after they are placed on the market, and to identify any need for corrective action.

Practical note

Post-market monitoring must include a plan developed before deployment, data collection mechanisms, review intervals, and a feedback loop into the risk management system. It runs throughout the system's operational lifetime.

Human oversight

Art. 14

Measures built into high-risk AI systems enabling natural persons to effectively oversee the AI system during the period of use, including the ability to understand the system's capabilities and limitations, monitor its operation, detect anomalies, and intervene or interrupt the system.

Practical note

Art. 14 requires specific technical capabilities: the ability to understand outputs in context, interpret them, identify malfunctions, and interrupt/override the system via a 'stop' button. Human oversight is one of the most operationally challenging requirements.

Fundamental rights impact assessment (FRIA)

Art. 27

An assessment required of deployers of high-risk AI systems in certain public and private contexts, covering the impact of the AI system on fundamental rights, including dignity, non-discrimination, privacy, and freedom of expression.

Practical note

FRIAs are required by deployers (not providers) when deploying certain Annex III systems. The assessment must be notified to market surveillance authorities before deployment in high-risk contexts.

EU AI database

Art. 49 + Art. 71

A public database managed by the European Commission where providers of high-risk AI systems must register their systems before placing them on the market. Deployers of certain high-risk AI systems in sensitive areas must also register.

Practical note

Registration in the EU AI database is a prerequisite for market placement. The database is publicly searchable, increasing transparency. Providers must update registration when systems are substantially modified.

Notified body

Art. 3(22)

A conformity assessment body that has been notified by a member state to carry out third-party conformity assessments for high-risk AI systems where required.

Practical note

Third-party assessment by a notified body is required for biometric systems under Annex III point 1 where harmonised standards have not been fully applied, and for AI safety components of Annex I products where the sectoral legislation requires third-party assessment. Most Annex III systems can use self-assessment.

CE marking

Art. 48

The marking affixed to a high-risk AI system to indicate its conformity with the requirements of the EU AI Act and any other applicable Union legislation providing for its affixing.

Practical note

CE marking on an AI system indicates that a conformity assessment has been completed, a declaration of conformity has been drawn up, and all applicable requirements have been met. It is a legal statement, not a quality mark.

Market surveillance authority

Art. 3(26)

A national authority responsible for carrying out market surveillance in the territory of a member state in accordance with the EU AI Act.

Practical note

Each EU member state designates at least one market surveillance authority for the AI Act. They have powers to request documentation, conduct audits, order systems to be withdrawn, and impose penalties.

AI Office

Art. 64

The EU AI Office established within the European Commission, responsible for: overseeing GPAI model compliance, maintaining the EU AI database, facilitating codes of practice, and conducting market surveillance of GPAI models.

Practical note

The AI Office is distinct from national market surveillance authorities. It has EU-level oversight authority over GPAI models and coordinates with national authorities for AI system oversight.

Adversarial testing / red-teaming

Art. 55(1)(a)

Evaluation methodologies used by GPAI systemic risk model providers to identify and mitigate systemic risks, conducted in accordance with state-of-the-art approaches.

Practical note

Adversarial testing goes beyond standard evaluation benchmarks to actively probe model vulnerabilities, misuse potential, and failure modes. The GPAI Code of Practice, facilitated by the AI Office, provides detailed guidance on acceptable adversarial testing methodologies.

Intended purpose

Art. 3(12)

The use for which an AI system is intended by the provider, including the specific context and conditions of use, as specified in the information supplied by the provider in the instructions for use, promotional or sales materials and statements.

Practical note

The intended purpose defines the scope of risk assessment and obligation. Narrow intended purpose definitions may exclude certain high-risk use cases — but regulators will scrutinise whether the actual use matches the stated intended purpose.

Remote biometric identification system

Art. 3(41)

An AI system for the purpose of identifying natural persons, without their active involvement, typically at a distance, through the comparison of a person's biometric data with the biometric data contained in a reference database.

Practical note

'Without their active involvement, typically at a distance' distinguishes remote identification from verification, where the person actively participates. Real-time remote biometric identification in publicly accessible spaces is prohibited under Art. 5 except for narrow law enforcement exceptions; 'post' (non-real-time) remote identification is high-risk.

Transparency obligation

Art. 50

The obligation on providers of AI systems that interact with natural persons to inform those persons that they are interacting with an AI, and on providers of AI-generated content systems to label that content as artificially generated.

Practical note

Applies to: chatbots/conversational AI (must disclose AI nature at interaction start), synthetic content generators (must label outputs machine-readably), emotion recognition systems, biometric categorisation systems (must notify users). These are 'limited risk' obligations.

Now classify your AI system

Use the Risk Classifier to map your system against these definitions.
