
Annex III Deep-Dive

Annex III lists the eight categories of AI systems classified as high-risk under Art. 6(2) of the EU AI Act. Systems within these categories must comply with the full Art. 9–17 obligation stack: risk management, data governance, technical documentation, record-keeping, transparency, human oversight, and accuracy, robustness, and cybersecurity. This page covers each category's legal text, practical examples, borderline cases, and compliance implications.

How Annex III classification works — Art. 6(2) and Art. 6(3)

Under Art. 6(2), an AI system listed in Annex III is classified as high-risk by default. Art. 6(3) provides a derogation: the system is not considered high-risk if it does not pose a significant risk of harm to health, safety, or fundamental rights, including by not materially influencing the outcome of decision-making. Art. 6(3) sets out the conditions under which that derogation can apply: the system performs a narrow procedural task; it improves the result of a previously completed human activity; it detects decision-making patterns or deviations from prior decision-making patterns without replacing or influencing the human assessment; or it performs a task preparatory to an Annex III assessment. A system that performs profiling of natural persons is always high-risk, regardless of these conditions.

Art. 6(3) is a narrow exception, not a broad carve-out. The burden is on the provider to demonstrate that the conditions are met, and regulators will scrutinise Art. 6(3) claims. Providers relying on Art. 6(3) must document their assessment before placing the system on the market, register it in the EU AI database, and provide the documentation to the national competent authority on request.
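
A minimal sketch of this decision flow in Python, assuming hypothetical helper names (Art63Assessment, is_high_risk) chosen purely for illustration; it is an aid to analysis, not a legal determination:

```python
from dataclasses import dataclass

# Art. 6(3), first subparagraph: conditions under which an Annex III system is not
# considered high-risk. Field names are illustrative paraphrases, not statutory text.
@dataclass
class Art63Assessment:
    narrow_procedural_task: bool = False
    improves_prior_human_activity: bool = False
    detects_patterns_without_replacing_human_review: bool = False
    preparatory_task_only: bool = False
    performs_profiling: bool = False  # Art. 6(3), second subparagraph

def is_high_risk(matches_annex_iii: bool, assessment: Art63Assessment | None) -> bool:
    """Art. 6(2) default classification with the Art. 6(3) derogation (simplified)."""
    if not matches_annex_iii:
        return False  # the Annex I / Art. 6(1) route is not modelled here
    if assessment is None:
        return True   # no documented Art. 6(3) assessment: high-risk by default
    if assessment.performs_profiling:
        return True   # profiling of natural persons is always high-risk
    derogation_applies = any([
        assessment.narrow_procedural_task,
        assessment.improves_prior_human_activity,
        assessment.detects_patterns_without_replacing_human_review,
        assessment.preparatory_task_only,
    ])
    return not derogation_applies
```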

Delegated acts may expand Annex III. Art. 7 empowers the Commission to adopt delegated acts adding or modifying high-risk use cases within the Annex III areas as AI capabilities and deployment patterns evolve. The AI Office is actively monitoring emerging high-risk use cases, and compliance programmes should monitor its work programme.


Annex III, Category 1

Biometric identification and categorisation of natural persons

Official Annex III text

(a) AI systems intended to be used for the real-time and post remote biometric identification of natural persons; (b) AI systems intended to be used for the categorisation of natural persons based on their biometric data to deduce or infer their race, political opinions, trade union membership, religious or philosophical beliefs, sexual orientation or health; (c) AI systems intended to be used for the inference of emotions of natural persons in the context of workplace or educational settings.

Plain English

This category covers three distinct types of system: (a) AI that identifies who a person is from biometric data (facial recognition, gait analysis, iris scanning) in real-time or from recorded footage; (b) AI that categorises people by inferred characteristics from biometric data — race, political views, health, sexuality; and (c) AI that infers emotional states from biometric data in workplaces or schools. Note that real-time remote biometric identification in publicly accessible spaces for law enforcement purposes is separately addressed under Art. 5 as a prohibited practice (with narrow exceptions), not merely high-risk.

Clearly in scope — examples

  • Facial recognition systems identifying individuals from CCTV footage (post-remote)
  • AI systems used in border control or airport boarding processes to verify identity
  • Workplace emotion recognition software analysing facial expressions for mood or engagement
  • AI inferring health conditions from facial scans for insurance or employment purposes
  • Gait analysis AI identifying individuals in public spaces

Borderline cases — how to assess

  • Biometric authentication systems (verifying you are who you claim to be, not identifying you from a crowd) may fall outside cat. 1(a) — but legal advice is warranted for each deployment context
  • Aggregate crowd analytics that detect patterns without identifying individuals — typically outside scope, but depends on technical method
  • Emotion detection used for product testing / market research rather than workplace assessment — requires analysis of whether it falls within cat. 1(c) scope
  • Medical imaging AI that identifies disease markers from scans — primarily an MDR/Annex I pathway question, not cat. 1

Key obligations triggered

  1. Full Art. 9–17 high-risk obligation stack applies
  2. Registration in EU AI database (Art. 49) — public registration mandatory for cat. 1 systems
  3. Cat. 1(a) real-time remote biometric identification in public spaces for law enforcement: prohibited under Art. 5 with narrow exceptions (requires prior judicial/administrative authorisation)
  4. Art. 26(11): deployers must inform natural persons that they are subject to biometric identification (unless a law enforcement context justifies an exception)
  5. GDPR Art. 9 compliance required — biometric data is special category data requiring an Art. 9(2) basis
Art. 26 — Deployer obligations

Deployers of biometric identification systems must conduct a fundamental rights impact assessment (FRIA, Art. 27) if they are public authorities. Deployers must notify persons subject to biometric identification, maintain logs of use, and implement human oversight procedures enabling review of identification outcomes.
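
The obligation stack is broadly similar across the eight categories, with category-specific additions layered on top. A hypothetical sketch of how a compliance inventory might encode that mapping; the category keys and obligation labels below are illustrative paraphrases, not statutory wording:

```python
# Baseline obligations for any Annex III high-risk system, plus illustrative
# category-specific additions. Labels are shorthand paraphrases of the Act.
BASELINE_OBLIGATIONS = [
    "Art. 9 risk management system",
    "Art. 10 data and data governance",
    "Art. 11 technical documentation",
    "Art. 12 record-keeping / logging",
    "Art. 13 transparency and instructions for use",
    "Art. 14 human oversight",
    "Art. 15 accuracy, robustness and cybersecurity",
    "Art. 49 registration in the EU AI database",
    "Art. 72 post-market monitoring",
]

CATEGORY_SPECIFIC = {
    "cat_1_biometrics": [
        "Art. 5 check: real-time remote biometric identification for law enforcement may be prohibited",
        "Art. 26(11) notification of persons subject to biometric identification",
        "GDPR Art. 9 basis for special category data",
    ],
    "cat_5_essential_services": [
        "Art. 27 FRIA for public bodies and for 5(b)/5(c) private deployers",
        "GDPR Art. 22 safeguards for automated decisions",
    ],
}

def obligations_for(category_key: str) -> list[str]:
    """Return the baseline stack plus any category-specific additions."""
    return BASELINE_OBLIGATIONS + CATEGORY_SPECIFIC.get(category_key, [])
```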


Annex III, Category 2

Management and operation of critical infrastructure

Official Annex III text

AI systems intended to be used as safety components in the management and operation of critical digital infrastructure, road traffic, or the supply of water, gas, heating or electricity.

Plain English

AI systems that function as safety components in critical infrastructure sectors are high-risk. The key term is 'safety component' — not all AI used in infrastructure, but specifically AI whose failure or misperformance could compromise safety. This includes AI that directly controls or monitors systems whose failure could cause widespread harm. The infrastructure types listed (digital infrastructure, road traffic, water, gas, heating, electricity) broadly align with the sectors addressed by EU critical infrastructure legislation such as the CER Directive (EU) 2022/2557 and the NIS2 Directive.
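
A minimal screening sketch for the 'safety component' question described above, using hypothetical field names; it is an aid to triage, not the legal test:

```python
from dataclasses import dataclass

@dataclass
class InfrastructureAIProfile:
    sector_listed_in_point_2: bool               # critical digital infrastructure, road traffic, water, gas, heating, electricity
    controls_or_triggers_automatic_action: bool  # e.g. switching, shutdown, dispatch, signalling
    failure_could_endanger_health_or_safety: bool
    passive_monitoring_only: bool                # dashboards / reporting with no automated response

def likely_safety_component(p: InfrastructureAIProfile) -> bool:
    """Rough screening for Annex III point 2: is the AI plausibly a 'safety component'?"""
    if not p.sector_listed_in_point_2:
        return False
    if p.passive_monitoring_only:
        return False  # informational tools are likely outside scope (see borderline cases below)
    return p.controls_or_triggers_automatic_action and p.failure_could_endanger_health_or_safety
```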

Clearly in scope — examples

  • AI controlling traffic light systems or dynamic speed limits on major road networks
  • AI monitoring pressure and flow in water treatment or gas distribution networks, with automated responses
  • AI-based anomaly detection in electricity grid management used to prevent cascading failures
  • AI managing cybersecurity threat response in critical digital infrastructure with autonomous remediation capability
  • AI for predictive maintenance in nuclear or gas power stations where failure prediction triggers safety shutdowns

Borderline cases — how to assess

  • AI used for planning and optimisation of infrastructure (routing, scheduling) without real-time safety control — may not be a 'safety component'
  • AI dashboards providing operators with information but not taking or triggering automatic actions — passive monitoring tools are likely outside scope
  • Cybersecurity AI used for threat intelligence and reporting (not automated response) — typically outside scope
  • AI used in general corporate IT systems of infrastructure operators (not the infrastructure systems themselves) — outside scope

Key obligations triggered

  1. Full Art. 9–17 high-risk obligation stack applies
  2. Particularly strong emphasis on Art. 9 risk management — infrastructure failures have cascading consequences
  3. Art. 15 accuracy, robustness, and cybersecurity requirements are critical — systems must perform reliably under adverse conditions
  4. Integration with NIS2 cybersecurity risk management obligations for operators of essential services
  5. Post-market monitoring (Art. 72) must include monitoring for performance degradation that could compromise safety
Art. 26 — Deployer obligations

Infrastructure operators using these AI systems are deployers under Art. 26 and must implement human oversight enabling operators to intervene and disable automated responses. The scale of potential harm justifies robust override and manual takeover procedures. Most critical infrastructure operators are also subject to NIS2 — integrate AI Act post-market monitoring with NIS2 incident reporting obligations.


Annex III, Category 3

Education and vocational training

Official Annex III text

(a) AI systems intended to be used for the purpose of determining access or assigning natural persons to educational and vocational training institutions; (b) AI systems intended to be used for evaluating learning outcomes of natural persons in educational and vocational training institutions, including where those outcomes are used to steer the learning process of those persons; (c) AI systems intended to be used for the purpose of assessing the appropriate level of education for a person and materially influencing the level of education and training that person will receive; (d) AI systems intended to be used for monitoring and detecting prohibited behaviour of students during tests in educational and vocational training institutions.

Plain English

This category addresses four scenarios where AI can significantly affect a person's educational path: (a) AI controlling admissions to schools, universities, or vocational programmes; (b) AI assessing or grading student performance, including adaptive learning systems that alter what a student is taught based on performance; (c) AI that assesses educational level and thereby determines what level of education someone receives; and (d) AI used for exam proctoring — monitoring students for cheating. All four can have significant and lasting effects on a person's opportunities and future life trajectory.

Clearly in scope — examples

  • University admissions AI that screens or ranks applications
  • Automated exam grading systems whose outputs determine pass/fail outcomes
  • Adaptive learning platforms that direct students to different content tracks based on assessed performance
  • Remote proctoring AI that monitors webcam footage, eye movement, or keystrokes during exams
  • AI systems used by vocational training programmes to determine trainee progression

Borderline cases — how to assess

  • AI providing personalised study recommendations without affecting assessment outcomes or formal progression — likely limited risk
  • Plagiarism detection software that flags work for human review (with no automated outcome) — human oversight present, may reduce classification
  • Language learning apps using AI for practice feedback but not formal assessment — likely minimal risk
  • AI analytics dashboards used by teachers to understand class-level performance trends, not individual outcomes — likely outside scope

Key obligations triggered

  1. Full Art. 9–17 obligations apply
  2. Art. 14 human oversight is particularly important in educational settings — teachers and academic staff must be able to review and override AI assessments
  3. Art. 13 instructions for use must explain assessment methodology, accuracy metrics, and known limitations
  4. Art. 26(11): students must be informed when AI systems are used for their assessment or admissions
  5. GDPR: educational AI typically processes children's data — enhanced protections apply, and impact assessments must address GDPR Art. 8 and national-law age-of-consent rules
Art. 26 — Deployer obligations

Educational institutions deploying AI for admissions or assessment are deployers under Art. 26. They must use systems in accordance with provider instructions, implement human oversight (academic staff review), inform students, and maintain logs. Remote proctoring AI is particularly sensitive: it processes biometric data (facial recognition) creating overlap with cat. 1 and requiring an Art. 9 GDPR basis for biometric data processing.


Annex III, Category 4

Employment, worker management, and self-employment

Official Annex III text

(a) AI systems intended to be used for the recruitment or selection of natural persons, notably to advertise vacancies, screen or filter applications, evaluate candidates in the course of interviews or tests; (b) AI systems intended to be used to make decisions affecting the terms and conditions of employment, the promotion or termination of work-related contractual relationships, to allocate tasks based on individual behaviour or personal traits or characteristics, or to monitor and evaluate performance and behaviour of persons in such a relationship.

Plain English

Category 4 covers the full employment lifecycle: AI in hiring (advertising jobs, screening CVs, video interview analysis) and AI managing existing employees (setting conditions, promotions, dismissals, task allocation, performance monitoring). The breadth is intentional — employment decisions have profound effects on financial security, professional development, and dignity. This category reflects regulatory concern about algorithmic management and gig economy AI that exerts effective control over workers without human managerial discretion.

Clearly in scope — examples

  • CV screening AI that filters or ranks applications before a human reviews them
  • Video interview AI that analyses facial expressions, tone, or language to score candidates
  • AI that determines shift allocation or task assignment for gig workers (Uber, Deliveroo, Amazon Flex equivalent systems)
  • Performance monitoring AI that feeds into disciplinary processes or bonus/promotion decisions
  • AI that flags employees for termination review based on productivity metrics
  • Automated reference checking or background screening systems

Borderline cases — how to assess

  • AI that aggregates productivity data for manager review, where the manager makes all decisions — the AI's role as an 'input' rather than a decision-maker matters, but must be genuinely human-controlled
  • AI recommending candidates for interview (rather than filtering them out) — arguable, but Art. 6(3) clarification may resolve
  • Chatbots used in initial candidate information-gathering stages, before any evaluation
  • AI scheduling systems that optimise shift patterns based on business needs without profiling individual workers

Key obligations triggered

  • 1Full Art. 9–17 obligations apply
  • 2Art. 14 human oversight: HR professionals or line managers must be able to override AI decisions, and must be trained to do so meaningfully
  • 3GDPR Art. 22: employment decisions based significantly on automated processing require a lawful basis and information to workers about the logic
  • 4Art. 26(4): workers must be informed when AI is used to make decisions affecting them
  • 5EU Platform Work Directive (when adopted): gig economy AI management obligations will create additional requirements that overlap with cat. 4
Art. 26 — Deployer obligations

Employers using third-party HR AI tools (ATS systems, performance monitoring platforms) are deployers under Art. 26. They cannot delegate compliance to the vendor. They must ensure human HR oversight is genuine, inform workers about AI use, and retain logs. Gig economy operators using AI to allocate work and manage contractors bear the same obligations — the platform work context does not reduce the obligation.


Annex III, Category 5

Access to and enjoyment of essential private services and public services and benefits

Official Annex III text

(a) AI systems intended to be used by or on behalf of public authorities or by Union institutions, agencies, offices and bodies to evaluate the eligibility of natural persons for essential public benefits and services, including healthcare services, housing, and child protection; (b) AI systems intended to be used to evaluate the creditworthiness of natural persons or establish their credit score, with the exception of AI systems used for the purpose of detecting financial fraud; (c) AI systems intended to be used for risk assessment and pricing in relation to natural persons in the case of life and health insurance; (d) AI systems intended to be used to evaluate and classify emergency calls by natural persons or to dispatch or establish priority in the sending of emergency response, including by police, firefighters and medical aid.

Plain English

Category 5 is the broadest and most commercially significant high-risk category. It covers: (a) AI used by government to determine who gets welfare, housing, or child protection services; (b) credit scoring and creditworthiness assessment (the most explicitly named financial services use case); (c) insurance pricing for life and health; and (d) AI that classifies 999/112 emergency calls or determines dispatch priority. These systems share a common characteristic: their outputs determine whether natural persons can access things that are fundamental to their welfare — money, shelter, health coverage, emergency assistance.

Clearly in scope — examples

  • Credit scoring models used in lending decisions (mortgages, personal loans, credit cards)
  • Insurance underwriting AI for life, health, or critical illness products
  • Government benefit eligibility AI (universal credit, housing benefit, social care allocation)
  • Emergency call classification AI used by police, fire, and ambulance services
  • Child protection risk assessment AI used by local authorities
  • Housing allocation AI used by social landlords or local authorities

Borderline cases — how to assess

  • Fraud detection AI — explicitly carved out of cat. 5(b) — but note that fraud systems that result in account freezing or access denial may re-enter scope via the general cat. 5(a) or (b) logic (see the sketch after this list)
  • AI providing affordability guidance to consumers without generating a credit score — depends on how directly outputs influence credit decisions
  • Commercial insurance pricing for property/casualty (not life or health) — outside cat. 5(c) but may attract other obligations
  • Emergency response AI used for internal resource planning rather than individual call prioritisation
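
The cat. 5(b) scoping logic, including the fraud-detection carve-out noted above, can be expressed as a small screening check. A hypothetical sketch with illustrative field names; the re-entry question for fraud systems that block access still needs case-by-case legal review:

```python
from dataclasses import dataclass

@dataclass
class FinancialAIProfile:
    evaluates_creditworthiness_or_credit_score: bool
    sole_purpose_is_fraud_detection: bool
    output_can_deny_or_freeze_access_to_services: bool

def cat_5b_screening(p: FinancialAIProfile) -> str:
    """Rough screening for Annex III point 5(b), including the fraud-detection carve-out."""
    if not p.evaluates_creditworthiness_or_credit_score:
        return "outside cat. 5(b): no creditworthiness evaluation or credit scoring"
    if p.sole_purpose_is_fraud_detection:
        if p.output_can_deny_or_freeze_access_to_services:
            return "carve-out claimed, but access-denial effects warrant legal review"
        return "likely outside cat. 5(b): fraud-detection carve-out applies"
    return "likely within cat. 5(b): presume the full high-risk obligation stack"
```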

Key obligations triggered

  1. Full Art. 9–17 obligations — this is the most heavily obligation-laden category for commercial operators
  2. Art. 27: a Fundamental Rights Impact Assessment (FRIA) is required of deployers that are bodies governed by public law and of private deployers of cat. 5(b) credit scoring and cat. 5(c) life/health insurance systems
  3. Art. 26(11): persons must be informed they are subject to AI decision-making affecting their access to services
  4. GDPR Art. 22: automated credit and insurance decisions require a lawful basis, right to contest, and explanation of logic
  5. Public authorities using cat. 5(a) systems must register them in the EU AI database publicly
Art. 26 — Deployer obligations

This category creates the most direct financial services and public sector exposure. Credit institutions and insurers deploying cat. 5(b) and 5(c) systems have the additional FRIA obligation under Art. 27. Public authorities deploying cat. 5(a) systems must treat them as public AI — registration in the public EU AI database is mandatory, and public transparency expectations are high. Emergency services using cat. 5(d) AI must maintain human oversight that allows dispatchers to override algorithmic priority recommendations.


Annex III, Category 6

Law enforcement

Official Annex III text

(a) AI systems intended to be used by or on behalf of competent authorities, or by Union institutions, agencies, offices or bodies in support of competent authorities, to assess the risk of a natural person becoming a victim of criminal offences; (b) AI systems intended to be used by or on behalf of competent authorities as polygraphs and similar tools; (c) AI systems intended to be used by or on behalf of competent authorities to assess the reliability of evidence in the course of investigation or prosecution of criminal offences; (d) AI systems intended to be used by or on behalf of competent authorities for assessing the risk of a natural person offending or reoffending, or for assessing personality traits and characteristics or past criminal behaviour of natural persons or groups; (e) AI systems intended to be used in the course of detection, investigation, and prosecution of criminal offences, for the profiling of natural persons as referred to in Art. 3(4) of Directive (EU) 2016/680 [the Law Enforcement Directive]; (f) AI systems intended to be used for crime analytics regarding natural persons, allowing law enforcement authorities to search, analyse, and cross-link large amounts of data.

Plain English

Category 6 covers the full range of AI uses by law enforcement and the justice system, provided the AI is used by or on behalf of competent (law enforcement) authorities. The category covers: predicting who might become a crime victim; polygraph-type systems; evidence reliability assessment; recidivism risk scoring; criminal profiling; and crime analytics. These are high-risk because law enforcement AI errors have serious consequences — wrongful arrest, discriminatory policing, unjust prosecution — and because the power imbalance between state and individual is at its greatest in this context.

Clearly in scope — examples

  • Predictive policing algorithms that generate risk scores for geographic areas or individuals
  • AI-powered facial recognition used by police to identify suspects
  • Recidivism risk assessment tools (like COMPAS equivalents) used in sentencing or parole decisions
  • AI cross-referencing databases (criminal records, biometric databases, communications data) to identify suspects
  • AI assessing the reliability or authenticity of digital evidence (images, audio, video)
  • AI analysing social media or open-source intelligence at scale for criminal investigation

Borderline cases — how to assess

  • Crime analytics tools used for strategic planning (not targeting individuals) may be outside scope — depends on whether natural persons are profiled
  • AI tools available to the public but incidentally used by police — the 'by or on behalf of competent authorities' test must be met
  • AI used by private security companies (not competent authorities) — typically outside cat. 6 scope, though deployer/provider relationships still apply

Key obligations triggered

  1. Full Art. 9–17 obligations apply — with particularly high scrutiny given fundamental rights implications
  2. Art. 5(1)(c), (d) and (h): certain law enforcement AI uses that would otherwise be cat. 6 are instead prohibited practices: social scoring, predictive risk assessments of individuals based solely on profiling or personality traits, and real-time remote biometric identification in publicly accessible spaces (with narrow exceptions)
  3. Law enforcement authorities using cat. 6 systems must register them in the EU AI database
  4. Access to EU AI database entries for cat. 6 systems may be restricted from public view for law enforcement sensitivity reasons
  5. National supervisory authorities for police and justice AI must be designated — this is a Member State obligation but affects how enforcement of cat. 6 AI obligations works in practice
Art. 26 — Deployer obligations

Law enforcement deployers are typically public authorities acting as both deployer and, where systems are built in-house, provider. The FRIA requirement under Art. 27 applies to deployers that are bodies governed by public law, which captures law enforcement authorities deploying cat. 6 systems. Systems near the Art. 5 prohibition line require particular legal review.


Annex III, Category 7

Migration, asylum, and border control management

Official Annex III text

(a) AI systems intended to be used by or on behalf of competent authorities as polygraphs and similar tools in the context of migration, asylum, and border control; (b) AI systems intended to be used by or on behalf of competent authorities to assess a risk, including a security risk, a risk of irregular migration, or a health risk, posed by a natural person who intends to enter or has entered the territory of a Member State; (c) AI systems intended to be used by or on behalf of competent authorities for the verification of the authenticity of travel documents and supporting documentation of natural persons and detect non-authentic documents by checking their security features; (d) AI systems intended to be used to assist competent authorities for the examination of applications for asylum, visa and residence permits and associated complaints with regard to the eligibility of the natural persons applying for a status.

Plain English

Category 7 covers AI used in the context of border control, immigration, and asylum: risk assessment of travellers, document verification, asylum application processing, and polygraph-type tools in migration contexts. This category reflects the particular vulnerability of asylum seekers and migrants — persons who may face serious harm if AI systems produce erroneous results that lead to refusal of protection or wrongful detention. The AI Act recognises that state power in the immigration context is especially coercive.

Clearly in scope — examples

  • AI systems screening travellers at borders to generate risk profiles (terrorist risk, irregular entry risk)
  • Document verification AI checking passport or visa authenticity at border control points
  • AI used to assist with processing asylum applications — reviewing documentation, assessing credibility indicators
  • AI analysing travel patterns or flight data to flag individuals for further screening
  • Lie detection or deceptive behaviour AI used in immigration interviews

Borderline cases — how to assess

  • Private airline check-in systems verifying documents against public registers — the 'by or on behalf of competent authorities' test is the key gateway
  • AI used in visa application processing by consulates of third countries (non-EU) — outside scope unless outputs are used in the EU
  • General border crossing statistical analysis without profiling natural persons — likely outside cat. 7 if no individual-level risk assessment is involved

Key obligations triggered

  1. Full Art. 9–17 obligations — with strong Art. 14 human oversight requirements given the coercive context
  2. Art. 14: human oversight measures must be particularly robust — border officials must be able to meaningfully review AI outputs and not be pressured to rubber-stamp AI risk scores
  3. EU Charter of Fundamental Rights Art. 18 (right to asylum) and Art. 19 (prohibition of refoulement) create a constitutional floor — AI systems must not be deployed in ways that risk systematic violation of these rights
  4. The Law Enforcement Directive (EU) 2016/680 and the specific data protection rules for asylum and immigration data (such as the Eurodac Regulation) apply alongside the AI Act
  5. Registration in the EU AI database is mandatory for cat. 7 systems, though entries for migration, asylum and border control systems are held in a secure non-public section of the database (Art. 49(4))
Art. 26 — Deployer obligations

Border and immigration authorities are deployers and often providers of these AI systems. The FRIA obligation under Art. 27 applies to public authority deployers. Given the fundamental rights stakes, FRIAs for migration AI must be thorough and must specifically address risks to asylum seekers as a particularly vulnerable group. The AI Act's recitals explicitly recognise migration AI as an area of particular concern.


Annex III, Category 8

Administration of justice and democratic processes

Official Annex III text

(a) AI systems intended to be used by or on behalf of competent authorities to assist a judicial authority in researching and interpreting facts and the law and in applying the law to a concrete set of facts; (b) AI systems intended to be used for influencing the outcome of an election or referendum or the voting behaviour of natural persons in the exercise of their vote in elections or referenda. This does not cover AI systems whose output natural persons are not directly exposed to, such as tools used to organise, optimise and structure political campaigns from an administrative perspective.

Plain English

The final Annex III category covers two distinct scenarios: (a) AI assisting judges or courts in legal research, fact-finding, or applying law to facts — not AI replacing judicial decision-making, but AI supporting it; and (b) AI influencing voters or electoral outcomes. Category 8(b) is particularly significant in the context of AI-generated political content and micro-targeted political advertising. The AI Act is careful to exclude back-office political campaign tools from 8(b) — only systems whose output is directly seen by voters are covered.

Clearly in scope — examples

  • AI legal research tools used by courts or judicial staff to research precedent and recommend applicable law
  • AI systems assisting with sentencing by providing comparator cases and statistical ranges
  • AI used in court to assess document authenticity or to summarise large volumes of evidence for judicial review
  • AI-generated personalised political messaging targeted to voters based on psychographic profiling
  • Deepfake generation or AI disinformation tools designed to influence how voters perceive candidates or issues

Borderline cases — how to assess

  • Legal research AI used by lawyers (not courts) — outside cat. 8(a) as it is not used by 'competent authorities' for judicial functions, though lawyers are subject to professional conduct rules
  • Political campaign analytics tools that do not generate voter-facing content — explicitly carved out of 8(b) by the parenthetical
  • Chatbots on political party websites answering questions about policies — arguable; if designed to influence voting behaviour, likely within 8(b)
  • AI-powered fact-checking tools used in election monitoring — likely outside scope, but context-dependent

Key obligations triggered

  1. Full Art. 9–17 obligations apply
  2. For cat. 8(a): courts and judicial institutions using AI must implement particularly strong human oversight (Art. 14) — judicial independence requires that AI is advisory only, not determinative
  3. For cat. 8(b): election influence AI has overlap with the Digital Services Act (DSA) obligations for very large online platforms and the AI Act's transparency obligations for AI-generated synthetic content (Art. 50)
  4. AI-generated political content must be labelled as AI-generated under Art. 50 of the AI Act
  5. Public registration in EU AI database for cat. 8(a) judicial AI systems
Art. 26 — Deployer obligations

Courts deploying AI as deployers bear the Art. 26 obligations. The judicial independence principle means human oversight under Art. 14 must be designed to ensure the judge — not the AI — makes decisions. AI providing legal research assistance must not be designed in ways that anchor judicial reasoning or create defaults that are difficult to override. For election AI (cat. 8(b)), political parties and campaign technology companies deploying voter-targeting AI are subject to the full high-risk obligation stack.

Art. 7 — Delegated acts and the evolving Annex III

Annex III is not static. Art. 7 of the EU AI Act empowers the European Commission to amend Annex III by delegated act, adding or modifying high-risk use cases within the existing areas as technology develops and new risks emerge. The criteria the Commission must apply are set out in Art. 7(2) and include the intended purpose and extent of use of the system, the severity, likelihood, and reversibility of potential harm, the number of people potentially affected, their dependency on the outcome and their vulnerability, and whether existing Union law already mitigates the risk.

The AI Office (established under the AI Act within the Commission) is tasked with ongoing monitoring of AI risks. Organisations operating in sectors adjacent to the current Annex III categories — environmental AI, AI in social care, AI in content moderation with significant real-world effects — should monitor the AI Office's work programme for signals of forthcoming delegated act proposals.

Compliance programmes should build in monitoring of the EU Official Journal for delegated act publications, and should have a process for rapidly assessing new Annex III categories against existing AI system inventories.
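
A hypothetical sketch of that process: tag each system in an internal inventory with candidate Annex III areas, then re-screen the inventory whenever the category list changes. All names below (ANNEX_III_AREAS, AISystemRecord, rescreen) are illustrative, not drawn from the Act or any particular tool:

```python
from dataclasses import dataclass

# Illustrative Annex III area keys; a real programme would track the consolidated
# legal text plus any delegated acts amending it.
ANNEX_III_AREAS = frozenset({
    "biometrics", "critical_infrastructure", "education", "employment",
    "essential_services", "law_enforcement", "migration_border", "justice_democracy",
})

@dataclass
class AISystemRecord:
    name: str
    candidate_areas: set[str]         # tagged during intake / procurement review
    art_6_3_assessment_on_file: bool

def rescreen(inventory: list[AISystemRecord], areas: frozenset[str] = ANNEX_III_AREAS) -> list[str]:
    """Flag systems needing (re-)classification review after an Annex III change."""
    flagged = []
    for system in inventory:
        if system.candidate_areas & areas:
            note = "high-risk classification review required"
            if not system.art_6_3_assessment_on_file:
                note += " (no documented Art. 6(3) assessment)"
            flagged.append(f"{system.name}: {note}")
    return flagged
```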

Art. 6(3) — The narrow low-risk exception for Annex III systems

Art. 6(3) provides that a provider may self-classify an Annex III AI system as NOT high-risk if the system does not pose a significant risk of harm to health, safety, or fundamental rights, including by not materially influencing the outcome of decision-making. Under Art. 6(4), the provider must document this assessment before placing the system on the market, register the system in the EU AI database (a limited registration, distinct from the full high-risk registration), and provide the documentation to national competent authorities on request.

The conditions under Art. 6(3) are that the AI system: performs a narrow procedural task; improves the result of a previously completed human activity; detects decision-making patterns or deviations from prior decision-making patterns, without replacing or influencing the previously completed human assessment absent proper human review; or performs a task preparatory to an assessment relevant to one of the eight Annex III categories (not the assessment itself). A system that performs profiling of natural persons cannot rely on the exception: it is always classified as high-risk.
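
A minimal sketch of what the documented self-assessment record might capture, assuming a hypothetical internal helper along these lines (the field names and JSON output are illustrative, not a regulatory template):

```python
from dataclasses import dataclass, asdict, field
from datetime import date
import json

@dataclass
class Art63Record:
    system_name: str
    annex_iii_category: str
    condition_relied_on: str   # e.g. "narrow procedural task"
    reasoning: str             # why the condition is met on the facts of this system
    performs_profiling: bool   # if True, the Art. 6(3) exception is unavailable
    assessed_by: str
    assessment_date: str = field(default_factory=lambda: date.today().isoformat())

def export_record(record: Art63Record) -> str:
    """Serialise the assessment for retention and for provision to authorities on request."""
    if record.performs_profiling:
        raise ValueError("Profiling systems cannot rely on Art. 6(3); classify as high-risk.")
    return json.dumps(asdict(record), indent=2)
```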

Do not use Art. 6(3) as a compliance shortcut. Regulators will scrutinise these self-assessments, and a wrongly applied Art. 6(3) exemption exposes the provider to enforcement for non-compliance with the full high-risk obligation stack.

Determine which Annex III category applies to your AI system

Use the Risk Classifier to work through a structured analysis of your system against the eight categories.

Classify your AI system →