EU AI Act Guide for Healthcare & Medical AI
Healthcare AI operates under a dual regulatory regime: the EU AI Act and either the Medical Device Regulation (MDR) or the In Vitro Diagnostic Regulation (IVDR), depending on the product. This guide explains how the two frameworks interact, when healthcare AI is high-risk, what clinical compliance looks like, and what digital health companies need to do before the enforcement deadlines.
The dual regime: AI Act + MDR/IVDR
Medical device AI is subject to both the EU AI Act and the MDR or IVDR. The EU AI Act does not replace or modify the MDR/IVDR — both apply simultaneously. However, the AI Act has a built-in efficiency mechanism for sectoral legislation: where an AI system is a safety component of a product already regulated under Annex I sectoral legislation (which includes MDR and IVDR), the conformity assessment under that sectoral legislation also covers the AI Act requirements. This means a CE-marked medical device AI system does not undergo a separate AI Act conformity assessment — but the MDR/IVDR assessment must demonstrably address AI Act obligations.
Classification Scenarios — Where Does Your AI System Fall?
The applicable obligations depend on whether your AI system is embedded in a medical device, is standalone software as a medical device (SaMD), provides clinical decision support, or performs administrative functions. Work through these scenarios to determine your position.
AI as a safety component of a medical device
AI Act classification
High-risk (Annex I pathway)
MDR/IVDR position
Regulated under MDR/IVDR
Classification trigger
Where the AI system is a safety component of a product covered by Annex I sectoral legislation (MDR, IVDR), and that product must undergo third-party conformity assessment under that legislation, it is classified as high-risk under Art. 6(1) of the AI Act — regardless of whether it also appears in Annex III.
Examples
AI for detecting tumours in radiology images embedded in a CE-marked imaging device; AI for flagging sepsis risk in a patient monitoring system; AI for interpreting ECG waveforms integrated into a cardiac diagnostic device.
AI as a standalone medical device (Software as a Medical Device — SaMD)
AI Act classification
High-risk (Annex I pathway) — where classified Class IIa or above under MDR/IVDR
MDR/IVDR position
Regulated under MDR (Annex VIII) or IVDR
Classification trigger
Software that meets the MDR definition of a medical device (intended to diagnose, treat, prevent, monitor, or predict disease) and is classified Class IIa, IIb, or III — classes that require Notified Body (third-party) conformity assessment — is a high-risk AI system under Art. 6(1), provided it falls within the AI system definition in Art. 3(1).
Examples
AI diagnostic software for dermatology images (melanoma classification); AI-driven insulin dosing recommendation software; AI for predicting ICU readmission.
Clinical decision support not qualifying as a medical device
AI Act classification
Requires case-by-case analysis — may be Annex III cat. 5 if affecting access to healthcare
MDR/IVDR position
Not regulated under MDR/IVDR
Classification trigger
Clinical decision support (CDS) that provides general healthcare information, or does not directly influence a decision about a specific patient, may fall outside MDR scope. Under the AI Act, it may still be high-risk via Annex III cat. 5(a) — AI used to evaluate eligibility for essential public services, including healthcare — particularly where public health systems use it to allocate resources or prioritise treatment. Emergency patient triage systems are separately listed in cat. 5(d).
Examples
AI triage systems determining ED queue priority; AI for allocating limited surgical slots; AI for recommending whether a GP referral should proceed.
AI in hospital operations and administration
AI Act classification
Likely minimal risk — unless affecting staff employment (Annex III cat. 4) or patient access to services
MDR/IVDR position
Not regulated under MDR/IVDR
Classification trigger
Administrative AI — scheduling, billing, supply chain, general workflow automation — is generally outside the high-risk categories. However, AI used for workforce management of healthcare staff may fall within Annex III cat. 4 (employment and worker management).
Examples
AI appointment scheduling; predictive staffing tools; AI for medical coding and billing optimisation.
MDR / AI Act Interaction — Aspect by Aspect
CE marking
MDR/IVDR
MDR/IVDR requires CE marking via conformity assessment under the relevant MDR/IVDR classification route. Class IIa, IIb, and III devices require a Notified Body.
AI Act
The EU AI Act requires CE marking for high-risk AI systems — but Art. 6(1) systems (Annex I sectoral legislation pathway) do not undergo a separate AI Act conformity assessment. Instead, under Art. 43(3), the AI Act requirements are assessed as part of the MDR/IVDR conformity assessment procedure.
Practical approach
For Annex I pathway medical device AI, the MDR/IVDR conformity assessment is the mechanism. You do not need a separate AI Act CE marking process. The MDR Notified Body will assess AI Act requirements as part of the MDR assessment.
Technical documentation
MDR/IVDR
MDR Annex II/III technical documentation must include software lifecycle documentation, including for AI/ML software (the IMDRF guidance on AI/ML-based SaMD is the recognised reference point).
AI Act
AI Act Annex IV specifies technical documentation requirements for high-risk AI systems — covering design, development, training data, accuracy metrics, risk management, and post-market monitoring.
Practical approach
For medical device AI, the Annex IV technical documentation requirements substantially overlap with MDR Annex II/III. Build one integrated technical file that satisfies both. Map each Annex IV heading to its MDR equivalent.
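One way to keep the integrated technical file auditable is to hold the Annex IV-to-MDR mapping itself as data and check coverage mechanically. A minimal Python sketch — the heading names on both sides are paraphrased, not official regulatory text, and should be verified against the regulations:

```python
# Illustrative mapping of AI Act Annex IV documentation headings to their
# closest MDR Annex II/III counterparts. Headings are paraphrased;
# verify each pairing against the regulations themselves.
ANNEX_IV_TO_MDR = {
    "general description of the AI system":           "MDR Annex II s.1 (device description)",
    "development process and design choices":         "MDR Annex II s.3 (design and manufacturing)",
    "training, validation and testing data":          "MDR Annex II s.6 (verification and validation)",
    "metrics of accuracy, robustness, cybersecurity": "MDR Annex II s.6 + s.4 (GSPR)",
    "risk management system (Art. 9)":                "MDR Annex II s.5 / ISO 14971 file",
    "post-market monitoring plan (Art. 72)":          "MDR Annex III (post-market surveillance)",
}


def unmapped_headings(tech_file_sections: set[str]) -> set[str]:
    """Return Annex IV headings not yet covered by the integrated file."""
    return set(ANNEX_IV_TO_MDR) - tech_file_sections
```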
Post-market surveillance
MDR/IVDR
MDR Arts. 83–86 require a post-market surveillance (PMS) system and plan, with periodic safety update reports (PSURs) for Class IIa and higher devices; Arts. 87–89 add serious incident reporting to national competent authorities.
AI Act
AI Act Art. 72 requires post-market monitoring of high-risk AI systems under a documented monitoring plan; Art. 73 adds serious incident reporting obligations.
Practical approach
Integrate the AI Act Art. 72 monitoring plan into the MDR PMS plan. Serious incidents with patient impact will need to be reported under both frameworks — establish a single triage process that routes reports to both EUDAMED and the AI Act incident notification route.
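A single intake point that evaluates each incident once and fans out to the applicable reporting routes keeps the two frameworks in sync. A minimal sketch — the field names and route strings are illustrative placeholders for your internal procedures, not official channel identifiers:

```python
from dataclasses import dataclass


@dataclass
class Incident:
    description: str
    patient_harm: bool              # death or serious deterioration in health
    device_malfunction: bool        # MDR vigilance trigger
    ai_caused_or_contributed: bool  # AI Act serious-incident trigger


def route_incident(incident: Incident) -> list[str]:
    """Single triage point: return the reporting routes this incident triggers.

    Route strings are placeholders for internal procedure names.
    """
    routes = []
    if incident.patient_harm or incident.device_malfunction:
        routes.append("MDR vigilance report -> EUDAMED / competent authority")
    if incident.ai_caused_or_contributed and incident.patient_harm:
        routes.append("AI Act Art. 73 serious-incident notification")
    return routes
```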
Quality management
MDR/IVDR
MDR Art. 10(9) requires a quality management system (QMS), in practice demonstrated through ISO 13485. The QMS covers design, development, production, and post-market activities.
AI Act
AI Act Art. 17 requires a quality management system for providers of high-risk AI systems, covering strategy, procedures, data management, and post-market monitoring.
Practical approach
An MDR-compliant ISO 13485 QMS substantially satisfies AI Act Art. 17. Extend your existing QMS to explicitly address AI-specific elements: training data governance, model versioning, bias monitoring, and explainability procedures.
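In practice, this often means adding a handful of AI-specific procedures to the existing ISO 13485 procedure index. An illustrative sketch — the SOP identifiers and titles are invented placeholders, not references to any real document set:

```python
# Hypothetical additions to an ISO 13485 procedure index to cover the
# AI-specific elements of AI Act Art. 17. Identifiers are placeholders.
AI_QMS_EXTENSIONS = {
    "SOP-AI-01": "training data governance (sourcing, provenance, labelling)",
    "SOP-AI-02": "model versioning and release management",
    "SOP-AI-03": "bias and subgroup performance monitoring",
    "SOP-AI-04": "explainability and output interpretation procedures",
}
```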
Key AI Act Obligations in Clinical Context
Art. 13 — Instructions for use
Clinical instructions for use
High-risk AI systems must include instructions for use enabling deployers (healthcare providers) to understand the system's purpose, performance characteristics, intended patient population, contraindications, and limitations. In healthcare, this maps to the IFU required under MDR. Instructions must specifically cover: what the AI can and cannot do, performance metrics broken down by relevant patient subgroups, and how to interpret AI outputs alongside clinical judgment.
Source
Regulation (EU) 2024/1689 — EU AI Act, read together with MDR Regulation (EU) 2017/745 and IVDR Regulation (EU) 2017/746. This source applies to each of the four articles discussed in this section.
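The subgroup breakdown required in the instructions for use is ultimately a stratified performance table. A minimal sketch of how such a table might be computed from labelled evaluation records — the record format is an assumption for illustration, not a prescribed schema:

```python
from collections import defaultdict


def metrics_by_subgroup(records):
    """records: iterable of (subgroup, y_true, y_pred) with binary labels.

    Returns per-subgroup sensitivity and specificity -- the kind of
    stratified performance table an Art. 13 IFU should contain.
    """
    counts = defaultdict(lambda: {"tp": 0, "fn": 0, "tn": 0, "fp": 0})
    for subgroup, y_true, y_pred in records:
        c = counts[subgroup]
        if y_true and y_pred:
            c["tp"] += 1
        elif y_true and not y_pred:
            c["fn"] += 1
        elif not y_true and y_pred:
            c["fp"] += 1
        else:
            c["tn"] += 1

    table = {}
    for subgroup, c in counts.items():
        pos, neg = c["tp"] + c["fn"], c["tn"] + c["fp"]
        table[subgroup] = {
            "sensitivity": c["tp"] / pos if pos else None,
            "specificity": c["tn"] / neg if neg else None,
            "n": pos + neg,
        }
    return table
```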
Art. 14 — Human oversight
Clinician oversight — not checkbox compliance
Art. 14 requires that high-risk AI systems be designed and deployed so that natural persons can effectively oversee the system. For clinical AI, this means clinicians must be able to understand the basis for an AI recommendation, identify when the AI is operating outside its validated population, override AI recommendations without system friction, and detect when the system is malfunctioning. Human oversight obligations are incompatible with deployments that create automation bias — where time pressure or interface design effectively removes clinician judgment from the loop.
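What "effective oversight without friction" can look like in code: the system surfaces its rationale and an out-of-validation flag, never auto-applies a recommendation, and logs acceptance and override symmetrically. A hypothetical sketch under those design assumptions; all names are illustrative:

```python
from dataclasses import dataclass
import logging

logger = logging.getLogger("clinical_ai.oversight")


@dataclass
class Recommendation:
    finding: str
    confidence: float
    out_of_validated_population: bool  # patient outside the validated cohort
    rationale: str                     # basis the clinician can inspect


def present(rec: Recommendation) -> None:
    """Surface the information Art. 14 oversight depends on -- never auto-apply."""
    if rec.out_of_validated_population:
        logger.warning("Patient outside validated population; flag prominently.")
    # The recommendation is displayed with its rationale; nothing is ordered
    # or charted until the clinician explicitly accepts or overrides it.


def record_decision(rec: Recommendation, accepted: bool, note: str = "") -> None:
    """Log accept and override symmetrically so overriding carries no extra friction."""
    logger.info("decision=%s confidence=%.2f note=%s",
                "accepted" if accepted else "overridden", rec.confidence, note)
```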
Art. 9 — Risk management
Clinical risk management integration
The AI Act risk management system under Art. 9 must be iterative and continuous across the entire AI lifecycle. For medical device AI, this overlaps substantially with ISO 14971 (medical device risk management) and IEC 62304 (medical device software lifecycle). The most efficient approach is to extend your ISO 14971 risk management file to address AI-specific risks: dataset shift, performance degradation over time, bias in subpopulations, and adversarial robustness.
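One pragmatic way to extend an ISO 14971 risk file is to add AI-specific hazard categories alongside the classical ones. A sketch assuming a simple severity × probability scoring scheme; the category names paraphrase the risks listed above and are not taken from the standard:

```python
from dataclasses import dataclass

# AI-specific hazard categories to track alongside classical ISO 14971
# hazards. Category names are illustrative paraphrases.
AI_HAZARD_CATEGORIES = (
    "dataset shift",            # deployment data drifts from training data
    "performance degradation",  # accuracy decays over time or after updates
    "subpopulation bias",       # worse performance for specific patient groups
    "adversarial robustness",   # susceptibility to manipulated inputs
)


@dataclass
class RiskEntry:
    hazard: str
    category: str     # one of AI_HAZARD_CATEGORIES or a classical hazard
    severity: int     # 1 (negligible) .. 5 (catastrophic)
    probability: int  # 1 (improbable) .. 5 (frequent)
    mitigation: str

    def risk_score(self) -> int:
        return self.severity * self.probability
```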
Art. 10 — Data governance
Training data quality and representativeness
Art. 10 requires that training, validation, and testing data for high-risk AI systems be subject to data governance practices ensuring relevance and representativeness, with data that is free of errors and complete to the best extent possible. In healthcare, this requires attention to: whether training datasets reflect the patient population where the AI will be deployed (demographic representativeness), whether data was collected at institutions with comparable imaging equipment or clinical practices, and how dataset provenance, labelling quality, and annotation methodology are documented. IMDRF guidance on AI/ML-based SaMD elaborates on these requirements in a clinical context.
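Demographic representativeness can be screened mechanically by comparing subgroup shares in the training set against the intended deployment population. A minimal sketch — the 5% tolerance is an illustrative threshold, not a regulatory figure:

```python
def representativeness_gaps(train_counts: dict[str, int],
                            population_shares: dict[str, float],
                            tolerance: float = 0.05) -> dict[str, float]:
    """Flag subgroups whose training-set share deviates from the deployment
    population share by more than `tolerance` (an illustrative threshold).
    """
    total = sum(train_counts.values())
    gaps = {}
    for subgroup, expected in population_shares.items():
        observed = train_counts.get(subgroup, 0) / total if total else 0.0
        if abs(observed - expected) > tolerance:
            gaps[subgroup] = observed - expected
    return gaps


# Example: a training set that over-samples younger patients.
# representativeness_gaps({"<50": 600, "50-75": 350, ">75": 50},
#                         {"<50": 0.45, "50-75": 0.35, ">75": 0.20})
# -> approximately {"<50": +0.15, ">75": -0.15}
```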
Patient Rights — GDPR and AI Act Combined
Right to explanation of AI-influenced decisions
GDPR Art. 22 + AI Act Art. 13/14
Where a healthcare decision affecting a patient is based solely on automated processing and produces legal or similarly significant effects, GDPR Art. 22 gives the patient the right to obtain human intervention, express their view, and contest the decision. In practice, healthcare providers should ensure that clinical AI outputs are always reviewed by a clinician making the final decision, and that patients are informed when AI tools were used in their care.
Right to know AI was used
AI Act Art. 26(11) + GDPR Art. 13/14
Deployers of Annex III high-risk AI systems (hospitals, clinics) must inform natural persons that they are subject to the use of an AI system that makes, or assists in making, decisions about them. For clinical AI, this information obligation can be integrated into patient admission documentation or consent forms. GDPR privacy notices must also be updated to describe the use of AI and any automated processing of health data.
Special category data protections
GDPR Art. 9
Health data is special category data under GDPR Art. 9. Processing health data for AI training or inference requires an Art. 9(2) exception — typically Art. 9(2)(h) (healthcare purposes) or Art. 9(2)(j) (scientific research). The lawful basis must be in place before the AI system processes patient data and must be documented in your DPIA and data protection notices.
DPIA requirement
GDPR Art. 35
AI systems that process health data at scale will almost invariably require a Data Protection Impact Assessment under GDPR Art. 35. The DPIA should be integrated with the AI Act Art. 9 risk management documentation. Many EU supervisory authorities have published lists of processing activities that always require a DPIA — healthcare AI consistently appears on these lists.
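A DPIA screening step can be encoded as a checklist so the decision is documented rather than ad hoc. A sketch with paraphrased indicators drawn from GDPR Art. 35(3) and supervisory-authority guidance; the two-indicator threshold is a common rule of thumb, not a legal test:

```python
# Paraphrased DPIA trigger indicators (GDPR Art. 35(3) and supervisory
# authority lists). Health-data AI typically hits several at once.
DPIA_INDICATORS = {
    "special_category_data": "processes health data (GDPR Art. 9)",
    "large_scale": "processing at scale across many patients",
    "systematic_evaluation": "scoring, prediction, or profiling of patients",
    "new_technology": "novel AI/ML techniques applied to care decisions",
    "vulnerable_subjects": "patients are in a position of dependence",
}


def dpia_screen(answers: dict[str, bool]) -> bool:
    """Illustrative screen: recommend a DPIA if two or more indicators hold."""
    return sum(answers.get(key, False) for key in DPIA_INDICATORS) >= 2
```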
Clinical Decision Support — The Borderline Question
The EU AI Act does not contain a definition of “clinical decision support.” Whether a CDS system is high-risk depends on: (1) whether it qualifies as a medical device (and therefore triggers the Annex I pathway), and (2) if not, whether it effectively controls access to healthcare services for natural persons (Annex III cat. 5(a), or cat. 5(d) for emergency triage).
MDCG 2019-11, guidance from the Medical Device Coordination Group, is the authoritative source on software qualification as a medical device. The key question is whether the software has a medical purpose and whether its output is used to make clinical decisions affecting specific patients. General healthcare information tools or population-level analytics are less likely to qualify.
For borderline CDS systems, document your classification reasoning. Regulators and Notified Bodies will scrutinise attempts to classify CDS systems as non-medical devices to avoid regulatory obligations.
Practical Compliance Steps for Digital Health Companies
Step 1 — Classify your AI system
1. Determine whether your AI software meets the MDR/IVDR definition of a medical device — apply the MDCG 2019-11 guidance on qualification and classification of software.
2. If it is a medical device, apply MDR/IVDR classification rules to determine Class I, IIa, IIb, or III — Class IIa and above triggers AI Act high-risk status via Art. 6(1).
3. If it is not a medical device, assess whether it falls within Annex III — particularly cat. 5(a) (access to healthcare services) or cat. 4 (employment, for AI managing clinical staff).
4. Document your classification decision with reasoning — regulators will scrutinise borderline calls. The sketch after this list encodes the decision sequence.
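A minimal Python sketch of this decision sequence — the field names and the treatment of edge cases are illustrative, and the output is a starting point for documented legal analysis, not a substitute for it:

```python
from dataclasses import dataclass
from enum import Enum, auto


class AIActStatus(Enum):
    HIGH_RISK_ANNEX_I = auto()    # Art. 6(1): MDR/IVDR pathway
    HIGH_RISK_ANNEX_III = auto()  # Art. 6(2): Annex III pathway
    NOT_HIGH_RISK = auto()


@dataclass
class AISystem:
    is_medical_device: bool           # qualified per MDCG 2019-11
    mdr_class: str | None             # "I", "IIa", "IIb", "III", or None
    controls_healthcare_access: bool  # Annex III cat. 5(a)/(d) indicators
    manages_clinical_staff: bool      # Annex III cat. 4 indicators


def classify(system: AISystem) -> AIActStatus:
    """Mirror the Step 1 decision sequence; illustrative, not legal advice."""
    if system.is_medical_device:
        # Class IIa and above requires a Notified Body, i.e. the
        # third-party conformity assessment that triggers Art. 6(1).
        if system.mdr_class in {"IIa", "IIb", "III"}:
            return AIActStatus.HIGH_RISK_ANNEX_I
        return AIActStatus.NOT_HIGH_RISK  # Class I: document the reasoning
    if system.controls_healthcare_access or system.manages_clinical_staff:
        return AIActStatus.HIGH_RISK_ANNEX_III
    return AIActStatus.NOT_HIGH_RISK
```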
Step 2 — Map obligations and identify gaps
1. For Annex I pathway (medical device) AI: map AI Act Annex IV technical documentation requirements against your existing MDR technical file.
2. For Annex III pathway AI: identify all Art. 9–17 obligations and assess your current state against each.
3. Review clinical instructions for use (IFU) against AI Act Art. 13 requirements — particularly performance metrics by subgroup and intended patient population.
4. Assess human oversight procedures: is clinical override genuinely possible and not discouraged by UI design or time constraints?
5. Conduct a GDPR Art. 35 DPIA assessment — is a DPIA required? Is one already in place, and does it cover the AI system?
Step 3 — Integrate and remediate
1. Extend your ISO 13485 QMS to explicitly address the AI Act Art. 17 quality management system requirements.
2. Extend your ISO 14971 risk management file to address AI-specific risks and satisfy Art. 9.
3. Integrate AI Act Art. 72 post-market monitoring into the MDR post-market surveillance plan.
4. Update patient information documentation and privacy notices to disclose AI system usage.
5. Establish data governance procedures covering training data provenance, representativeness assessment, and periodic revalidation of model performance (a minimal revalidation check is sketched after this list).
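For item 5, a periodic revalidation check can be as simple as comparing recent discrimination performance against the validated baseline. A minimal sketch — the metric and threshold are illustrative and should come from your clinical risk analysis:

```python
def needs_revalidation(baseline_auc: float,
                       recent_auc: float,
                       max_drop: float = 0.02) -> bool:
    """Flag the model for re-review when performance on recent cases falls
    more than `max_drop` below the validated baseline.

    `max_drop` is an illustrative threshold. Any triggered review should
    feed back into the Art. 72 post-market monitoring plan.
    """
    return (baseline_auc - recent_auc) > max_drop
```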
Step 4 — Conformity assessment and registration
1. For Class IIa+ medical device AI: the Notified Body conformity assessment under MDR covers the AI Act conformity assessment pathway — coordinate with your Notified Body on AI Act-specific evidence.
2. For Annex III standalone AI systems: conduct an internal conformity assessment per Art. 43(2) and draw up an EU Declaration of Conformity.
3. Register in the EU database for high-risk AI systems (mandatory for all Annex III high-risk systems; the obligations apply from 2 August 2026).
4. Establish a serious incident reporting procedure for clinical AI — integrate it with EUDAMED and the AI Act incident notification obligations.