Annex IV — Technical Documentation
Every high-risk AI system must have technical documentation meeting the requirements of Annex IV before being placed on the EU market. This guide explains what each of the 8 required sections must contain, what level of detail is expected, and the most common mistakes that will cause a documentation review to fail.
Legal basis: Art. 11 + Annex IV
Retention period: 10 years (Art. 18)
Who prepares it: Provider
Required sections: 8
General Description of the AI System
Annex IV, §1
Purpose
Provide a complete description of the system so a regulator or assessor can understand what the system does and who it is intended for without needing to read technical code or model weights.
Must include
- The intended purpose — what the system is designed to do, in which sector, and for which specific use case
- The level of accuracy, robustness, and cybersecurity the system is designed to achieve, with reference to the metrics used
- The Annex III category or Annex I product classification and why the system falls within it
- The natural persons or groups of persons the system is intended to be used by (deployers) and those it affects (subjects)
- How the system interacts with hardware or other software it is combined with
- Version numbering and how significant changes are recorded and managed
Writing tips
- ✓ Do not confuse this with marketing material. Be precise about what the system does and does not do.
- ✓ Include the operational design domain — the specific conditions (environments, user types, data inputs) in which the system is intended to operate.
- ✓ Regulators will compare this section against the instructions for use (Annex IV §5). The two must be consistent.
Common mistakes
Over-broad descriptions that cover all possible uses, vague claims about accuracy ('high accuracy'), and omission of the specific Annex III category.
Design Specifications
Annex IV, §2
Purpose
Document the design choices that shape how the system works — including what type of AI or ML approach was used and why, and the specific design decisions that affect safety and performance.
Must include
- The design specifications, including the general logic of the system and of the algorithms on which it is based, described in plain and accessible language
- The key design choices made, including their rationale (e.g., choice of model architecture, training approach, loss functions)
- Any trade-offs made during design that affect performance, robustness, or limitations
- The computational resources required to run the system
Writing tips
- ✓ You do not need to disclose model weights or trade secrets in full — but the logic and approach must be understandable to a technically qualified assessor.
- ✓ Where you made choices that limit capabilities or impose constraints (e.g., limiting the system to a subset of inputs), document these and why.
- ✓ Regulators focus on understanding whether design choices adequately address the risks associated with the system's intended use.
Common mistakes
High-level descriptions of model type without any design rationale; omission of constraints; treating this section as a sales pitch for the algorithm rather than a technical audit document.
Training, Validation, and Testing Methodologies
Annex IV, §3
Purpose
Describe how the system was built — what data was used, how it was processed and assessed, and how the system's performance was validated and tested against the requirements.
Must include
- Description of training, validation, and test datasets: source, characteristics, collection methodology
- Data governance and data management practices applied during training (Art. 10 compliance evidence)
- Information about the representativeness of datasets in relation to the intended operational domain and affected populations
- The metrics used to measure accuracy, robustness, and non-discrimination
- How potential biases were identified and mitigated
- Test procedures and results — including accuracy rates by relevant subgroup (age, gender, ethnicity where applicable)
- Whether any known limitations or failure modes were identified in testing
Writing tips
- ✓ This section is scrutinised most heavily for high-risk AI systems in sensitive sectors (credit, HR, law enforcement). Include disaggregated performance metrics for protected characteristics.
- ✓ Document data sources fully — including third-party datasets, their licences, and any pre-processing applied.
- ✓ Bias analysis should be specific: what potential biases were looked for, what was found, and how it was addressed. Generic statements about 'fairness' are insufficient.
- ✓ Where performance varies significantly between subgroups, this must be disclosed — and the system's instructions for use (§5) must reflect these limitations.
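As an illustration of the disaggregated reporting described above, performance can be computed and recorded per subgroup rather than only in aggregate. This is a minimal sketch: the subgroup labels, record format, and evaluation data are hypothetical, and a real evaluation would use your own test set and metric definitions.

```python
from collections import defaultdict

def subgroup_accuracy(records):
    """Compute accuracy per subgroup from (group, y_true, y_pred) records.

    Returns {group: (accuracy, n)} so that small-sample groups are
    visible alongside their accuracy figure.
    """
    hits = defaultdict(int)
    totals = defaultdict(int)
    for group, y_true, y_pred in records:
        totals[group] += 1
        hits[group] += int(y_true == y_pred)
    return {g: (hits[g] / totals[g], totals[g]) for g in totals}

# Hypothetical evaluation records: (age band, ground truth, model output)
records = [
    ("18-25", 1, 1), ("18-25", 0, 1), ("18-25", 1, 1), ("18-25", 0, 0),
    ("65+",   1, 0), ("65+",   0, 0), ("65+",   1, 0), ("65+",   1, 1),
]
for group, (acc, n) in sorted(subgroup_accuracy(records).items()):
    print(f"{group}: accuracy={acc:.2f} (n={n})")
```

A gap such as the one this toy data shows (0.75 vs. 0.50) is exactly what must be disclosed in this section and carried through into the instructions for use.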
Common mistakes
Aggregate accuracy metrics without subgroup breakdowns; vague references to 'diverse datasets' without specifics; omitting known failure modes identified in testing.
Risk Management System
Annex IV, §4
Purpose
Evidence that you have implemented the Art. 9 risk management system — identifying risks, evaluating them, adopting mitigations, and establishing ongoing monitoring.
Must include
- Description of the risk management system established and maintained throughout the lifecycle
- The risk assessment methodology used (quantitative, qualitative, or hybrid)
- Identification of reasonably foreseeable risks to health, safety, and fundamental rights, both under the intended use and under reasonably foreseeable misuse or malfunction
- The risk mitigation measures adopted for each identified risk
- The residual risks remaining after mitigation, and why they are acceptable
- How the risk management system feeds into post-market monitoring (Art. 72)
Writing tips
- ✓ ISO/IEC 23894 (AI Risk Management) and ISO 31000 provide useful frameworks that align with Art. 9 requirements.
- ✓ Go beyond technical failure modes — Art. 9 explicitly requires analysis of risks arising from foreseeable misuse, and from the interaction of the AI system with other products.
- ✓ Fundamental rights impacts are an explicit concern. Consider which fundamental rights are affected by the system's decisions and document how those risks are mitigated.
- ✓ The risk management system is iterative — document how it will be updated as post-market data is received.
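One way to keep the per-risk records this section requires consistent is a machine-readable risk register. The sketch below assumes a simple likelihood × severity scoring methodology; every field name and the scoring scale are illustrative assumptions, not a prescribed format.

```python
from dataclasses import dataclass, field

@dataclass
class RiskEntry:
    # Hypothetical fields; adapt to your own Art. 9 methodology.
    risk_id: str
    description: str
    affected_rights: list          # e.g. ["non-discrimination"]
    likelihood: int                # 1 (rare) .. 5 (almost certain)
    severity: int                  # 1 (negligible) .. 5 (critical)
    mitigations: list = field(default_factory=list)
    residual_likelihood: int = 0
    residual_severity: int = 0

    def score(self):               # pre-mitigation risk score
        return self.likelihood * self.severity

    def residual_score(self):      # post-mitigation risk score
        return self.residual_likelihood * self.residual_severity

r = RiskEntry(
    risk_id="R-001",
    description="Lower accuracy for applicants over 65",
    affected_rights=["non-discrimination"],
    likelihood=4, severity=4,
    mitigations=["retrain with age-balanced data",
                 "human review of all rejections"],
    residual_likelihood=2, residual_severity=3,
)
print(r.score(), "->", r.residual_score())  # 16 -> 6
```

Recording both the original and residual scores per risk directly supports the "residual risks remaining after mitigation, and why they are acceptable" requirement.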
Common mistakes
Treating this as a one-time exercise rather than a living system; limiting risk analysis to technical/safety risks without addressing fundamental rights; no connection to post-market monitoring.
Instructions for Use
Annex IV, §5
Purpose
The document given to deployers so they can understand what the system does, use it correctly, and implement appropriate oversight. This is the primary deliverable from provider to deployer under Art. 13.
Must include
- System identity: name, type, version, intended purpose, and provider contact details
- The characteristics, capabilities, and limitations of the system — in particular regarding performance
- Changes to the system resulting from maintenance or relearning, and their expected effects on system behaviour
- Human oversight measures: who can perform oversight, what competence is required, and how oversight is to be implemented
- Technical measures to enable logging and monitoring by the deployer (Art. 26(6))
- Any known circumstances in which the system may fail or produce incorrect outputs
- The categories of natural persons on whom the system is intended to be used
- Where applicable: the EU AI database registration number
Writing tips
- ✓ The instructions must be sufficient for a deployer to implement genuine human oversight — not just a checkbox. Describe specifically who should review outputs, at what stage, and what they should check.
- ✓ Performance metrics must be broken down by relevant subgroups — the instructions must inform deployers about subgroup-level performance, not just aggregate accuracy.
- ✓ Regulators and data protection authorities will review these instructions against the actual outcomes of AI-assisted decisions. Ensure they are accurate and not misleadingly optimistic.
- ✓ Where the system can learn or be updated in operation, the instructions must describe how updates affect system behaviour.
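The "technical measures to enable logging" point above can be made concrete by specifying the log record the system emits per decision. This is only one possible shape, offered as an assumption for illustration; the field set, the hashing choice, and the example values are not mandated by the Act.

```python
import datetime
import hashlib
import json

def log_record(model_version, inputs, output, confidence):
    """Build one JSON log entry for an AI-assisted decision.

    Inputs are hashed rather than stored verbatim, so the record can
    be retained for the required period without duplicating personal
    data; the hash still lets a specific decision be traced.
    """
    return json.dumps({
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "model_version": model_version,
        "input_hash": hashlib.sha256(
            json.dumps(inputs, sort_keys=True).encode()
        ).hexdigest(),
        "output": output,
        "confidence": confidence,
    })

# Hypothetical decision from a credit-scoring deployment
print(log_record("2.3.1",
                 {"applicant_id": "A-17", "income": 42000},
                 output="refer_to_human",
                 confidence=0.58))
```

Documenting the exact record format in the instructions for use tells deployers what they will receive and what they are expected to retain.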
Common mistakes
Generic boilerplate human oversight descriptions; failure to specify the competence level required of oversight personnel; omission of known failure modes and edge cases.
EU Declaration of Conformity
Annex IV, §6
Purpose
Formal legal document by which the provider declares that the high-risk AI system meets all applicable EU AI Act obligations. Required for CE marking.
Must include
- Provider name and address
- AI system name, version, and description
- Statement that the system complies with Regulation (EU) 2024/1689 and, where relevant, with other applicable EU legislation
- Reference to harmonised standards applied, or common specifications followed
- Where applicable: notified body name, identification number, and certificate number
- Place and date of issue
- Name and signature of authorised signatory
Writing tips
- ✓The Declaration of Conformity must be drawn up before placing the system on the market or putting it into service.
- ✓Keep the Declaration up to date — if the system changes significantly, a new or updated Declaration may be required.
- ✓Retain the Declaration for 10 years after placing on the market (Art. 18).
- ✓If your system is a safety component of a product already covered by Annex I legislation, the sectoral Declaration of Conformity satisfies this requirement — coordinate with your sectoral conformity assessment body.
Common mistakes
Using a generic template without adapting to the specific system; failing to update the Declaration when the system is significantly modified; omitting applicable harmonised standards.
System Used as a Safety Component — Conformity Assessment (where applicable)
Annex IV, §7
Purpose
For AI systems regulated under Annex I sectoral legislation (e.g., medical devices under MDR, road vehicles under Regulation 2019/2144), provide information about the sectoral conformity assessment that covers the AI Act requirements.
Must include
- Reference to the applicable Annex I sectoral legislation under which the system is regulated
- Identification of the conformity assessment body (notified body, type-approval authority) that conducted or is conducting the assessment
- Certificate number and validity period (where already certified)
- Description of how the sectoral conformity assessment addresses AI Act Annex IV requirements
Writing tips
- ✓This section only applies to Annex I pathway systems. Most Annex III systems do not use this pathway.
- ✓Work closely with your sectoral notified body to ensure they have addressed AI Act-specific obligations within their assessment.
- ✓EASA (aviation), ERA (rail), and type-approval authorities (road vehicles) are developing specific guidance on how AI Act requirements are incorporated into sectoral assessments.
Common mistakes
Leaving this section blank for Annex I pathway systems; assuming the sectoral assessment automatically covers all AI Act obligations without verifying with the notified body.
Post-Market Monitoring System
Annex IV, §8
Purpose
Document the system you have established to collect and analyse data from deployed AI systems in operation — enabling detection of risks not identified in pre-deployment testing.
Must include
- Description of the post-market monitoring system established per Art. 72
- The data to be collected from deployers and how it will be collected
- Performance metrics to be monitored and the thresholds that trigger review or corrective action
- Frequency of review and the process for analysing monitoring data
- How serious incidents will be detected, assessed, and reported to market surveillance authorities
- The feedback loop between post-market monitoring findings and risk management system updates
Writing tips
- ✓Art. 72 requires a post-market monitoring plan proportionate to the nature of the AI technologies and the risks of the system. High-risk Annex III systems require more detailed monitoring than lower-risk systems.
- ✓Design your monitoring to detect performance degradation, distributional shift (training-deployment mismatch), and discriminatory outcomes over time — not just system errors.
- ✓Deployers are required to share information with you under Art. 26 — design your instructions for use (§5) so that deployers know what to report and how.
- ✓Document incident reporting timelines: serious incidents must be reported to market surveillance authorities within defined timeframes (Art. 73).
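The "thresholds that trigger review" requirement implies an explicit, documented test. As one common option (an assumption, not a statutory requirement), distributional shift between training and production inputs can be tracked with the Population Stability Index; the bin proportions below are invented for illustration.

```python
import math

def psi(expected, observed):
    """Population Stability Index between two binned distributions
    (lists of proportions each summing to 1). A PSI above 0.2 is a
    widely used rule of thumb for significant distributional shift.
    """
    eps = 1e-6  # floor empty bins to avoid log(0)
    total = 0.0
    for e, o in zip(expected, observed):
        e, o = max(e, eps), max(o, eps)
        total += (o - e) * math.log(o / e)
    return total

# Hypothetical score distributions: training set vs. last month in production
training = [0.10, 0.20, 0.40, 0.20, 0.10]
deployed = [0.05, 0.10, 0.30, 0.30, 0.25]

shift = psi(training, deployed)
if shift > 0.2:  # the documented review threshold
    print(f"PSI={shift:.3f}: trigger review per monitoring plan")
```

Whatever metric is chosen, the documentation should state the metric, the threshold, and the corrective-action process it triggers, so the feedback loop into the Art. 9 risk management system is auditable.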
Common mistakes
Treating post-market monitoring as an afterthought; generic monitoring descriptions without specific metrics or thresholds; no connection between monitoring data and risk management system updates.
Proportionality and SME provisions
Art. 11(3) allows the Commission to establish forms and templates for technical documentation via implementing acts. Watch for the AI Office's published templates, which will provide the official format. Until then, document each Annex IV section in a way that clearly maps to the statutory requirements. SMEs and startups may take advantage of the simplified requirements under Art. 62 — but these still require all 8 sections to be addressed, just at a proportionate level of detail relative to the scale of the risk.