
General Purpose AI (GPAI) Models

Chapter V of the EU AI Act (Articles 51–56) creates a separate, dedicated compliance track for General Purpose AI model providers: the companies that train and release foundation models (large language models, image generators, multimodal models, and so on). The GPAI compliance deadline was 2 August 2025. If you provide a GPAI model and have not yet complied, you are in violation.

GPAI compliance deadline: 2 August 2025 — now passed

If you provide a General Purpose AI model and have not yet implemented the Chapter V obligations, you are in violation of the EU AI Act. Seek qualified legal counsel immediately and prioritise compliance.

Articles: Art. 51–56
Compliance deadline: 2 August 2025
Systemic risk threshold: 10²⁵ FLOPs of training compute
Max penalty: €15M or 3% of total worldwide annual turnover, whichever is higher

What is a GPAI model?

Under Article 3(63), a GPAI model is an AI model — including where trained with a large amount of data using self-supervision at scale — that displays significant generality and is capable of competently performing a wide range of distinct tasks. In practice, this means:

Examples of GPAI models

  • Large language models (GPT, Claude, Gemini, Llama, Mistral)
  • Image generation models (Stable Diffusion, DALL-E, Midjourney)
  • Multimodal foundation models
  • Code generation models
  • Audio, speech, and video generation models at scale

Important distinctions

  • A GPAI model is not the same as a GPAI system (a GPAI system is an AI system built on top of a GPAI model)
  • You can be both a GPAI model provider AND an AI system provider
  • Fine-tuned models may also qualify as GPAI models
  • Open-weight models are included (with nuances for open source)

Art. 53

Obligations for All GPAI Model Providers

Plain English

Article 53 sets the baseline obligations for ALL GPAI model providers, regardless of whether the model has systemic risk. You must: (1) prepare and maintain technical documentation (model architecture, training data sources, training compute, evaluation benchmarks, capability assessments); (2) make documentation available to downstream AI system providers who integrate your model; (3) implement a copyright compliance policy, which means respecting text-and-data-mining opt-outs under the DSM Directive; (4) publish a training data summary. These apply to both proprietary and open-weight models, although models released under a free and open-source licence with publicly available weights benefit from reduced documentation obligations unless they pose systemic risk.


Key obligations

  1. Prepare technical documentation per Annex XI before making the model available (see the structured sketch after this list)
  2. Include: model architecture, training data sources, training compute (FLOPs), evaluation results, known limitations
  3. Make documentation available to downstream AI system providers upon request
  4. Implement a copyright compliance policy: respect rights reservations under DSM Directive Art. 4(3)
  5. Publish a publicly available summary of training data content
  6. Register the model with the EU AI Office
  7. Keep documentation updated as the model evolves
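
One way to keep the Annex XI items auditable is a structured internal record. The sketch below is illustrative only: Annex XI is prose, not a schema, and every field name here is our own invention.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AnnexXIDocumentation:
    """Illustrative internal record for the Annex XI items listed above.

    Annex XI does not define a schema; these field names are assumptions
    chosen to mirror the documentation items in the list above.
    """
    model_name: str
    architecture: str                      # e.g. "decoder-only transformer, 70B params"
    training_data_sources: list[str]       # high-level provenance of training data
    training_compute_flops: float          # total training compute in FLOPs
    evaluation_results: dict[str, float]   # benchmark name -> score
    known_limitations: list[str]
    last_updated: date = field(default_factory=date.today)
```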

Source

Official text from EUR-Lex — Regulation (EU) 2024/1689 (EU AI Act). This text is in the public domain.

Art. 51

GPAI Models with Systemic Risk — Classification

Plain English

A GPAI model is classified as having 'systemic risk' if it meets the 10²⁵ FLOPs training compute threshold — which today corresponds roughly to frontier models like GPT-4, Gemini Ultra, and Claude 3 Opus. The AI Office can also designate smaller models as systemic if they demonstrate capabilities equivalent to the threshold. The 10²⁵ FLOPs threshold will be updated over time as the Commission develops guidance. If you are unsure whether your model is systemic, you should assess your training compute and monitor AI Office guidance.


Key obligations

  1. Calculate your model's total training compute in FLOPs (a back-of-the-envelope sketch follows this list)
  2. If training compute exceeds 10²⁵ FLOPs, you are presumed to have systemic risk
  3. Notify the AI Office if you believe your model reaches the systemic risk threshold
  4. Monitor AI Office designation decisions: you may be designated systemic even below the threshold
  5. Implement all Art. 55 obligations if classified as systemic
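
The Act does not prescribe a FLOPs accounting method. A widely used heuristic for dense transformer training is FLOPs ≈ 6 × parameters × training tokens; the sketch below assumes that heuristic, so treat its output as a rough estimate rather than a compliance determination.

```python
SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25  # Art. 51 presumption threshold

def estimate_training_flops(n_params: float, n_tokens: float) -> float:
    """Rough training-compute estimate for a dense transformer.

    Uses the common 6*N*D heuristic (forward plus backward pass). The AI Act
    does not mandate this method; it is an engineering rule of thumb.
    """
    return 6.0 * n_params * n_tokens

# Hypothetical example: a 70B-parameter model trained on 15T tokens
flops = estimate_training_flops(70e9, 15e12)  # ~6.3e24 FLOPs
print(f"{flops:.2e} FLOPs -> presumed systemic: {flops >= SYSTEMIC_RISK_THRESHOLD_FLOPS}")
```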



Art. 55

Obligations for GPAI Models with Systemic Risk

Plain English

On top of the baseline Art. 53 obligations, systemic risk GPAI providers must: (1) conduct adversarial testing / red-teaming using state-of-the-art methods to identify and mitigate systemic risks; (2) assess systemic risks at EU level that could arise from deployment; (3) report serious incidents to the AI Office without undue delay — including information about corrective measures; (4) implement cybersecurity measures adequate for the scale and risk of the model. These are significantly heavier obligations designed for the most capable frontier AI models.


Key obligations

  1. Conduct adversarial testing (red-teaming) using state-of-the-art methodologies before model release
  2. Document all adversarial testing protocols, results, and identified risks
  3. Assess and mitigate systemic risks at Union level stemming from the model
  4. Report serious incidents to the AI Office without undue delay (see the record sketch after this list)
  5. Include corrective measures in incident reports
  6. Implement cybersecurity measures commensurate with the model's risk profile
  7. Report energy consumption and computational resources used
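
The AI Office has not published a binding incident-report schema, so any structure is provisional. The record below is a minimal sketch whose fields simply mirror the obligations above; all names are our own.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class SeriousIncidentRecord:
    """Illustrative internal record backing an Art. 55 serious-incident report.

    Fields are assumptions mirroring the list above (nature of the incident,
    corrective measures); they are not an official AI Office format.
    """
    incident_id: str
    detected_at: datetime
    description: str                  # nature of the incident
    affected_deployments: list[str]   # downstream systems or users involved
    corrective_measures: list[str]    # taken or planned (obligation 5 above)
    reported_to_ai_office: bool = False
    created_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
```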


Art. 56

GPAI Code of Practice — Walkthrough

Article 56 mandates the AI Office to facilitate the development of a Code of Practice for GPAI providers. Adherence creates a presumption of conformity with the Chapter V obligations, making it the primary compliance route for most GPAI model providers. Non-signatories must demonstrate compliance by alternative means, which creates significant legal uncertainty.

Code of Practice Timeline

Sep–Nov 2024: Working groups established; first draft published for consultation
Dec 2024 – Mar 2025: Second draft published; four commitment areas finalised
May 2025: Third draft with refined obligations and measurement criteria
2 Aug 2025: GPAI compliance deadline; Chapter V obligations become enforceable
Aug 2025 onwards: AI Office monitors compliance; CoP updated as technology evolves

The Four Commitment Areas

1. Transparency & Technical Documentation (Art. 53(1)(a)–(b))
  • Publish a model card / technical documentation per Annex XI before model release
  • Include: model architecture, intended use, training data description, training compute (FLOPs), known limitations, evaluation benchmarks
  • Make downstream documentation available to AI system providers integrating the model
  • Update documentation whenever the model is significantly updated or fine-tuned
  • Register the model with the EU AI Office via the publicly accessible database
2. Copyright & Training Data (Art. 53(1)(c)–(d))
  • Implement a state-of-the-art copyright compliance policy before training
  • Identify and respect rights reservations under DSM Directive Art. 4(3), i.e. text-and-data-mining opt-outs (see the sketch after this list)
  • Maintain a register of data sources used for training (or a sufficiently detailed description)
  • Publish a public training data summary — including the types of content and sources
  • Document any licensed datasets and the terms under which they were used
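
Machine-readable rights-reservation signals are still maturing: robots.txt is the most common in practice, and the W3C TDM Reservation Protocol (TDMRep) proposes a tdm-reservation HTTP header. The sketch below checks both for a single URL, with a hypothetical crawler name; it is a starting point, not a complete Art. 4(3) policy.

```python
import urllib.request
import urllib.robotparser
from urllib.parse import urlparse

def tdm_opt_out_signals(url: str, crawler_name: str = "ExampleTrainingBot") -> dict:
    """Check two machine-readable opt-out signals for one URL.

    robots.txt and the TDMRep header are only two of the ways rights may be
    reserved under DSM Art. 4(3); metadata, licences, and terms of service
    also need handling in a real compliance policy.
    """
    parsed = urlparse(url)

    # Signal 1: robots.txt permission for our (hypothetical) crawler name
    rp = urllib.robotparser.RobotFileParser()
    rp.set_url(f"{parsed.scheme}://{parsed.netloc}/robots.txt")
    rp.read()
    robots_allows = rp.can_fetch(crawler_name, url)

    # Signal 2: TDMRep header; "tdm-reservation: 1" means TDM rights reserved
    head = urllib.request.Request(url, method="HEAD")
    with urllib.request.urlopen(head) as resp:
        tdm_reserved = resp.headers.get("tdm-reservation") == "1"

    return {"robots_allows_crawl": robots_allows, "tdm_rights_reserved": tdm_reserved}
```
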
3. Safety Evaluation & Risk Assessment for All GPAI (Art. 53, 55(1)(a)–(b))
  • Conduct capability evaluations prior to model release (a harness skeleton follows this list)
  • Test for: harmful content generation, CBRN information, cyberattack facilitation, deception, manipulation
  • Document risk assessment methodology and results
  • Implement mitigations for identified risks before deployment
  • For systemic risk models: conduct adversarial testing (red-teaming) using standardised protocols
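
Neither the Act nor the Code fixes an evaluation harness. The skeleton below shows one way to organise pre-release tests against the risk categories named above; generate and is_unsafe are placeholders for a real model endpoint and a real safety classifier.

```python
from typing import Callable

# Risk categories named in the safety commitment area above
RISK_CATEGORIES = [
    "harmful_content", "cbrn_information", "cyberattack_facilitation",
    "deception", "manipulation",
]

def run_capability_eval(
    generate: Callable[[str], str],          # model under test: prompt -> output
    prompts: dict[str, list[str]],           # category -> adversarial prompts
    is_unsafe: Callable[[str, str], bool],   # (category, output) -> flagged?
) -> dict[str, float]:
    """Return the flagged-output rate per risk category.

    A real evaluation uses curated benchmarks and human review; this
    skeleton only shows the bookkeeping around the listed categories.
    """
    results: dict[str, float] = {}
    for category in RISK_CATEGORIES:
        outputs = [generate(p) for p in prompts.get(category, [])]
        flagged = sum(is_unsafe(category, o) for o in outputs)
        results[category] = flagged / len(outputs) if outputs else 0.0
    return results
```
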
4. Incident Reporting & Cybersecurity for Systemic Risk Models (Art. 55(1)(c)–(d))
  • Establish an incident monitoring and reporting process
  • Report serious incidents to the AI Office without undue delay
  • Include in reports: nature of incident, affected users, corrective measures taken
  • Implement cybersecurity measures commensurate with model scale and risk
  • Monitor for misuse, jailbreaks, and novel attack vectors post-deployment

Signing the Code

  • Creates presumption of conformity with Chapter V
  • Simplifies AI Office audits and inspections
  • Signals good faith to regulators
  • Provides a structured implementation framework
  • Gives access to AI Office guidance and working groups

Not Signing

  • Must demonstrate Art. 53/55 compliance independently
  • Higher burden of proof in regulatory investigations
  • No safe harbour: the CoP presumption of conformity does not apply
  • AI Office may scrutinise non-signatories more closely
  • No access to Code-based compliance frameworks

Quick Self-Assessment: Are You Prepared?

  • Do you have Annex XI technical documentation ready?
  • Is your training data summary publicly published?
  • Have you implemented a copyright opt-out compliance policy?
  • Have you conducted pre-release capability evaluations?
  • Have you documented your risk mitigation measures?
  • Is your model registered in the EU AI Office database?
  • Have you conducted adversarial testing (if systemic)?
  • Do you have an incident reporting process in place?
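
This checklist translates naturally into a small script a compliance team can keep in version control; the answers below are placeholders to fill in.

```python
# Hypothetical self-assessment; set each value honestly for your model.
checklist = {
    "Annex XI technical documentation ready": False,
    "Training data summary publicly published": False,
    "Copyright opt-out compliance policy implemented": False,
    "Pre-release capability evaluations conducted": False,
    "Risk mitigation measures documented": False,
    "Model registered in the EU AI Office database": False,
    "Adversarial testing conducted (if systemic)": False,
    "Incident reporting process in place": False,
}

gaps = [item for item, done in checklist.items() if not done]
print(f"{len(checklist) - len(gaps)}/{len(checklist)} items complete")
for item in gaps:
    print(f"  GAP: {item}")
```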

Classify your AI system type

Use the Risk Classifier to determine whether you are a GPAI provider, AI system provider, or both.

Start Risk Classification →