
AI Liability

The liability landscape for AI systems in the EU is shaped by two key instruments: the New Product Liability Directive (2024/2853), which explicitly covers software and AI for the first time, and the now-withdrawn AI Liability Directive proposal. Understanding who is liable when an AI system causes harm is critical for any company in the AI supply chain.

Adopted: New Product Liability Directive

Dir. EU 2024/2853 — adopted November 2024

Explicitly covers software and AI systems as “products”. Reversed burden of proof in certain circumstances. Member states must transpose by December 9, 2026.

Withdrawn: AI Liability Directive

COM(2022) 496 — proposal withdrawn February 2025

The standalone AI Liability Directive proposal was withdrawn by the Commission in February 2025. Non-contractual fault-based civil liability for AI remains governed by national law for now.

Key facts

  • NPLD in force: 9 Dec 2024
  • Transposition deadline: 9 Dec 2026
  • Software covered? Yes — explicit
  • Burden of proof: Reversed (conditions apply)

NPLD Art. 4

Software and AI as “Products”

Plain English

The New Product Liability Directive explicitly includes software (including AI systems and digital services) within the definition of “product”. This closes the gap left by the 1985 Product Liability Directive, which was unclear on software. Under the NPLD, if an AI system causes damage to a person (physical injury, psychological harm, property damage, or destruction or corruption of data), the manufacturer can be held strictly liable — without the claimant needing to prove fault.

Who is liable under the NPLD?

Primary liability

  • Manufacturer of the AI system (developer/provider)
  • Authorised representative in the EU (if manufacturer is non-EU)
  • Importer if no EU manufacturer/representative

Conditional liability

  • Fulfilment service providers (warehousing/delivery)
  • Distributors (if no EU manufacturer available)
  • Online platforms (in specific circumstances)

NPLD Art. 9–10

Reversed Burden of Proof

Plain English

The NPLD introduces a significant shift: in certain circumstances, the burden of proof is reversed. The claimant no longer needs to prove that the AI system was defective — instead, defectiveness is presumed and the manufacturer must rebut it. The presumption applies where: (1) the manufacturer fails to comply with a court order to disclose relevant evidence, (2) the damage was caused by an obvious malfunction during reasonably foreseeable use, or (3) the technical or scientific complexity of the product makes it excessively difficult for the claimant to prove defectiveness or the causal link. The directive’s “disclosure of evidence” regime is what allows courts to order that evidence to be produced in the first place.

Practical implication for AI providers

If your AI system causes harm and a claim is brought, a court may order you to disclose technical documentation, logs, risk assessments, and testing records. Failure to disclose can trigger the reversed burden of proof against you. This makes maintaining comprehensive Annex IV documentation under the EU AI Act not just a regulatory obligation but also critical litigation protection.
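One practical way to be disclosure-ready is to record every model decision in a tamper-evident, append-only log. The sketch below is a minimal illustration only: the function name, record fields, and JSON-lines format are assumptions for this example, not anything mandated by the NPLD or the AI Act.

```python
import json
import hashlib
from datetime import datetime, timezone

def log_decision(path, model_version, inputs, output):
    """Append one model decision as a JSON line, hashed for tamper-evidence.

    Hypothetical helper: field names are illustrative assumptions.
    """
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "inputs": inputs,
        "output": output,
    }
    # Hash the canonicalised record so later edits to the log are detectable.
    record["record_hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
    return record

# Example: logging one (hypothetical) credit-scoring decision.
entry = log_decision(
    "decisions.jsonl",
    "credit-scorer-1.4",
    {"income": 52000, "term_months": 36},
    {"score": 0.81, "decision": "approve"},
)
```

A per-record hash makes it easier to show a court that disclosed logs were not altered after the fact; a production system would typically also chain records and store them in write-once storage.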

NPLD Art. 6

What Counts as “Damage”?

Plain English

The NPLD expands the categories of recoverable damage compared to the 1985 directive. Claimants can now recover for: death or personal injury (explicitly including medically recognised psychological harm), property damage, and destruction or corruption of data not used exclusively for professional purposes (new). The inclusion of data damage is particularly significant for AI systems, as erroneous AI outputs that corrupt data or lead to data-related losses may now trigger liability.

Covered damage types

  • Death or personal injury
  • Psychological harm (new explicit coverage)
  • Property damage
  • Destruction or corruption of data (new)

Exclusions

  • Pure economic loss (generally)
  • Data used exclusively for professional purposes

Unlike the 1985 directive, the NPLD sets no minimum damage threshold and no overall liability cap.

How EU AI Act compliance reduces NPLD liability risk

The NPLD requires claimants to prove a defect, damage, and causal link. A product is “defective” if it does not provide the safety that persons generally are entitled to expect. AI systems that comply with EU AI Act requirements (technical documentation, risk management, conformity assessment, post-market monitoring) are in a much stronger position to defend against NPLD claims — both because the safety standards have been met, and because the documentation exists to prove it.

AI Act requirements that reduce NPLD risk

  • Annex IV technical documentation → evidence of due diligence
  • Art. 9 risk management → evidence of foreseeable risk mitigation
  • Art. 15 robustness testing → evidence of safety validation
  • Art. 72 post-market monitoring → evidence of ongoing oversight
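The checklist above lends itself to automation. The sketch below is a hypothetical completeness check over a technical-file directory; the file names and section descriptions paraphrase the artefacts listed above and are assumptions for illustration, not the official Annex IV structure.

```python
from pathlib import Path

# Illustrative mapping: assumed file names -> the artefact each represents.
REQUIRED_DOCS = {
    "general_description.md": "General description of the AI system",
    "risk_management.md": "Risk management measures (AI Act Art. 9)",
    "testing_robustness.md": "Accuracy and robustness testing (Art. 15)",
    "post_market_plan.md": "Post-market monitoring plan (Art. 72)",
}

def missing_docs(tech_file_dir):
    """Return required documents absent from the technical file directory."""
    root = Path(tech_file_dir)
    return sorted(name for name in REQUIRED_DOCS if not (root / name).exists())

# Example: report gaps in a (hypothetical) "annex_iv" directory.
gaps = missing_docs("annex_iv")
for name in gaps:
    print(f"MISSING: {name} ({REQUIRED_DOCS[name]})")
```

Running a check like this in CI means documentation gaps surface during development rather than during litigation discovery.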

Risk areas to prioritise

  • Medical AI systems — highest injury liability risk
  • Employment AI — psychological harm and discrimination
  • Credit scoring / essential services — financial harm
  • Autonomous systems — property damage and injury

Protect yourself with complete documentation

Generate your Annex IV technical file — your primary defence against NPLD claims.

Generate Annex IV document →