Sector Guide

Education & Vocational Training

AI systems used in education face particularly stringent obligations because of their impact on individuals' life opportunities. The EU AI Act treats most assessment and admissions AI in education as high-risk, requiring full Annex IV technical documentation, human oversight, and, for many deployers, a fundamental rights impact assessment (FRIA).

  • Annex III category: Point 3
  • Enforcement deadline: 2 Aug 2026
  • Max penalty (high-risk obligations): €15M / 3%
  • FRIA required: Yes (Art. 27)

High-risk AI systems in education (Annex III, point 3)

Annex III, point 3 designates certain educational AI as high-risk. All Art. 9–17 obligations apply — risk management, data governance, technical documentation, transparency, human oversight, and accuracy — along with post-market monitoring.

HIGH-RISK · Annex III — 3(a)

AI systems for determining access or admission to education

Systems used to determine access to educational institutions (school admissions, university selection) or to allocate places on educational programmes. This includes AI-scored entrance tests, algorithmic admissions screening, and university ranking systems that influence admissions decisions.

Examples

  • University admissions scoring tools
  • School placement algorithms
  • Scholarship allocation systems

HIGH-RISK · Annex III — 3(b)

AI systems for student assessment and evaluation

Systems used to assess and evaluate students, including automated essay scoring, AI-driven examination tools, and systems that evaluate the learning outcomes of persons in educational contexts. Covers both formative and summative assessment AI.

Examples

  • Automated essay grading
  • AI-based proctoring with performance scoring
  • Adaptive learning assessments

HIGH-RISK · Annex III — 3(c)

AI systems for vocational guidance and career counselling

Systems that monitor students and detect behavioural patterns relevant to learning, or systems used for vocational guidance based on analysis of individual characteristics that may affect access to professional opportunities.

Examples

  • AI career counselling tools
  • Learning analytics that influence course recommendations
  • AI-based student profiling systems

What is NOT high-risk in education

Not all AI in education is high-risk. Systems that are purely tools for teachers or administrative efficiency, without directly affecting students' access to opportunities, are generally not in scope:

  • AI tools that help teachers create lesson plans or educational materials
  • Chatbots used for general student Q&A (unless they make consequential decisions)
  • Plagiarism detection tools (informing a teacher decision, not making the decision)
  • Administrative AI for timetabling, resource allocation, or school logistics
  • Translation or accessibility tools for educational content

Note: If any of these systems feed into a consequential decision about a student, they may be brought into scope.

Key obligations for educational AI providers and deployers

Art. 27 · Deployer

Fundamental Rights Impact Assessment

Educational institutions deploying high-risk AI must conduct a FRIA before deployment, covering impacts on students' rights to education, non-discrimination, privacy, and dignity. The FRIA must be updated when the system changes significantly.
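
In practice, many deployers keep the FRIA as a structured, versioned record so it can be revisited when the system changes. The sketch below is purely illustrative: the class and field names are this guide's own assumptions, loosely inspired by the elements Art. 27 asks for, not an official template from the AI Office or a national authority.

    from dataclasses import dataclass, field
    from datetime import date

    @dataclass
    class FriaRecord:
        """Illustrative structure for recording a FRIA (not an official Art. 27 template)."""
        system_name: str
        deployer: str
        intended_use: str                  # e.g. "first-pass ranking of applications"
        affected_groups: list[str]         # categories of persons likely to be affected
        rights_at_risk: list[str]          # e.g. education, non-discrimination, privacy, dignity
        risk_mitigations: list[str]        # human review, appeal route, bias monitoring, ...
        human_oversight_measures: str
        assessment_date: date
        review_triggers: list[str] = field(default_factory=lambda: [
            "substantial modification of the AI system",
            "change in deployment context or affected population",
        ])

    # Hypothetical example entry for an admissions screening tool
    fria = FriaRecord(
        system_name="AdmissionsRanker v2",
        deployer="Example University",
        intended_use="First-pass scoring of undergraduate applications",
        affected_groups=["school-leaver applicants", "mature students", "international applicants"],
        rights_at_risk=["non-discrimination", "education", "data protection"],
        risk_mitigations=["human review of all rejections", "annual bias audit"],
        human_oversight_measures="Admissions officer confirms or overrides every AI score",
        assessment_date=date(2026, 6, 1),
    )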

Art. 14 · Both

Human oversight — override capability

Decisions about students based on AI outputs must always be reviewable and overridable by a human. A teacher or administrator must be able to disregard AI recommendations. Students must have access to human review (see also GDPR Art. 22).
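
One common way to implement this is to treat the AI output as a recommendation that has no effect until a named human accepts, amends, or rejects it. The following is a minimal sketch of that pattern; the function and field names are assumptions made for illustration, not anything prescribed by Art. 14.

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class AssessmentRecommendation:
        student_id: str
        ai_score: float        # model output, e.g. a suggested grade band
        ai_rationale: str      # short explanation shown to the reviewer

    @dataclass
    class FinalDecision:
        student_id: str
        final_score: float
        decided_by: str        # always a named human reviewer, never "system"
        overrode_ai: bool
        reviewer_note: Optional[str] = None

    def finalise(rec: AssessmentRecommendation, reviewer: str, accepted: bool,
                 human_score: Optional[float] = None,
                 note: Optional[str] = None) -> FinalDecision:
        """The AI score only becomes a decision once a human signs it off.

        The reviewer may accept the recommendation or substitute their own
        score; either way the decision is attributed to the reviewer.
        """
        if not accepted and human_score is None:
            raise ValueError("An override must include the reviewer's own score")
        return FinalDecision(
            student_id=rec.student_id,
            final_score=rec.ai_score if accepted else human_score,
            decided_by=reviewer,
            overrode_ai=not accepted,
            reviewer_note=note,
        )

Recording who reviewed what, and whether they overrode the AI, also supports the GDPR Art. 22 human-review right discussed further down.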

Art. 13 · Provider

Transparency and explainability

Instructions for use must explain the AI system's purpose, accuracy, and limitations in plain language that educators can understand. This is particularly important for assessment AI, where the teachers relying on the outputs may have limited AI literacy.
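
Some providers ship this information alongside the system as machine-readable metadata that a deployer's admin interface can surface to teachers. A hedged sketch follows, using a simple dictionary format of this guide's own devising — Art. 13 prescribes what information must be provided, not how it is serialised — with hypothetical values.

    # Illustrative "instructions for use" metadata for an essay-scoring model.
    # Keys and values are examples only, not a required schema.
    instructions_for_use = {
        "intended_purpose": "Suggests a grade band for short essay answers; "
                            "a teacher always confirms the final grade.",
        "accuracy": {
            "metric": "agreement with two human markers",
            "value": "87% within one grade band on the provider's validation set",
        },
        "known_limitations": [
            "Lower accuracy on essays written by non-native speakers",
            "Not validated for students using dyslexia accommodations",
            "Not intended to decide high-stakes final examinations on its own",
        ],
        "human_oversight": "Teachers see the suggested band and the passages that "
                           "influenced it, and can replace it with their own mark.",
        "contact": "provider-support@example.com",
    }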

Art. 10 · Provider

Data governance — protected characteristics

Training data must be examined for bias with respect to protected characteristics: age, disability, gender, race and ethnicity, socioeconomic background. Particular attention to historical data that encodes systemic educational inequalities.
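
Much of this examination is disaggregated analysis: compare scores, selection rates, and error rates across the groups present in the training and evaluation data. Below is a minimal sketch in pandas, assuming a hypothetical evaluation file and column names (`ai_score`, `admitted`, plus the protected characteristics to check); the thresholds you apply to the resulting ratios are a policy decision, not something the Act specifies.

    import pandas as pd

    # Hypothetical evaluation data: one row per applicant, with the model's
    # score, the historical outcome, and protected / proxy characteristics.
    df = pd.read_csv("admissions_eval.csv")   # assumed file and columns

    PROTECTED = ["gender", "ethnicity", "disability", "socioeconomic_band"]

    for col in PROTECTED:
        summary = df.groupby(col).agg(
            n=("ai_score", "size"),
            mean_score=("ai_score", "mean"),
            selection_rate=("admitted", "mean"),
        )
        # Simple disparate-impact style check: each group's selection rate
        # relative to the best-off group.
        summary["selection_ratio"] = (
            summary["selection_rate"] / summary["selection_rate"].max()
        )
        print(f"\n=== {col} ===")
        print(summary.round(3))

Group-level summaries like this are only a starting point: as noted above, historical outcomes in the data may themselves encode the inequalities being checked for.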

Art. 22 GDPR · Deployer

Student rights — GDPR overlap

Students subject to consequential automated decisions (admissions, assessments) have rights under GDPR Art. 22: the right not to be subject to solely automated decisions, the right to explanation, and the right to human review. AI Act and GDPR obligations must be fulfilled simultaneously.

Special case: AI exam proctoring

AI-based exam proctoring tools (which monitor student behaviour via webcam, detect suspicious activity, and generate reports used in academic integrity decisions) are in a complex position. They likely qualify as high-risk under Annex III, point 3(b) — and may also engage the biometric data provisions if they analyse physical/behavioural characteristics. Several EU data protection authorities have raised concerns about the lawfulness of remote proctoring, and some national implementations are being challenged. Exercise extreme caution with proctoring AI.

This guide is for informational purposes only and is not legal advice. Verify all requirements with the relevant national supervisory authority.