EU AI Act Guide for HR & Employment AI
HR and employment AI is one of the most directly regulated areas under the EU AI Act. Annex III Category 4 explicitly covers CV screening, performance monitoring, task allocation, and redundancy planning AI. This guide explains what falls in scope, the obligations on HR technology providers and employers as deployers, the GDPR intersection, and what organisations need to do before August 2026.
Why HR is high-exposure under the EU AI Act
Annex III Category 4 — “Employment, workers management and access to self-employment” — was included in the EU AI Act because AI hiring and management tools can profoundly and irreversibly affect individuals' livelihoods. A candidate rejected by an AI shortlisting system may never know, and may have no practical recourse. A worker scored poorly by a productivity monitoring algorithm may face dismissal.
Employers deploying these systems are “deployers” under the AI Act regardless of whether they built the system themselves or procured it from a vendor. Art. 26 obligations, including the Art. 26(7) requirement to inform workers' representatives and affected workers before use at the workplace, apply to the deploying employer, not just the HR technology vendor. This creates compliance obligations for every business that uses AI in its HR processes.
What Falls Under Annex III Category 4
Category 4 covers four broad system types, each of which triggers the full Art. 9–17 high-risk obligation stack.
CV screening and candidate shortlisting AI
Annex III, cat. 4(a). AI systems that filter, rank, or shortlist candidates based on CVs, application forms, or other submitted materials are explicitly listed as high-risk. The system is in scope regardless of whether the final hiring decision is made by a human — the automated shortlisting step itself triggers the classification.
Typical deployers
Employers using applicant tracking systems with AI ranking, HR technology platforms offering AI-powered screening as a service
Borderline considerations
A keyword search tool that simply matches terms without ranking or scoring candidates may fall outside the AI definition under Art. 3(1). Any system that applies a model to generate a score, ranking, or recommendation is almost certainly in scope.
Interview analysis AI (scheduling-only tools are out of scope)
Annex III, cat. 4(a). AI systems that analyse video or audio of interviews to produce assessments, scores, or recommendations about candidates are high-risk. Purely administrative scheduling tools — which book calendar slots without evaluating candidates — are outside scope. The test is whether the AI's output influences the recruitment decision.
Typical deployers
HR tech vendors offering automated video interview analysis, employers using AI-generated interview scoring
Borderline considerations
Tools that transcribe interviews for human review without producing candidate assessments are likely outside scope, but any AI-generated insight about candidate suitability brings the tool into Category 4.
Performance monitoring systems that influence employment
Annex III, cat. 4(b). AI systems used to monitor employee performance — including productivity tracking, quality scoring, and behaviour monitoring — are high-risk where their outputs are used to make or inform employment decisions: pay, promotion, disciplinary action, or termination. The influence on the employment relationship is the trigger.
Typical deployers
Employers using AI productivity tools, call centre quality monitoring platforms, warehouse management systems with AI performance scoring
Borderline considerations
Systems that collect raw data without AI analysis (e.g., simple time-tracking software) are not in scope. Where an AI inference layer processes the data to generate performance insights used in decisions about workers, the high-risk classification applies.
AI that allocates tasks or monitors productivity
Annex III, cat. 4(b). AI systems that distribute or assign work tasks to workers — including algorithmic management systems common in gig economy platforms — fall under Category 4 where their outputs affect the working conditions, earnings, or access to work of natural persons. This explicitly covers platform worker management, shift allocation algorithms, and AI-driven workload assignment tools.
Typical deployers
Gig economy platforms, logistics companies using AI dispatch, retail and hospitality employers using AI shift scheduling with algorithmic optimisation
Borderline considerations
Simple rule-based rota tools that apply fixed business logic are unlikely to meet the Art. 3(1) AI-system definition, which requires that a system infers outputs from the inputs it receives. The Category 4 classification is triggered where the system uses machine learning or adaptive optimisation to generate schedules or assignments.
AI used in redundancy decision processes
Annex III, cat. 4(b). Any AI system used to identify candidates for redundancy, score workers for retention, or support workforce reduction decisions is high-risk. The potential for irreversible impact on individuals — job loss — means the EU legislator treats this as one of the most serious employment AI applications. Providers and deployers face the full obligation stack.
Typical deployers
Employers using AI-assisted workforce planning tools that generate individual-level redundancy recommendations or risk scores
Borderline considerations
Aggregate workforce planning tools that model headcount at team or department level without scoring individuals are lower risk. The threshold is crossed when the AI produces outputs about specific named or identifiable workers.
Provider Obligations for HR AI Vendors
HR technology companies that develop and place AI recruitment or workforce management products on the EU market are “providers” under Art. 3(3). Providers bear the primary technical compliance burden:
Risk management (Art. 9): Identify and mitigate risks specific to HR use cases — bias in candidate scoring, fairness across protected characteristics, performance in edge cases.
Data and data governance (Art. 10): Training data must be representative of the populations the system will assess, and bias analysis must be conducted. Underrepresentation of protected groups in training data is a known HR AI risk.
Technical documentation (Art. 11): Full Annex IV technical documentation must be compiled and kept up to date. This includes system architecture, training methodology, performance metrics, and known limitations.
Instructions for use (Art. 13): Deployers (employers) must receive instructions that enable them to understand what the system does, its limitations, and how to implement human oversight. Vague marketing copy does not satisfy this requirement.
Registration (Art. 49): Providers must register high-risk HR AI systems in the EU AI database before placing them on the market or putting them into service.
Conformity assessment (Art. 43): Most HR AI systems under Annex III Category 4 can use the internal-control route (self-assessment). CE marking and an EU Declaration of Conformity must be prepared.
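The bias analysis obligation above can be made concrete with a simple selection-rate disparity check. This is a minimal sketch, not a mandated methodology: the group labels, the data shape, and the four-fifths (0.8) threshold are illustrative assumptions — the threshold is a common heuristic borrowed from US adverse-impact practice, not an AI Act requirement.

```python
from collections import defaultdict

def selection_rates(records):
    """Compute per-group selection rates from (group, selected) pairs."""
    totals, selected = defaultdict(int), defaultdict(int)
    for group, was_selected in records:
        totals[group] += 1
        if was_selected:
            selected[group] += 1
    return {g: selected[g] / totals[g] for g in totals}

def adverse_impact_ratio(rates):
    """Ratio of the lowest to the highest group selection rate.
    Values below ~0.8 (the 'four-fifths' heuristic) warrant closer review."""
    return min(rates.values()) / max(rates.values())

# Illustrative data: group "A" selected 40/100, group "B" selected 20/100.
records = ([("A", True)] * 40 + [("A", False)] * 60
           + [("B", True)] * 20 + [("B", False)] * 80)
rates = selection_rates(records)       # {"A": 0.4, "B": 0.2}
ratio = adverse_impact_ratio(rates)    # 0.5 -> below 0.8, flag for review
```

A check like this is only one input to the Art. 10 analysis; it says nothing about proxy variables or intersectional effects, which need separate examination.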
Deployer (Employer) Obligations — Art. 26
Every employer using a high-risk HR AI system is a “deployer” under the AI Act. Art. 26 imposes the following obligations directly on the employer — even where the system was built by a vendor.
| Article | Obligation |
|---|---|
| Art. 26(1) | Use in accordance with instructions |
| Art. 26(2) | Human oversight |
| Art. 26(7) | Worker information before workplace use |
| Art. 27 | FRIA where applicable |
| Art. 26(5)–(6) | Monitoring and log retention |
| Art. 50(1) | Transparency to individuals |
Unique to employment AI: mandatory worker information and consultation
Art. 26(7) requires deployers who are employers to inform workers' representatives and the affected workers before putting a high-risk AI system into service or using it at the workplace. This is not merely a transparency notice: the information must be provided in accordance with the rules and procedures laid down in Union and national law on worker information and consultation, which in practice means engaging works councils, trade unions, or other worker representative bodies before deployment. In many member states this obligation stacks on top of existing national co-determination rights (see the cross-border section below), making early engagement with worker representatives essential.
Fundamental Rights Impact Assessment (FRIA)
Art. 27 requires certain deployers (bodies governed by public law, private entities providing public services, and deployers of certain other Annex III systems) to conduct a FRIA before first deploying a high-risk AI system. For HR AI systems affecting workers' rights, a FRIA should be conducted by all large employers as a matter of best practice, even where not strictly mandated, because the potential impact on fundamental rights is high.
What the FRIA must cover for HR AI
- Impact on the right to non-discrimination in employment (Art. 21 EU Charter)
- Impact on workers' right to data protection (Art. 8 EU Charter)
- Impact on dignity at work and the right to fair working conditions (Art. 31 EU Charter)
- Assessment of whether the AI system could perpetuate or amplify existing labour market disadvantage for protected groups
- Mitigation measures and residual risk assessment
Relationship to other assessments
- FRIA is separate from and additional to any GDPR DPIA required under Art. 35
- Elements may overlap — a single integrated assessment can satisfy both, provided it addresses each instrument's requirements explicitly
- National equality impact assessments may also be required under applicable employment discrimination law
- Document the FRIA in a way that can be provided to the market surveillance authority on request
GDPR Intersection — Employment Data
Lawful basis for employment data
Processing employee or candidate personal data for AI training or inference requires a lawful basis under GDPR Art. 6. In employment contexts, legitimate interests under Art. 6(1)(f) is available but requires a balancing test — the employer's interests in using AI must not override workers' reasonable expectations. Consent (Art. 6(1)(a)) is generally not a valid basis in employment relationships because of the power imbalance between employer and worker: consent cannot be freely given where refusal risks adverse employment consequences.
Special category data
CV screening and performance monitoring systems often process data that reveals protected characteristics — disability, ethnicity, religion, health — even where this is not the system's intent. Where special category data is processed, an Art. 9 exception must be identified. In employment, Art. 9(2)(b) (employment law obligations) is commonly relied upon, but it requires a basis in national law and appropriate safeguards. Accidental processing of special category data through proxy variables is a significant risk in AI hiring systems.
Data minimisation
GDPR Art. 5(1)(c) requires that personal data be adequate, relevant, and limited to what is necessary for the processing purpose. HR AI systems — particularly those trained on broad behavioural or productivity data — must be designed to collect and use only the minimum data required. Unnecessary data collection for model training is a common compliance gap.
GDPR Art. 22 — automated decisions
Candidate shortlisting and performance scoring decisions with significant effects on employment are 'automated decisions with significant effects' under GDPR Art. 22 where human review is not meaningful. Employers must identify a lawful basis (usually Art. 22(2)(a): contract necessity for hiring, or Art. 22(2)(b): national law authorisation), provide meaningful information about the decision logic, and enable candidates or workers to request human intervention and contest the decision.
Cross-Border Considerations
The EU AI Act sets a baseline. Several member states have existing labour law obligations that apply on top of Art. 26(7) and may require earlier, deeper engagement with worker representatives than the AI Act alone mandates.
Germany
Works council co-determination rights under the Betriebsverfassungsgesetz (BetrVG §87(1) no. 6) apply to technical monitoring of workers and AI-based performance systems. Employers must obtain works council agreement before deploying such systems. This obligation operates independently of and in addition to Art. 26(7).
France
The French Labour Code requires consultation with the Social and Economic Committee (CSE) on any significant change to working conditions, including the introduction of AI-based monitoring or management tools. CNIL guidance on employee data processing and AI adds further requirements on transparency and proportionality.
Netherlands
The Dutch Works Councils Act (WOR Art. 27) gives works councils a right of consent over monitoring systems and performance appraisal methods. The Dutch DPA (AP) has been active in enforcing GDPR rules on employment data and has issued specific guidance on AI and employee monitoring.
Spain
Royal Decree-Law 9/2021 introduced an obligation for companies using algorithmic systems to manage working conditions to inform worker representatives. Spanish courts have been active in striking down algorithmic dismissals. The 'algorithmic transparency' right for worker representatives is codified in the Workers' Statute.
Practical Action Plan
A phased compliance approach for HR technology companies (as providers) and employers (as deployers).
Phase 1 — Inventory (now)
1. Map every AI-assisted HR process: recruitment screening, interview analysis, performance monitoring, task allocation, redundancy planning
2. For each system, determine whether it falls within Annex III Category 4 — whether built in-house or procured from a vendor
3. For vendor systems, request technical documentation, instructions for use, and the provider's EU AI Act compliance position
4. Identify which workers and candidates are subject to AI-assisted decisions and in which jurisdictions
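The inventory steps above can be captured in a simple structured record. A minimal sketch, assuming a Python-based internal register: every field name and the example vendor are illustrative, not a schema required by the Act.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class HRAISystemRecord:
    """One row of an AI system inventory (illustrative fields only)."""
    name: str
    hr_process: str               # e.g. "recruitment screening", "task allocation"
    annex_iii_cat4: bool          # falls within Annex III Category 4?
    role: str                     # "provider", "deployer", or "both"
    vendor: Optional[str] = None  # None if built in-house
    jurisdictions: list = field(default_factory=list)

inventory = [
    HRAISystemRecord(
        name="ATS ranking module",
        hr_process="recruitment screening",
        annex_iii_cat4=True,
        role="deployer",
        vendor="ExampleVendor",   # hypothetical vendor name
        jurisdictions=["DE", "FR"],
    ),
]

# Phase 2 starts from the high-risk subset of the inventory.
high_risk = [s for s in inventory if s.annex_iii_cat4]
```

Keeping the register as structured data (rather than prose) makes the later phases — per-jurisdiction checks, vendor documentation requests — straightforward to filter and track.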
Phase 2 — Legal and HR assessment (Q3–Q4 2025)
1. Map obligations by role: where your organisation is a provider (built in-house), deployer (vendor product), or both
2. Assess GDPR lawful basis for each system's personal data processing — eliminate reliance on consent in employment contexts
3. Check national co-determination and works council obligations in each member state where the system is deployed
4. Commission a Fundamental Rights Impact Assessment if required; conduct a GDPR DPIA for any high-risk personal data processing
Phase 3 — Remediation (Q4 2025 – Q2 2026)
1. Implement human oversight procedures: define which HR decisions require human review, how overrides are documented, and what training reviewers need
2. Inform and consult worker representatives under Art. 26(7) and comply with applicable national co-determination obligations before deployment
3. Update candidate-facing notices, employment contracts, and workplace policies to reflect AI Act Art. 50 transparency obligations
4. Establish monitoring procedures: track demographic outcomes of AI decisions, set thresholds for performance review, and define escalation paths
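The override-documentation step above can be sketched as a minimal audit-log helper. The record structure is an illustrative assumption, not a format prescribed by the Act; the point is that every human override of an AI output leaves a timestamped, attributable trace.

```python
import datetime

def log_override(log, system, decision_id, reviewer, reason):
    """Append a human-override record to an audit trail (illustrative structure)."""
    log.append({
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "system": system,
        "decision_id": decision_id,  # hypothetical internal decision reference
        "reviewer": reviewer,
        "reason": reason,
    })

audit_log = []
log_override(
    audit_log,
    system="ATS ranking module",
    decision_id="cand-0042",
    reviewer="hr.reviewer",
    reason="AI score overridden after manual CV review",
)
```

A log like this supports both the Art. 26 log-keeping duties and later effectiveness testing of human oversight (how often reviewers actually override, and why).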
Phase 4 — Conformity and ongoing compliance (by 2 August 2026)
1. Ensure providers of in-house-built HR AI systems complete conformity assessment, technical documentation, and EU database registration
2. Verify vendor systems are registered in the EU AI database and carry valid CE marking before the enforcement deadline
3. Establish a recurring audit cycle: review AI decision outcomes for bias annually, test human oversight effectiveness, and update FRIAs as systems change
4. Train HR teams, hiring managers, and works council representatives on the system's capabilities, limitations, and their obligations under the AI Act
Case Study Reference
See how these obligations apply in a worked example involving an AI-powered recruitment platform used by a mid-size employer across multiple EU member states.
TalentBot Ltd — HR recruitment AI case study →
Related guides
Annex III Deep-Dive
Full coverage of all 8 high-risk categories including Category 4 employment AI.
Provider vs Deployer
Understand which obligations fall on HR tech vendors versus employers using those products.
GDPR Intersections
Art. 22 automated decisions, DPIAs, and simultaneous GDPR + AI Act compliance in employment.