EU AI Act Compliance Checklist for Recruiters (Free Template)

Key Takeaway

The EU AI Act classifies AI used for recruitment and selection as high-risk, with full compliance required by 2 August 2026. Non-compliance penalties reach up to €35 million or 7% of global turnover. This 20-item checklist walks through the core obligations.

The EU AI Act is no longer a future concern. It is the law. By 2 August 2026, every organisation that uses AI in hiring across the European Union must comply with strict requirements for high-risk AI systems. The penalties for non-compliance are severe: fines of up to €35 million or 7% of global annual turnover, whichever is higher.

If you use AI-powered tools for CV screening, candidate ranking, interview analysis, or any other recruiting function, your systems are classified as high-risk under Annex III, point 4 of the EU AI Act. That means you need documented processes, technical safeguards, and operational controls in place before the deadline.

This EU AI Act checklist gives you a practical, step-by-step template to get compliant. We have broken it into four phases - pre-assessment, technical compliance, documentation, and operations - with 20 actionable items and a month-by-month timeline from February to August 2026. For a deeper overview of the regulation itself, see our EU AI Act recruiting compliance guide.

Why You Need This AI Hiring Compliance Checklist

The EU AI Act (Regulation 2024/1689) entered into force on 1 August 2024. Its provisions are being phased in over three years, and the high-risk AI obligations that affect recruitment take full effect on 2 August 2026.

Here is what is at stake:

  • Fines up to €35 million or 7% of global turnover for prohibited AI practices
  • Fines up to €15 million or 3% of global turnover for breaching high-risk AI requirements
  • Reputational damage from publicised enforcement actions
  • Candidate lawsuits under the Act's individual rights provisions
  • Market access restrictions - non-compliant AI systems cannot be deployed in the EU

According to recent AI recruiting statistics, over 78% of large European employers already use some form of AI in their hiring pipeline. Yet fewer than 30% have started formal compliance programmes. According to Taleva's analysis of 200M+ European profiles, the gap between AI adoption and regulatory readiness is a ticking liability.

This checklist closes that gap. Print it, share it with your legal and HR teams, and start working through each item today. For the latest European recruiting data, see Taleva's recruiting data hub.

Phase 1: Pre-Assessment (Weeks 1–3)

Before you can comply, you need to understand what you are working with. This phase maps your current AI landscape and identifies your obligations.

1. Inventory all AI tools used in your hiring process. Create a complete register of every AI-powered system touching your recruitment workflow. Include your ATS screening features, AI sourcing tools, chatbots, video interview analysers, assessment platforms, and any internal models. For each tool, record the vendor name, version, purpose, and which stage of hiring it supports. If you need help with terminology, consult our AI recruiting glossary.
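A register like this can start as a spreadsheet, but keeping it as structured data makes it easy to filter, audit, and export later. A minimal sketch - every tool name and field value below is illustrative, not taken from the Act:

```python
from dataclasses import dataclass

@dataclass
class AITool:
    name: str          # what you call the tool internally
    vendor: str
    version: str
    purpose: str       # what the tool does in your workflow
    hiring_stage: str  # e.g. sourcing, screening, interviewing, assessment

# Illustrative entries -- replace with your actual inventory
register = [
    AITool("ATS CV screener", "ExampleVendor", "3.2", "ranks applications", "screening"),
    AITool("Interview scheduler bot", "OtherVendor", "1.0", "books interview slots", "interviewing"),
]

# Which tools touch screening -- typically high-risk under Annex III, point 4
screening_tools = [t.name for t in register if t.hiring_stage == "screening"]
print(screening_tools)
```

From here, exporting the register to CSV for your legal team is a one-liner with the standard library.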

2. Classify the risk level of each AI system. Under the EU AI Act, AI used for "the recruitment or selection of natural persons" and AI used "to make decisions affecting terms of the work-related relationship" are explicitly listed as high-risk (Annex III, point 4). Confirm which of your tools fall into this category. Note that some tools may have AI features you are not aware of - check with each vendor. Any AI that analyses emotion or biometric data during interviews may be prohibited entirely under Article 5.

3. Document data flows for each AI system. Map exactly what data enters each AI tool, where it comes from (job boards, your career site, LinkedIn, internal database), how it is processed, where outputs go, and where data is stored. Include cross-border transfers. This overlaps with your GDPR obligations but the AI Act requires additional specificity about how the AI uses this data for decision-making. See our list of GDPR-compliant AI recruiting tools for platforms that already handle data flows correctly.

4. Identify your role under the AI Act (provider vs deployer). If you built or substantially modified the AI system, you are a "provider" with the heaviest obligations. If you use a third-party tool as-is, you are a "deployer" under Article 26. Most recruiters are deployers, but if you have fine-tuned models or built custom scoring algorithms on top of vendor APIs, you may have provider obligations too. Document your classification for each system.

5. Conduct a gap analysis against AI Act requirements. For each high-risk AI system, compare your current controls against the requirements in Articles 9–15 (risk management, data governance, technical documentation, record-keeping, transparency, human oversight, accuracy/robustness). Create a spreadsheet listing each requirement, your current state, the gap, and the remediation action needed. This becomes your compliance project plan.
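The gap-analysis spreadsheet can also live as structured data, which makes it trivial to regenerate the open remediation list as controls come online. A sketch with illustrative rows (the compliance states shown are made up for the example):

```python
# Each row mirrors one Articles 9-15 requirement; states are illustrative.
gap_analysis = [
    {"article": "Art. 9",  "requirement": "risk management system", "in_place": False},
    {"article": "Art. 12", "requirement": "automatic logging",      "in_place": True},
    {"article": "Art. 14", "requirement": "human oversight",        "in_place": False},
]

# Open gaps become the backbone of your compliance project plan
open_gaps = [row for row in gap_analysis if not row["in_place"]]
for row in open_gaps:
    print(f'{row["article"]}: {row["requirement"]} -- remediation needed')
```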

Phase 2: Technical Compliance (Weeks 4–10)

With your inventory and gap analysis in hand, this phase addresses the technical safeguards the EU AI Act requires for high-risk systems.

6. Implement bias testing and fairness audits. Article 10 requires that training, validation, and testing datasets are "relevant, sufficiently representative, and to the best extent possible, free of errors and complete." For recruiting AI, this means testing for bias across protected characteristics: gender, age, ethnicity, disability, and nationality at minimum. Establish a regular testing cadence - quarterly at minimum - using statistical fairness metrics (demographic parity, equalised odds, predictive parity). Document every test, its methodology, results, and any corrective actions taken.
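The simplest of the fairness metrics above, demographic parity, can be checked in a few lines. A minimal sketch, assuming binary shortlisting outcomes and illustrative group labels; note that the 0.8 "four-fifths" threshold is borrowed from US employment-testing practice, as the AI Act itself sets no numeric cutoff:

```python
from collections import defaultdict

def selection_rates(candidates):
    """Selection rate (shortlisted / total) per demographic group."""
    totals, selected = defaultdict(int), defaultdict(int)
    for group, shortlisted in candidates:
        totals[group] += 1
        selected[group] += int(shortlisted)
    return {g: selected[g] / totals[g] for g in totals}

def demographic_parity_ratio(candidates):
    """Ratio of lowest to highest group selection rate (1.0 = perfect parity).
    Treat any cutoff (e.g. the US four-fifths rule's 0.8) as a policy choice."""
    rates = selection_rates(candidates)
    return min(rates.values()) / max(rates.values())

# Illustrative shortlisting outcomes: (group, was_shortlisted)
outcomes = [("A", True), ("A", True), ("A", False), ("A", False),
            ("B", True), ("B", False), ("B", False), ("B", False)]
print(demographic_parity_ratio(outcomes))  # 0.25 / 0.5 = 0.5 -> investigate
```

Equalised odds and predictive parity additionally condition on outcomes (who would have succeeded in the role), so they require labelled historical data that this sketch does not assume.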

7. Establish human oversight mechanisms. Article 14 mandates that high-risk AI systems are "designed and developed in such a way that they can be effectively overseen by natural persons." In practice, this means: no fully automated rejection of candidates without human review; a qualified recruiter must be able to override any AI recommendation; the human reviewer must understand the AI's output well enough to challenge it; and there must be a clear "stop button" - the ability to immediately disable the AI system. Define who your human overseers are, what training they receive, and how they exercise oversight.

8. Create transparency notices for candidates. Article 13 requires that high-risk AI systems operate with "sufficient transparency to enable deployers to interpret a system's output and use it appropriately." Article 26(7) specifically requires deployers to inform candidates that they are subject to AI-assisted decision-making. Draft clear, plain-language notices that tell candidates: which AI tools are used, what data is processed, how decisions are influenced by AI, their right to human review, and how to contest AI-driven decisions. Place these notices in your application process before any AI processing begins.

9. Verify data quality and governance controls. Article 10 sets strict data governance requirements. For each AI system, verify: training data is relevant and representative of your actual candidate population; data is checked for errors, gaps, and biases before use; appropriate data preparation and cleaning processes exist; personal data handling complies with both GDPR and AI Act requirements simultaneously; and data used for testing is separate from training data. If you rely on a vendor's model, request their data governance documentation and verify it meets these standards.

10. Enable automatic logging and audit trails. Article 12 requires that high-risk AI systems support automatic logging of events "to the extent that such logging is technically possible." For recruiting AI, this means recording: every candidate interaction with the AI system, all inputs and outputs (queries, rankings, scores, recommendations), human oversight decisions (approvals, overrides, rejections), system performance metrics, and any incidents or malfunctions. Logs must be retained for a period "appropriate in the light of the intended purpose of the high-risk AI system" - we recommend at least 24 months for recruiting decisions. Ensure logs are tamper-proof and accessible for regulatory audits.
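One common way to make logs tamper-evident is hash chaining, where each entry commits to the hash of the previous one. A minimal sketch - the event fields and retention mechanics are illustrative, not mandated by Article 12:

```python
import datetime
import hashlib
import json

def append_entry(log, event):
    """Append an event whose hash covers the previous entry's hash,
    so any later modification anywhere in the chain is detectable."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    record = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "event": event,
        "prev_hash": prev_hash,
    }
    record["hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    log.append(record)
    return log

def verify_chain(log):
    """Recompute every hash; returns False if any entry was altered."""
    prev = "0" * 64
    for record in log:
        body = {k: v for k, v in record.items() if k != "hash"}
        if body["prev_hash"] != prev:
            return False
        if hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest() != record["hash"]:
            return False
        prev = record["hash"]
    return True

log = []
append_entry(log, {"type": "ai_ranking", "candidate_id": "c-123", "score": 0.87})
append_entry(log, {"type": "human_override", "candidate_id": "c-123", "action": "advance"})
print(verify_chain(log))   # True
log[0]["event"]["score"] = 0.99  # simulate tampering...
print(verify_chain(log))   # False -- the altered entry no longer matches its hash
```

In production you would also write the chain to append-only storage; hash chaining detects tampering but does not by itself prevent deletion of the whole log.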

Phase 3: Documentation (Weeks 8–14)

The EU AI Act is heavily documentation-driven. This phase creates the paper trail that proves compliance.

11. Compile technical documentation for each AI system. Article 11 and Annex IV specify detailed technical documentation requirements. For each high-risk AI system, compile: a general description of the system and its intended purpose; detailed information about system design and development; data requirements and data governance measures; performance metrics and accuracy benchmarks; a description of the system architecture; and information about hardware and software requirements. As a deployer, you may rely on your vendor's documentation, but you must verify it exists and is complete. Request it formally and store it securely.

12. Complete a risk management assessment. Article 9 requires a continuous, iterative risk management system. Document: identified risks to health, safety, and fundamental rights of candidates; the likelihood and severity of each risk; risk mitigation measures you have implemented; residual risks and why they are acceptable; and how risks are monitored over time. For recruiting, focus on risks of discriminatory outcomes, privacy violations, lack of transparency, and wrongful candidate rejection. Review and update this assessment at least annually.

13. Conduct a Fundamental Rights Impact Assessment (FRIA). Article 27 requires deployers of high-risk AI systems to conduct an impact assessment on fundamental rights before putting the system into use. This must include: a description of your processes where AI is used; the period and frequency of AI system use; categories of candidates affected; specific risks of harm to fundamental rights; human oversight measures; steps taken if risks materialise; and governance and complaint mechanisms. Submit the results to the relevant national supervisory authority if required by your member state.

14. Update data processing records and DPIAs. Your existing GDPR records of processing activities (Article 30 GDPR) and Data Protection Impact Assessments (Article 35 GDPR) must now explicitly address AI-specific risks. Our GDPR-compliant sourcing checklist covers the GDPR baseline. Update them to include: how AI processes personal data differently from manual processes; automated decision-making logic (Article 22 GDPR alignment); data retention specific to AI logging requirements; and cross-references to your AI Act technical documentation. This is not a new exercise - it extends your existing GDPR documentation with an AI-specific layer.

15. Review and update vendor contracts. If you use third-party AI tools (as most recruiters do), your contracts must address AI Act obligations. Ensure each vendor contract includes: confirmation the AI system is CE-marked and registered in the EU database; the vendor's commitment to provide technical documentation; clear allocation of AI Act responsibilities between provider and deployer; incident reporting obligations and timelines; cooperation requirements for regulatory audits; data governance and bias testing commitments; and SLAs for updates, patches, and compliance fixes. Any vendor that cannot or will not provide these contractual assurances is a compliance risk you should replace before August.

Phase 4: Operational Compliance (Weeks 12–20)

Documentation alone is not enough. This phase puts your compliance programme into daily practice.

16. Deploy candidate notification processes. Build AI transparency into your candidate journey. At a minimum: add an AI disclosure statement to your careers page; include AI processing information in the application confirmation email; present a clear AI notice before any AI-assisted screening step; and provide an easy opt-out or human-only review request mechanism. Test this by going through your own application process as a candidate. If the AI use is not obvious and clearly explained at every relevant step, fix it.

17. Operationalise human review workflows. Define and implement the exact process by which humans oversee AI decisions. Document: at which stages a recruiter reviews AI outputs; the maximum number of AI-processed candidates a reviewer handles per day (to prevent rubber-stamping); what training reviewers receive on the AI system's limitations; how overrides are recorded and escalated; and what happens when a reviewer disagrees with the AI's recommendation. Run tabletop exercises to ensure the workflow is practical, not just theoretical.

18. Establish a candidate complaint handling process. Candidates have the right to contest AI-assisted decisions. Create a clear process: publish a dedicated channel for AI-related complaints (email, form, or portal); set response time SLAs (we recommend 15 business days maximum); define who investigates complaints and what authority they have; document how complaints are resolved and feed into bias testing; and maintain a complaint register for regulatory reporting. Train your recruitment team to recognise and correctly route AI-related complaints.

19. Set up ongoing monitoring and incident reporting. Article 26(5) requires deployers to monitor AI system operation and report serious incidents. Implement: real-time monitoring dashboards for AI system performance; automated alerts for anomalous outputs (e.g., sudden demographic skew in shortlists); a defined incident response plan specifically for AI failures; a reporting channel to your vendor and to the relevant supervisory authority; and periodic review of AI outputs against expected baselines. Any incident that poses a risk to fundamental rights must be reported immediately.
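An alert for demographic skew can start as a simple baseline comparison, assuming you know the expected group shares of your applicant pool. A sketch - the 10% absolute tolerance is an illustrative policy choice, not a figure from the Act:

```python
def skew_alert(baseline, observed, tolerance=0.10):
    """Flag groups whose observed shortlist share deviates from the
    expected baseline share by more than `tolerance` (absolute)."""
    alerts = []
    for group, expected in baseline.items():
        actual = observed.get(group, 0.0)
        if abs(actual - expected) > tolerance:
            alerts.append((group, expected, actual))
    return alerts

# Expected shares from your applicant pool vs. this week's AI shortlist
baseline = {"women": 0.45, "men": 0.55}
observed = {"women": 0.28, "men": 0.72}
for group, expected, actual in skew_alert(baseline, observed):
    print(f"ALERT: {group} expected {expected:.0%}, observed {actual:.0%}")
```

Wiring a check like this to your incident response plan turns "sudden demographic skew" from a vague worry into a concrete, logged trigger.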

20. Schedule annual compliance audits. Compliance is not a one-time project. Schedule: annual internal audits of all 19 items above; an annual refresh of your AI system inventory (new tools get added constantly); annual bias testing reviews with updated demographic data; annual training refreshers for human overseers; and a three-year cycle for external independent audits. Put these in the calendar now. Compliance drift is the biggest risk after initial implementation.

Month-by-Month Action Plan: February to August 2026

Use this timeline to pace your compliance programme. Adjust based on the size of your AI inventory and internal resources.

  • February 2026 (Pre-Assessment): AI tool inventory complete; risk classifications assigned; data flows mapped
  • March 2026 (Gap Analysis & Planning): gap analysis spreadsheet finalised; project plan approved; vendor outreach started
  • April 2026 (Technical Compliance): bias testing framework implemented; human oversight roles defined; logging enabled
  • May 2026 (Technical Compliance & Documentation): transparency notices drafted; data quality audits complete; technical docs compiled
  • June 2026 (Documentation): risk assessment complete; FRIA submitted; vendor contracts updated; DPIAs refreshed
  • July 2026 (Operational Deployment): candidate notices live; human review workflows tested; complaint process launched; monitoring active
  • August 2026 (Final Review & Go-Live): end-to-end compliance audit; remediate any gaps; annual audit schedule locked in

How Taleva Helps With EU AI Act Compliance

Taleva is built for the European recruiting market, which means EU AI Act compliance is part of the architecture - not a bolt-on afterthought.

  • Transparency by design: Every AI-generated candidate ranking on Taleva's search platform includes an explainability layer. Recruiters can see why a candidate was ranked and easily override the AI.
  • Human oversight built in: No candidate is automatically rejected. Every AI recommendation requires human confirmation before progressing or declining.
  • Automatic audit logging: All AI interactions, inputs, outputs, and human override decisions are logged and exportable for regulatory audits.
  • Bias monitoring: Taleva runs continuous fairness checks across gender, age, and nationality dimensions, with alerts for statistical anomalies.
  • GDPR + AI Act documentation: Taleva provides deployer-ready technical documentation, data processing records, and compliance attestations you can plug directly into your AI Act files.
  • EU data residency: All candidate data is processed and stored within the EU, eliminating cross-border transfer complications.

If you are evaluating your AI recruiting tools against this checklist and finding gaps, try Taleva for free and see how compliance-ready AI sourcing works in practice.

Frequently Asked Questions

Does the EU AI Act apply to companies based outside the EU?

Yes. The EU AI Act has extraterritorial reach. If the output of your AI system is "used in the Union" - meaning if you use AI to evaluate candidates located in the EU, regardless of where your company is headquartered - you must comply. This mirrors the GDPR's extraterritorial scope. US, UK, and other non-EU companies hiring in the EU are fully in scope.

What is the difference between a provider and a deployer?

A provider is the entity that develops the AI system or places it on the market. A deployer is the entity that uses the AI system under its authority. Most recruiters are deployers: you use a vendor's AI tool within your hiring process. Providers have heavier obligations (conformity assessment, CE marking, EU database registration), but deployers still must ensure human oversight, transparency, data quality, risk assessments, logging, and incident reporting. If you substantially modify a vendor's AI system, you may be reclassified as a provider.

What if our AI vendor says they handle all compliance?

They cannot - and claiming otherwise is a red flag. The EU AI Act explicitly places obligations on both providers and deployers. Even if your vendor (the provider) handles technical documentation, conformity assessment, and system-level requirements, you as the deployer remain responsible for: conducting a Fundamental Rights Impact Assessment, ensuring human oversight in practice, informing candidates about AI use, monitoring the system during operation, and reporting serious incidents. Compliance is a shared responsibility. Use this checklist to verify which obligations are genuinely covered by your vendor and which remain yours.
