EU AI Act compliance for recruiting agencies

If your recruiting agency uses AI tools — whether it's an automated CV screener, an ATS with ranking features, a chatbot that pre-qualifies candidates, or AI-generated outreach — the EU AI Act concerns you. Directly. This isn't a distant regulation that only applies to tech giants building AI from scratch. It applies to any organisation operating in the EU that deploys AI systems, including agencies that simply purchase or integrate third-party AI tools into their hiring workflow.

The good news: the regulation was designed to be workable. Most AI tools your agency uses will fall into low-risk categories with minimal obligations. But a specific set of use cases — particularly anything that scores, ranks, or filters job candidates — is classified as high-risk, and that comes with real requirements you need to understand before a compliance deadline hits.

This guide breaks down exactly what the EU AI Act means for recruiting agencies in plain language, which parts of your workflow are affected, and what concrete steps you should take now.

What Is the EU AI Act — and Why Does It Apply to You?

The EU AI Act (Regulation 2024/1689) entered into force in August 2024 and is being rolled out in phases through 2027. It establishes a risk-based framework that categorises AI systems into four tiers: unacceptable risk (banned), high risk (heavily regulated), limited risk (transparency obligations), and minimal risk (largely unregulated).

The Act applies to you if:

  1. your agency is established or operates in the EU and deploys AI systems in its work, or
  2. your agency is based outside the EU but the output of its AI systems is used within the EU

In practice, this means almost every recruiting agency operating in Europe is in scope — even if the AI tool was built by a vendor based outside the EU. As a deployer of AI (meaning you use an AI system built by someone else), you have specific obligations that are separate from the obligations on the company that built the tool.

"The EU AI Act doesn't just regulate the companies that build AI — it regulates the companies that use it. Recruiting agencies are deployers, and deployers have real responsibilities."

The Risk Tiers That Matter for Recruiting

Unacceptable Risk — Outright Banned

Certain AI applications are completely prohibited. For recruiting agencies, the most relevant banned use cases are:

  1. Emotion recognition in the workplace — AI that infers candidates' or employees' emotions, for example during video interviews
  2. Social scoring — evaluating people based on social behaviour or personal traits in ways that lead to unjustified or disproportionate treatment
  3. Biometric categorisation to infer sensitive attributes — such as race, political opinions, religious beliefs, or sexual orientation

These prohibitions have been in effect since February 2025. If any tool you use comes close to these descriptions, stop using it immediately and review your vendor contracts.

High Risk — Where Most Recruiting AI Lives

This is the category that matters most for your day-to-day operations. The EU AI Act explicitly lists "AI systems used for recruitment or selection of natural persons, in particular for advertising vacancies, screening or filtering applications, evaluating candidates" as high-risk AI.

This includes, but is not limited to:

  1. Automated CV or application screening and filtering tools
  2. ATS features that rank or score candidates
  3. AI-scored assessments, tests, or video interviews
  4. Chatbots that pre-qualify or reject candidates
  5. AI systems that target job advertisements at specific candidates

High-risk AI systems face the most significant compliance obligations. The key requirements applying to deployers (i.e. your agency) are:

  1. Human oversight — you must ensure a human is in the loop and can meaningfully override the AI's decisions
  2. Data governance — you must use the system only on data it was designed for and avoid processing special categories of data without an explicit legal basis
  3. Transparency to candidates — candidates must be informed when AI is being used in the selection process
  4. Logging and monitoring — you must keep records of how the system is being used
  5. Fundamental rights impact assessment — public bodies and certain private deployers must complete a FRIA before putting a high-risk AI system into use; for other private deployers it is strongly encouraged good practice
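As a sketch of requirement 4, a deployer's usage log can be as simple as an append-only file recording each AI-assisted decision and who reviewed it. The schema, tool names, and identifiers below are illustrative assumptions, not a format mandated by the Act:

```python
import csv
import os
import datetime

# Illustrative field set for a usage log; the Act requires records of
# use but does not prescribe this exact schema.
LOG_FIELDS = ["timestamp", "ai_system", "vacancy_id", "candidate_ref",
              "ai_recommendation", "human_reviewer", "final_decision"]

def log_ai_decision(path, **event):
    """Append one AI-assisted screening event to a CSV log."""
    write_header = not os.path.exists(path)
    event["timestamp"] = datetime.datetime.now(datetime.timezone.utc).isoformat()
    with open(path, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=LOG_FIELDS)
        if write_header:
            writer.writeheader()
        writer.writerow(event)

log_ai_decision("ai_usage_log.csv",
                ai_system="ATS ranking module v2",
                vacancy_id="VAC-117",
                candidate_ref="c-4821",          # pseudonymised reference
                ai_recommendation="shortlist",
                human_reviewer="jane.doe",
                final_decision="shortlist")
```

A pseudonymised candidate reference keeps the log useful for audits without duplicating personal data outside your GDPR-governed systems.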

Limited Risk — Chatbots and Generative AI

AI systems that interact with candidates directly — such as a chatbot that answers initial questions, schedules interviews, or sends personalised outreach — fall under limited risk if they don't make filtering or scoring decisions. The primary obligation here is transparency: candidates must know they are interacting with an AI, not a human recruiter.

If your agency uses AI-generated email sequences or automated chat for candidate outreach, you need a disclosure. This doesn't have to be intrusive — a short line at the end of a message such as "This message was prepared with the assistance of AI" is typically sufficient.
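If outreach is automated anyway, the disclosure step can be automated with it, so every outbound message carries it exactly once. The helper name and wording below are illustrative, not text prescribed by the regulation:

```python
# Hypothetical helper that appends an AI-use disclosure to outgoing
# candidate messages; the wording is an example, not prescribed text.
DISCLOSURE = "This message was prepared with the assistance of AI."

def with_disclosure(body: str) -> str:
    """Append the disclosure once, avoiding duplicates on resends."""
    if DISCLOSURE in body:
        return body
    return body + "\n\n--\n" + DISCLOSURE

msg = with_disclosure("Hi Alex, we have a role that may fit your background.")
```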

What You Must Tell Candidates: The Transparency Rules

One of the clearest obligations for recruiting agencies is the duty to inform candidates when AI plays a role in decisions about them. Under Article 50 and the high-risk AI provisions, candidates have the right to know:

  1. that an AI system is being used in the selection process
  2. what role the AI plays — for example, screening applications or ranking candidates
  3. how to reach a human and request a review of a decision the AI influenced

In practice, this means updating your candidate-facing communications: your job application forms, your privacy notice, and any automated emails you send during the selection process. The disclosure doesn't need to be lengthy, but it must be clear and accessible — buried small print won't satisfy the regulation's intent.

This also ties closely to your existing GDPR obligations. The EU AI Act does not replace GDPR; it sits on top of it. If you're already compliant with GDPR transparency requirements for automated decision-making (Article 22 GDPR), you're partway there — but the AI Act goes further and requires proactive disclosure even when decisions are only partially automated.

Human Oversight: The Non-Negotiable Requirement

For any high-risk AI system you deploy, the EU AI Act requires that a qualified human can effectively oversee, intervene, and override the system's outputs. This has practical implications for how you run your recruiting process:

  1. No candidate should be rejected solely by an automated decision — a recruiter reviews AI-generated shortlists and rejections before they take effect
  2. The recruiters doing that review must understand the tool's capabilities and limitations well enough to question its outputs
  3. There must be a clear path for candidates to request human reconsideration of an AI-influenced decision

The intent here is not to make AI unusable — it's to ensure AI is a tool that assists recruiters, not one that replaces their judgment. The good news is that well-designed AI automation for recruiting agencies already works this way: it surfaces the best candidates faster, but the recruiter makes the call.
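A minimal sketch of what "human in the loop" can mean in practice: the AI records a recommendation, but only a named recruiter's sign-off finalises the decision. All names and fields here are hypothetical:

```python
from dataclasses import dataclass
from typing import Optional

# Minimal human-in-the-loop gate: the AI proposes, a recruiter disposes.
@dataclass
class ScreeningResult:
    candidate_ref: str
    ai_score: float
    ai_recommendation: str               # e.g. "advance" or "reject"
    human_decision: Optional[str] = None
    reviewer: Optional[str] = None

def finalize(result: ScreeningResult, reviewer: str, decision: str) -> ScreeningResult:
    """Record the binding human decision, which may override the AI."""
    result.reviewer = reviewer
    result.human_decision = decision
    return result

r = ScreeningResult("c-4821", ai_score=0.31, ai_recommendation="reject")
finalize(r, reviewer="jane.doe", decision="advance")  # recruiter overrides the AI
```

Keeping both the AI's recommendation and the human decision on the same record also feeds directly into the logging obligation discussed above.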

Your EU AI Act Compliance Checklist

Here's a practical starting point for any recruiting agency reviewing its AI use today:

  1. Inventory your AI tools — list every AI-powered feature you use, from your ATS to outreach automation to any assessment platforms
  2. Classify each tool by risk tier — does it screen, rank, or score candidates? If yes, it's high-risk. If it only communicates or schedules, it's likely limited risk
  3. Review vendor documentation — high-risk AI providers have their own obligations (conformity assessments, CE marking from August 2026). Ask your vendors where they stand
  4. Update your candidate disclosures — add AI transparency language to your application forms, privacy notices, and automated email sequences
  5. Establish a human oversight process — document who reviews AI outputs and how candidates can request human reconsideration
  6. Check your data practices — confirm you have a valid legal basis for any special category data (health, disability, ethnicity) that might enter an AI system
  7. Keep records — log how and when high-risk AI systems are used; this log is required and may be audited
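Steps 1 and 2 of the checklist can be captured in something as simple as a structured inventory with a classification rule. The tool names and capability flags below are illustrative assumptions, not a definitive taxonomy:

```python
# Illustrative AI-tool inventory (step 1) with a simple risk-tier
# rule (step 2). Entries and flags are hypothetical examples.
TOOLS = [
    {"name": "ATS ranking module",      "scores_candidates": True,  "interacts_with_candidates": False},
    {"name": "Scheduling chatbot",      "scores_candidates": False, "interacts_with_candidates": True},
    {"name": "Outreach email generator","scores_candidates": False, "interacts_with_candidates": True},
]

def risk_tier(tool):
    # Anything that screens, ranks, or scores candidates is high-risk
    # under Annex III; purely conversational tools are limited-risk.
    if tool["scores_candidates"]:
        return "high"
    if tool["interacts_with_candidates"]:
        return "limited"
    return "minimal"

for tool in TOOLS:
    print(f'{tool["name"]}: {risk_tier(tool)} risk')
```

A spreadsheet works just as well; what matters is that every AI-powered feature is listed and assigned a tier, with the reasoning written down.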

Key Deadlines to Know

The EU AI Act is being phased in over several years. Here's what's already in effect and what's coming:

  1. February 2025 — prohibitions on unacceptable-risk AI and AI literacy obligations (already in effect)
  2. August 2025 — governance rules and obligations for general-purpose AI models (already in effect)
  3. August 2026 — the bulk of the Act, including the high-risk rules covering recruitment AI
  4. August 2027 — extended transition for high-risk AI embedded in products covered by other EU product legislation

August 2026 is your primary compliance target. That sounds like plenty of time, but updating contracts, revising candidate communications, implementing oversight processes, and training your team all take longer than you'd expect. Starting now gives you room to do it properly.

The Bottom Line for Recruiting Agencies

The EU AI Act is not a reason to stop using AI in your recruiting process. It is a reason to use it responsibly and transparently. Agencies that get this right will have a genuine competitive advantage: candidates increasingly trust agencies that are upfront about how AI is used, and clients increasingly ask about AI governance in supplier audits.

The core message of the regulation is straightforward: know what AI you're using, tell candidates about it, keep a human in the loop, and document what you do. None of that conflicts with running an efficient, AI-powered recruiting operation. In fact, well-implemented AI automations are already built around these principles — they make your recruiters faster without removing their judgment from the process.

If you're unsure where to start, begin with your AI inventory. You can't comply with a law covering tools you don't know you're using.

Ready to use AI compliantly in your recruiting workflow?

We build custom AI automations for recruiting agencies that are designed with human oversight and transparency from day one — so you get the efficiency gains without the compliance headaches. Book a free 30-minute strategy call to map out your options.

Book Your Free Strategy Call →