Whitepaper on the EU AI Act

Implementing AI in HR Correctly: Checklist, Human-in-the-Loop & Vendor Tips

The Most Important Takeaways at a Glance
- The “click-to-approve” trap is the most common compliance mistake when using AI in HR
- Human-in-the-loop means real, substantive review - not formal confirmation
- Applicants must be actively and clearly informed about the use of AI
- As an employer, you remain responsible - even if an external provider supplies the tool
- Five concrete steps can make recruiting AI compliance-ready
Introduction
You already know this: AI in recruiting operates within a strict legal framework. GDPR Article 22 and the EU AI Act set clear boundaries - you can find everything about this in Part 1 of this guide.
The real question is: How can you still use AI in HR efficiently, legally, and with peace of mind? This article provides the answer with concrete processes, a checklist, and practical tips for working with AI HR vendors.
The 4 Most Common Mistakes When Using AI in HR
Before we move on to solutions, let’s look at the mistakes we see most often in practice.
Mistake 1 - The “Click-to-Approve” Trap
HR employees formally confirm AI decisions without actually reviewing them in substance. What looks like human oversight is not real oversight.
This mistake is by far the most common - and the most dangerous, because it often goes unnoticed.
Mistake 2 - Lack of Transparency
Applicants receive vague statements such as “modern technology” instead of clear information about which AI HR tool is being used and why.
That is not sufficient to fulfill transparency obligations.
Mistake 3 - Vendor Blindness
Companies rely on vendor assurances without verifying them independently.
The result: no Data Processing Agreement (DPA), unclear data locations, no bias testing. Yet liability still remains with you.
Mistake 4 - Missing Risk Assessment
There is no systematic analysis of the discrimination risks posed by the specific AI HR system.
A Data Protection Impact Assessment (DPIA) is mandatory for high-risk processing - and is still regularly skipped.
Human-in-the-Loop: What Real Human Oversight Means
The central compliance mechanism when using recruiting AI is the human-in-the-loop principle (HitL). But most companies misunderstand it.
The 5 Requirements for Effective Review
Subject-Matter Expertise:
The reviewing person must be able to assess the AI HR recommendation in substance. Anyone without recruiting expertise cannot properly evaluate an AI recommendation.
Full Access to Data:
The reviewer must be able to see the full application - not just the AI summary. Anyone who only sees the ranking generated by the AI HR tool is not actually reviewing anything.
Real Decision-Making Authority:
The reviewer must be able and allowed to deviate from the AI recommendation - without pressure to justify doing so. If the system is designed in a way that makes deviations difficult, that is not human-in-the-loop. It is a facade.
Sufficient Time:
Clicking through 200 applications in 20 minutes is not a review. Human oversight requires time and capacity - and this must be considered during process planning.
Understanding of the AI Logic:
The reviewer should at least broadly understand the criteria used by the system. Only then can they recognize where the AI may be wrong.
Important: A “four-eyes principle,” where a second person simply confirms the AI decision again, is not enough. A genuine substantive review of the original application is required.
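The requirements above can be made auditable with a simple review log. Here is a minimal sketch in Python; the field names, the `MIN_REVIEW_SECONDS` threshold, and the `HitlReview` class are illustrative assumptions for an internal policy, not values prescribed by the GDPR or the AI Act:

```python
# Illustrative human-in-the-loop review record. All names and the time
# threshold are assumptions for this sketch, not legal requirements.
from dataclasses import dataclass
from datetime import datetime

MIN_REVIEW_SECONDS = 120  # assumed internal policy value


@dataclass
class HitlReview:
    application_id: str
    reviewer: str
    ai_recommendation: str       # e.g. "reject", "advance"
    human_decision: str          # may deviate from the AI recommendation
    full_application_viewed: bool
    review_seconds: int
    reviewed_at: datetime

    def is_substantive(self) -> bool:
        """Flags 'click-to-approve' reviews: done too quickly, or done
        without access to the complete application."""
        return (self.full_application_viewed
                and self.review_seconds >= MIN_REVIEW_SECONDS)


review = HitlReview(
    application_id="A-1042",
    reviewer="hr.lead@example.com",
    ai_recommendation="reject",
    human_decision="advance",    # deviation is possible and documented
    full_application_viewed=True,
    review_seconds=540,
    reviewed_at=datetime(2025, 1, 15, 10, 30),
)
print(review.is_substantive())  # True
```

A record like this also produces the documentation trail that Step 5 of the implementation checklist calls for.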
Transparency Obligations: What Applicants Need to Know
When AI is used in human resources, job applicants have extensive rights to information. Ignoring these rights is not only legally risky - it also signals a poor corporate culture.
You must communicate the following:
- That AI is used in the recruitment process, and at which stage
- What it is used for (pre-screening, skill matching, interview analysis)
- What data is processed
- How AI recommendations influence the decision
- That a human review takes place
- What rights applicants have (right to information, right to object, right to a human review)
Where should you communicate this? The most effective combination: a note in the job posting, detailed information in the Privacy Policy for Applicants, and a specific notice before each AI-supported step in the process. Clear language is a must - legal jargon won’t protect you if applicants couldn’t really understand the information.
Implementing Recruiting AI in a Compliant Way in 5 Steps
Step 1: Clarify the Legal Basis
- Define precisely what you want to use the AI HR tool for
- Verify the legal basis (typically Art. 6(1)(b) or (f) GDPR)
- Conduct a DPIA - mandatory for high-risk processing
Step 2: Vendor Due Diligence
- Request evidence of GDPR and AI Act compliance
- Verify whether the tool meets high-risk requirements
- Sign a Data Processing Agreement (DPA) - this is mandatory
Step 3: Establish Human-in-the-Loop
- Document human review processes in writing
- Train the HR team on using the AI HR system
- Allocate enough time for careful review
Step 4: Ensure Transparency
- Actively inform applicants about the use of recruiting AI
- Update privacy policies and job postings
- Prepare processes for handling access requests
Step 5: Document and Monitor
- Document all AI HR decisions and human reviews
- Conduct regular risk reviews
- Test AI systems for bias and adapt processes when laws change
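One common heuristic for the bias testing in Step 5 is the "four-fifths rule" (originally from US EEOC guidance, not an EU AI Act requirement): the selection rate of the least-favored group should be at least 80% of the rate of the most-favored group. A minimal sketch with illustrative numbers:

```python
# Minimal "four-fifths rule" check on AI screening outcomes.
# Group labels and counts are illustrative, not real data.

def selection_rate(selected: int, applicants: int) -> float:
    """Share of a group's applicants who passed the AI screening step."""
    return selected / applicants


def adverse_impact_ratio(rates: dict[str, float]) -> float:
    """Lowest group selection rate divided by the highest.
    Values below 0.8 are a common red flag (four-fifths rule)."""
    return min(rates.values()) / max(rates.values())


outcomes = {  # group -> (selected, applicants); illustrative numbers
    "group_a": (45, 100),
    "group_b": (30, 100),
}
rates = {g: selection_rate(s, n) for g, (s, n) in outcomes.items()}
ratio = adverse_impact_ratio(rates)
print(f"Adverse impact ratio: {ratio:.2f}")  # 0.30 / 0.45 -> 0.67: investigate
```

A ratio below 0.8 does not prove discrimination on its own, but it is a signal to investigate the system's criteria and document the outcome - exactly the kind of regular risk review Step 5 describes.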
Want to know whether your AI HR process is already compliant today? In a free demo, we’ll show you where your company currently stands - and what is still missing.
Vendor Management: Your Responsibility Does Not End with the Contract
A common misconception: Once you’ve signed a DPA, the matter is settled. It isn’t.
As an employer, you remain the GDPR controller for applicant data - regardless of who provides the tool. The provider is the data processor. Any violations primarily affect you.
Here’s what you should consistently request from and verify with the provider: up-to-date proof of GDPR compliance, information on AI logic and bias testing, details on subcontractors and data locations, and clear processes for data subject rights. Providers who refuse or delay providing this information pose a risk - regardless of how well the tool otherwise works.
Conclusion
AI in HR and recruiting offers real potential for efficiency gains. When implemented correctly, AI HR tools can even reduce discrimination by minimizing unconscious human biases. But only if their use is legally compliant.
The good news: With the five steps in this guide, you have a solid foundation. Genuine human oversight, clear transparency, and careful vendor management - these are the building blocks that turn recruiting AI from a risk into an advantage.
Compliance is not a one-time project. Laws change, and AI systems continue to evolve. Those who view this as an ongoing process will be on the safe side in the long run.
Don’t want to set this up on your own? With heyData, you get structured support - from the Data Protection Impact Assessment (DPIA) and DPA management to ongoing compliance monitoring.
FAQ
What exactly does “human-in-the-loop” mean in the context of AI in recruiting?
Human-in-the-Loop (HitL) means that a human reviews the content of every AI decision and has the final say. This requirement is not met if HR staff merely formally approve AI recommendations. The reviewer must have seen the complete application, understand the AI logic at least in broad terms, and be able to deviate from the AI’s recommendations without feeling pressured to justify their decision.
Is a note in the legal notice regarding the use of AI sufficient?
No. The information requirements must be met where the data is collected - typically in the Privacy Policy for Applicants, which must be explicitly referenced when the application is submitted. A hidden note in the legal notice is not sufficient.
Do I need a DPIA for every AI HR tool?
Not automatically - but for all tools that perform a comprehensive, systematic evaluation of personal aspects, as AI-based applicant assessments typically do. A Data Protection Impact Assessment (DPIA) is then mandatory. When in doubt: better to have one too many than one too few.
Important: The content of this article is for informational purposes only and does not constitute legal advice. The information provided here is no substitute for personalized legal advice from a data protection officer or an attorney. We do not guarantee that the information provided is up to date, complete, or accurate. Any actions taken on the basis of the information contained in this article are at your own risk. We recommend that you always consult a data protection officer or an attorney with any legal questions or problems.


