Whitepaper on the EU AI Act

AI in Recruiting: Between Innovation and Compliance – What You Need to Know

Key Takeaways at a Glance
- Fully automated hiring decisions made by AI violate GDPR Art. 22 in most cases
- AI in recruiting often falls into the high-risk category under the EU AI Act
- As an employer, you are the GDPR controller – not the tool provider
- There are clear warning signs that help you identify unreliable AI HR tools immediately
Introduction
Imagine this: Six months ago, you introduced an AI HR tool for your recruitment processes. Candidate preselection runs automatically, and your team saves time. Then an email arrives from the data protection authority. A rejected applicant wants to know the logic behind the AI’s evaluation of their application – and whether a human was involved at all.
For more and more HR managers in SMEs, this is no longer fiction.
AI in HR is one of the most legally sensitive application areas of all. Many companies use AI HR tools without realizing that they are entering a minefield of GDPR requirements and EU AI Act obligations. This article shows you where the legal boundaries lie – and how to identify problematic tools before they become a problem.
Table of Contents:
- Why AI in HR Is Especially Regulated
- GDPR Article 22: The Ban on Automated Individual Decisions
- EU AI Act: When Recruiting AI Becomes a High-Risk Application
- When You Should Immediately Question an AI HR Tool
- Conclusion and Next Step
- FAQ
Why AI in HR Is Especially Regulated
AI in human resources differs fundamentally from other AI applications. These systems influence decisions that directly affect people’s life opportunities, career paths, and financial livelihoods.
That is precisely why lawmakers have introduced particularly high barriers here – higher than for AI in marketing or logistics. Four reasons are crucial:
- Sensitive data: Recruiting often involves processing highly sensitive information – from age and gender to ethnic background, for example through photos or names on resumes.
- Reproduction of bias: AI HR systems inherit existing biases from historical hiring data. Amazon’s internal recruiting tool was proven to disadvantage women – and it was not an isolated case.
- Structural imbalance of power: Applicants can rarely challenge AI decisions and may lose a job opportunity without ever understanding why.
- Constitutional protection: Occupational freedom and protection against discrimination enjoy constitutional rank – placing clear limits on the use of automated systems.
GDPR Article 22: The Ban on Automated Individual Decisions
The core of GDPR requirements for AI in recruiting is Article 22 GDPR. It generally prohibits automated individual decisions that have legal effects on a person or significantly affect them.
What This Means in Practice
A purely automated decision exists when an AI HR system analyzes applications and independently decides who gets rejected – without a human substantively reviewing the recommendation.
Important: Not being invited to a job interview qualifies as a significant impact. Applicants lose a job opportunity – and that alone is enough to trigger Article 22.
The Exceptions Rarely Help
Article 22 GDPR contains exceptions – but they typically do not apply in recruiting:
- Consent is rarely usable because applicants are under pressure, making voluntary consent legally questionable.
- Contract necessity rarely applies, because fully automated preselection is hardly ever strictly necessary for entering into an employment contract.
- There is currently no explicit legal authorization for AI recruiting systems.
The consequence: Fully automated hiring decisions without genuine human review are unlawful in most cases. Simply clicking “OK” on AI-generated recommendations is not enough.
EU AI Act: When Recruiting AI Becomes a High-Risk Application
The EU AI Act further increases the requirements for AI in HR. It categorizes AI systems according to risk classes – and a large share of available recruiting tools falls into the high-risk category.
When Is an AI HR Tool Considered High-Risk?
AI systems are considered high-risk when they are used for recruitment procedures – especially for preselection, evaluation, or hiring decisions – as well as for promotion decisions or employee performance monitoring.
This includes CV parsing systems with intelligent scoring as well as interview analysis tools that evaluate facial expressions or speech patterns.
What This Means for You
High-risk AI systems are subject to strict obligations across six areas:
| Area | Requirement |
|------|-------------|
| Risk Management | Systematic identification and minimization of risks |
| Data Quality | Training with representative, error-free datasets |
| Documentation | Comprehensive technical documentation |
| Transparency | Clear information for affected individuals |
| Human Oversight | Genuine human oversight mechanisms |
| Accuracy | Demonstrable system performance |
These obligations primarily target providers – but as a deployer (the AI Act’s term for the organization using the system), you must ensure that the tool complies with these requirements and is used as intended.
When You Should Immediately Question an AI HR Tool
Before purchasing or renewing an AI recruiting tool, check for these five warning signs:
1. No Explanation of the AI Logic
The provider cannot explain how recommendations are generated. If you do not understand the logic, you cannot explain it to applicants either.
2. No Bias Testing
There is no information about how the system is tested for discrimination. In AI HR systems, this is a dealbreaker.
3. No EU Data Hosting
The provider is based outside the EU and cannot provide sufficient data protection guarantees – no DPA, no Standard Contractual Clauses.
4. Promises of Full Automation
“Process applicants without HR effort” may sound attractive – but under current law, it is a clear warning sign.
5. Compliance Responsibility Shifted to the Customer
The provider places the entire compliance burden on you without offering any evidence or documentation themselves.
If you encounter one or more of these issues: Get written clarification – or choose another tool.
Conclusion and Next Step
AI in recruiting is legally complex – but manageable if you understand the fundamentals. GDPR Article 22 and the EU AI Act define the framework. Your task as an employer is to operate within that framework: with genuine human review processes, transparent communication with applicants, and careful selection of your AI HR tools.
The second part of this guide explains how to implement this in practice – including a step-by-step checklist, human-in-the-loop processes, and vendor management tips.
FAQ
Can I use ChatGPT to summarize applicant resumes?
Only with extreme caution. If you enter personal data into the standard version of ChatGPT, the data leaves the protected GDPR environment – it may be processed in the US and could contribute to model training. Only use enterprise solutions with isolated environments and a signed Data Processing Agreement (DPA).
What is the difference between AI in recruiting and AI in HR?
AI in recruiting refers specifically to the use of AI HR tools in talent acquisition – from job postings to hiring decisions. AI in HR is the broader term and also includes performance evaluations, training recommendations, or employee attrition predictions. Both areas are subject to the same strict compliance requirements under the GDPR and the EU AI Act.
Does the EU AI Act also apply to software we have been using for a long time?
Yes. There are transition periods, but in the long term, existing high-risk systems must also comply with the requirements. Especially when software receives significant updates, the obligations under the AI Act apply. Ask your vendor for a compliance roadmap now – not shortly before the deadlines expire.
How can heyData support you?
AI in recruiting is legally complex – but you do not have to navigate it alone. heyData supports you in three specific areas:
- Tool assessment: We review whether your planned or existing AI HR tool meets the requirements of the GDPR and the EU AI Act – and identify where action is needed before regulators do.
- Data Processing Agreements (DPA): Every AI tool processing personal data requires a legally compliant DPA. We help you set it up correctly or review existing agreements for gaps.
- EU AI Act compliance: If your tool falls into the high-risk category, we guide you through implementing the required legal obligations.
Important: The content of this article is for informational purposes only and does not constitute legal advice. The information provided here is no substitute for personalized legal advice from a data protection officer or an attorney. We do not guarantee that the information provided is up to date, complete, or accurate. Any actions taken on the basis of the information contained in this article are at your own risk. We recommend that you always consult a data protection officer or an attorney with any legal questions or problems.


