Your Recruiters Are Using ChatGPT for CVs.
Here's What EU Law Now Requires.
Most HR teams in Europe are already using AI for candidate screening. Almost none of them know that doing it without safeguards became a legal violation on August 2, 2026.
"A recruiter opens ChatGPT, pastes a candidate's full CV, and types: "Is this person a good fit for the role?" The LLM responds with a score and a summary. The recruiter moves on. No audit trail. No bias check. No human sign-off. As of August 2, 2026, this is a compliance violation with potential fines of €35 million or 7% of global revenue."
What Your HR Team Is Doing Right Now
These are the specific behaviors that trigger exposure under the EU AI Act's Annex III and the GDPR.
CV uploads to public LLMs
Candidates' full CVs, containing names, addresses, education, and employment history, are being pasted directly into ChatGPT, Gemini, or Claude. That personal data leaves your environment and is processed on servers you do not control.
AI-generated shortlisting
Recruiters are asking LLMs to rank or filter candidates. When an AI ranking effectively decides who gets interviewed, with no meaningful human review, it falls under GDPR Article 22: the right not to be subject to a decision based solely on automated processing.
No audit trail exists
EU AI Act Article 12 requires high-risk AI systems to automatically record events so that decisions can be traced. A ChatGPT conversation is not that record: there is no structured, immutable evidence trail to produce if a candidate alleges unlawful discrimination.
No bias documentation
High-risk AI systems must demonstrate they do not systematically discriminate by gender, age, or nationality. Off-the-shelf LLMs have no built-in bias audit. Using them for hiring with no documentation is non-compliant by default.
The Specific Legal Exposure
Three rules, all triggered at once.
Your AI is classified as "High-Risk"
Any AI used in CV screening, shortlisting, or performance monitoring of employees is explicitly classified as a High-Risk AI System under Annex III Category 4. This classification triggers a mandatory compliance framework — not optional guidelines.
Candidates have the right to a human decision
GDPR Article 22 gives candidates the right not to be subject to a decision based solely on automated processing that significantly affects them, and a hiring decision qualifies. Unless there is explicit consent or another narrow legal basis, a human decision-maker must hold final authority. Using an LLM shortlist as the primary filter without a documented human override violates this directly.
You must have an audit trail
Every action taken by a high-risk AI system must be automatically logged. Since August 2, 2026, absence of this logging is itself a violation — independent of whether discrimination occurred.
What a Compliant HR AI Process Looks Like
You don't have to ban AI. You have to govern it.
Classify your AI tools
If an AI tool is used to screen, rank, filter, or otherwise evaluate candidates, treat it as Annex III Category 4 by default. Start with a complete inventory of every AI tool your HR team actually uses, including the unofficial ones.
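The inventory can start as a simple structured register. The sketch below is a minimal illustration, not a SupraWall feature or a legal template; every field name here is hypothetical.

```python
# Minimal sketch of an HR AI tool inventory. Field names are illustrative only.
from dataclasses import dataclass

@dataclass
class HrAiTool:
    name: str                    # e.g. "ChatGPT (web)"
    vendor: str
    used_for: str                # "CV screening", "shortlisting", "scheduling"
    touches_candidate_data: bool
    annex_iii_category_4: bool   # True if it screens, ranks, or evaluates candidates
    dpia_completed: bool

inventory = [
    HrAiTool("ChatGPT (web)", "OpenAI", "CV screening", True, True, False),
    HrAiTool("Calendar assistant", "Internal", "Interview scheduling", False, False, False),
]

# Anything classified as Annex III Category 4 without a DPIA is your immediate exposure.
gaps = [t.name for t in inventory if t.annex_iii_category_4 and not t.dpia_completed]
print(gaps)  # ['ChatGPT (web)']
```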
Conduct a DPIA
A Data Protection Impact Assessment is legally required under GDPR Article 35 before deploying any high-risk AI on candidate data. It documents the risks to candidates and the mitigations you have put in place.
Enforce audit trails
Every AI-assisted decision must be logged. The log must be immutable. SupraWall's SDK intercepts tool calls at the runtime boundary and writes signed audit entries automatically.
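To make the idea concrete: each AI-assisted screening step becomes an append-only entry that is signed and chained to the previous one, so later tampering is detectable. The sketch below is illustrative only; it is not SupraWall's SDK, and every name and key-handling detail in it is hypothetical.

```python
# Minimal sketch of a tamper-evident audit log for AI-assisted screening decisions.
# Illustrative only: not SupraWall's SDK; key management is deliberately simplified.
import hashlib, hmac, json
from datetime import datetime, timezone

SIGNING_KEY = b"replace-with-a-managed-secret"  # in practice, a key held in a KMS/HSM

def append_entry(log: list, event: dict) -> dict:
    """Append a signed entry chained to the previous one; silent edits break the chain."""
    prev_signature = log[-1]["signature"] if log else ""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "event": event,                      # what the AI saw, what it said, who acted
        "prev_signature": prev_signature,
    }
    payload = json.dumps(entry, sort_keys=True).encode()
    entry["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    log.append(entry)
    return entry

audit_log: list = []
append_entry(audit_log, {
    "tool": "LLM screening assistant",
    "action": "rank_candidates",
    "candidate_ref": "pseudonymous-id-4821",  # store references, never full CVs
    "human_reviewer": None,                   # filled in once a human signs off
})
```

The chaining is the point: a log that can be quietly rewritten after a discrimination claim is not an audit trail.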
Gate final decisions with human sign-off
AI can assist. Humans must decide. SupraWall's Human-in-the-Loop (HITL) queue routes all candidate ranking outputs to a nominated HR manager before any action proceeds.
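Conceptually, the gate is simple: the AI's output goes into a queue, and nothing moves downstream until a nominated reviewer records an explicit decision. The sketch below illustrates that logic with hypothetical names; it is not SupraWall's actual HITL API.

```python
# Minimal sketch of a human-in-the-loop gate for AI candidate rankings.
# Illustrative only: names and structure are hypothetical.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class PendingDecision:
    ranking: list                  # AI-proposed shortlist (pseudonymous IDs)
    assigned_reviewer: str         # the nominated HR manager
    approved: Optional[bool] = None
    reviewer_note: str = ""

@dataclass
class HitlQueue:
    pending: list = field(default_factory=list)

    def submit(self, ranking: list, reviewer: str) -> PendingDecision:
        item = PendingDecision(ranking=ranking, assigned_reviewer=reviewer)
        self.pending.append(item)
        return item

    def release(self, item: PendingDecision, approved: bool, note: str) -> list:
        # Nothing leaves the queue without an explicit, recorded human decision.
        item.approved, item.reviewer_note = approved, note
        return item.ranking if approved else []

queue = HitlQueue()
proposal = queue.submit(["cand-4821", "cand-0133"], reviewer="hr.manager@example.com")
shortlist = queue.release(proposal, approved=True, note="Reviewed against role criteria.")
```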
Activate the HR Compliance Template
SupraWall's pre-configured HR template implements all four steps above in one import. It is specifically calibrated for Annex III Category 4 + GDPR Article 22 requirements.
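To give a feel for what such a template declares, here is a hypothetical sketch of the controls it might switch on. The structure and keys are illustrative, not SupraWall's actual template format.

```python
# Hypothetical sketch of an HR compliance policy template; keys are illustrative only.
hr_compliance_template = {
    "classification": {
        "annex_iii_category_4": True,              # treat HR screening AI as high-risk
    },
    "data_protection": {
        "dpia_required": True,
        "block_raw_cv_upload_to_public_llms": True,
        "pseudonymise_candidate_data": True,
    },
    "audit": {
        "log_every_ai_assisted_decision": True,
        "signed_immutable_entries": True,
    },
    "human_oversight": {
        "hitl_gate_on": ["candidate_ranking", "shortlisting"],
        "nominated_reviewer_role": "HR manager",
        "gdpr_article_22_notice_to_candidates": True,
    },
}
```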
Two Ways to Solve This
Implement it yourself, or talk to an expert first.
Business Path (C-Suite)
For CHROs and HR Directors. 30-minute assessment.
Technical Path (Developers)
One import. Annex III + GDPR Article 22 configured.