Your Company Is Using AI.
Your Board Will Ask About Compliance in 90 Days.
EU AI Act fines reach €35 million or 7% of global annual revenue, whichever is higher. Boards are already asking the compliance question. The only question is whether you have an answer.
"We've seen the EU AI Act headlines. We use AI across HR, finance, and operations. Are we compliant? If not, what's the exposure? Who is accountable?" This question is now appearing on audit committee agendas across Europe. The CEOs who can answer it are the ones who started six months ago.
The Five Questions Your Board Will Ask
Prepare your answers before they ask.
Q: Are we EU AI Act compliant?
A: This requires knowing which AI tools you use, which are High-Risk under Annex III, and whether each one has: a DPIA, an audit trail, a human oversight mechanism, and documented risk management.
Q: What is our maximum fine exposure?
A: €35 million or 7% of global revenue — whichever is higher — for prohibited AI use. €15 million or 3% for other violations. Both can apply simultaneously with GDPR fines.
Q: Who is personally accountable?
A: The DPO is accountable for GDPR. The EU AI Act creates a "provider" and "deployer" liability structure. If you deploy a third-party AI tool for a High-Risk use case, you are responsible for compliance — not the AI vendor.
Q: Do we have an AI risk register?
A: EU AI Act Article 9 requires a documented risk management system for the full lifecycle of any High-Risk AI. Without a register, you cannot demonstrate compliance.
Q: What happens if an employee uses AI incorrectly?
A: The deployer organization — your company — is liable for how its employees use AI in professional contexts. Informal ChatGPT use by HR or clinical staff is your compliance exposure, not the employee's.
Department-by-Department Risk Map
Your exposure by function.
HR: CV screening, performance monitoring, hiring AI
Finance: Credit decisions, fraud detection, trading AI
Healthcare: Clinical AI, triage, patient benefits
Legal: Document AI, LLM research, discovery AI
Operations: Logistics AI, infrastructure management AI
The 30-Day Executive Action Plan
Concrete steps. Measurable outputs.
AI Inventory
Identify every AI tool in use across all departments. Include informal ChatGPT usage. This is your baseline.
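A baseline inventory can start as a structured list your engineering team keeps in code or a spreadsheet. A minimal Python sketch, where the tool names, departments, and the `sanctioned` flag are illustrative assumptions, not a prescribed schema:

```python
from dataclasses import dataclass

@dataclass
class AITool:
    name: str          # tool or model name
    department: str    # owning function
    use_case: str      # what it is actually used for
    sanctioned: bool   # formally approved, or shadow IT

# Hypothetical baseline entries -- replace with your own survey results.
inventory = [
    AITool("CV screening model", "HR", "candidate ranking", True),
    AITool("ChatGPT", "HR", "drafting rejection letters", False),
    AITool("Fraud detection engine", "Finance", "transaction scoring", True),
]

# Informal usage still counts toward your exposure.
shadow_it = [t.name for t in inventory if not t.sanctioned]
print(shadow_it)  # → ['ChatGPT']
```

Even this simple baseline makes shadow AI usage visible and gives the next two steps something concrete to classify.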
Risk Classification
For each tool, determine whether it triggers Annex III. Use the EU AI Act's Annex III category list as your classification framework.
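A first-pass triage can map use-case tags to Annex III areas before legal review. A simplified Python sketch (the tag names are assumptions; the final classification of each system must be made by counsel, not by a lookup table):

```python
# Illustrative mapping of use-case tags to EU AI Act Annex III areas.
ANNEX_III_AREAS = {
    "employment": "Annex III(4): employment and worker management",
    "credit": "Annex III(5): access to essential services, creditworthiness",
    "biometric": "Annex III(1): biometric identification",
}

def classify(use_case_tags):
    """Return the Annex III areas a tool's use cases may fall under."""
    return [ANNEX_III_AREAS[t] for t in use_case_tags if t in ANNEX_III_AREAS]

# A CV-screening tool flags the employment area; marketing does not match.
print(classify(["employment", "marketing"]))
```

Any tool that matches at least one area goes on the High-Risk shortlist for the DPIA step.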
DPIA for High-Risk Tools
Commission a Data Protection Impact Assessment for every High-Risk AI system identified. This is a legal requirement, not optional.
Policy Engine Deployment
Enforce which AI tools can process which categories of data via a technical policy layer — not just an acceptable use policy. Policy documents do not satisfy Article 9 requirements.
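The core of a technical policy layer is a default-deny check at the point where data reaches an AI tool. A minimal Python sketch, assuming hypothetical tool IDs and data categories; a production policy engine would load these rules from managed configuration:

```python
# Hypothetical allow-list: which tool may process which data category.
POLICY = {
    ("approved-llm", "public"): True,
    ("approved-llm", "personal"): False,
    ("hr-screening-ai", "personal"): True,  # only with DPIA and oversight in place
}

def enforce(tool: str, data_category: str) -> bool:
    """Deny by default: any combination not explicitly allowed is blocked."""
    return POLICY.get((tool, data_category), False)

assert enforce("approved-llm", "public")
assert not enforce("approved-llm", "personal")
assert not enforce("unknown-tool", "personal")  # default deny
```

The design choice that matters is the default: an unknown tool or data category is blocked, not waved through.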
Audit Trail Activation
Implement immutable logging of all High-Risk AI decisions. SupraWall's SDK provides signed, tamper-proof audit records at the tool call boundary.
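To illustrate what tamper-evident logging means in practice (this is a generic sketch, not SupraWall's SDK), each decision record can be signed with an HMAC so any later modification is detectable. The key handling and field names here are assumptions:

```python
import hashlib
import hmac
import json
import time

SIGNING_KEY = b"replace-with-a-managed-secret"  # assumption: key comes from your KMS

def signed_audit_record(tool: str, decision: str, subject: str) -> dict:
    """Produce a tamper-evident log entry for a High-Risk AI decision."""
    record = {
        "tool": tool,
        "decision": decision,
        "subject": subject,
        "timestamp": time.time(),
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return record

def verify(record: dict) -> bool:
    """Recompute the signature over everything except the signature itself."""
    body = {k: v for k, v in record.items() if k != "signature"}
    payload = json.dumps(body, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, record["signature"])

rec = signed_audit_record("cv-screener", "reject", "candidate-123")
print(verify(rec))          # intact record verifies
rec["decision"] = "accept"
print(verify(rec))          # tampered record fails verification
```

Signing at the tool call boundary means the record is created where the decision happens, before anyone downstream can edit it.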
Two Ways to Solve This
Choose the path that fits: implement it yourself, or speak to an expert.
Business Path (C-Suite)
30-minute board-ready AI compliance assessment.
Technical Path (Developers)
Technical templates for your engineering team.