# GDPR vs EU AI Act: What's the Difference for AI Companies?
They are two separate laws. They often apply to the same system. They have different enforcers, different requirements, and different penalties. Here is how to tell them apart — and why you need to comply with both.
## The short answer
GDPR is a privacy law. It regulates what data you collect, how you store it, and who can see it. The EU AI Act is a safety law. It regulates the risks your AI system creates for people.
If your AI system processes personal data — which almost every AI system does — both laws apply to you at once. GDPR has been enforced since 2018. The EU AI Act's main requirements apply from August 2, 2026.
## Common mistake
Many companies assume that if they are GDPR-compliant, they are also EU AI Act compliant. They are not the same. GDPR compliance does not satisfy the EU AI Act, and vice versa. You need to address both.
## What GDPR covers (and what it does not)
GDPR applies whenever you process personal data — names, email addresses, location data, IP addresses, or any data that can identify a person. It requires:
- A legal basis for processing (consent, contract, legitimate interest, etc.)
- Data minimization — collect only what you need
- Storage limitation — keep it only as long as needed
- Data subject rights — access, deletion, portability, objection
- Breach notification within 72 hours
- Data Protection Impact Assessments (DPIAs) for high-risk processing
GDPR does not care whether your AI is accurate, fair, or safe. It cares about data. An AI system that makes consistently bad predictions raises few GDPR issues as long as the personal data is handled correctly (GDPR's accuracy principle applies to the personal data itself, not to the quality of the model's outputs).
## What the EU AI Act covers (and what it does not)
The EU AI Act applies to AI systems deployed in the EU — regardless of where the company is based. It classifies AI systems by risk level:
- Unacceptable risk: Banned entirely (social scoring, real-time remote biometric identification in publicly accessible spaces, with narrow law-enforcement exceptions)
- High risk: Requires conformity assessment, documentation, human oversight, accuracy standards (applies to AI in hiring, credit scoring, healthcare, biometric ID, safety-critical infrastructure)
- Limited risk: Transparency obligations only (chatbots must say they are AI)
- Minimal risk: No specific obligations
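As a rough illustration, the four tiers can be modeled as a lookup from use case to risk level. The use-case names and the mapping below are hypothetical simplifications; real classification depends on Annex III of the Act and legal review.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

# Hypothetical mapping of internal use-case labels to the Act's four tiers.
USE_CASE_TIERS = {
    "social_scoring": RiskTier.UNACCEPTABLE,
    "cv_screening": RiskTier.HIGH,
    "credit_scoring": RiskTier.HIGH,
    "medical_triage": RiskTier.HIGH,
    "customer_chatbot": RiskTier.LIMITED,
    "spam_filter": RiskTier.MINIMAL,
}

def classify(use_case: str) -> RiskTier:
    """Return the tier for a known use case; default unknown systems
    to HIGH so they get reviewed rather than waved through."""
    return USE_CASE_TIERS.get(use_case, RiskTier.HIGH)
```

Defaulting unknown systems to high risk is a deliberately conservative choice: under-classification is the expensive mistake here.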
The EU AI Act does not care about data privacy in detail — that is GDPR's job. It cares about whether the AI system could harm people through wrong decisions, biased outputs, or lack of human oversight.
## The overlap: GDPR Article 22 and automated decisions
The most significant overlap is GDPR Article 22, which restricts automated decision-making. When a decision is "based solely on automated processing" and produces "legal effects" or "similarly significantly affects" a person — credit decisions, hiring screening, insurance pricing — Article 22 requires:
- Disclosure that automated decision-making is happening
- The right to request human review
- The right to contest the decision
- An explanation of the logic involved
The EU AI Act's requirements for high-risk AI systems in hiring, credit, and healthcare cover much of the same ground — but from a different angle. GDPR Article 22 gives individuals rights. The EU AI Act puts obligations on the AI provider to build systems that are accurate, auditable, and overseen.
## Side-by-side comparison
| Aspect | GDPR | EU AI Act |
|---|---|---|
| What it protects | Personal data and privacy | Health, safety, and fundamental rights from AI harms |
| Who enforces it | National Data Protection Authorities | National Market Surveillance Authorities |
| Maximum fines | €20M or 4% of global annual turnover, whichever is higher | €15M or 3% (most violations); €35M or 7% (prohibited practices) |
| Key document | DPIA for high-risk processing | Conformity assessment for high-risk AI |
| Applies to | Any personal data processing | AI systems with EU users or EU impact |
| In force since | 2018 | Prohibited practices: Feb 2025. High-risk: Aug 2026 |
| Automated decisions | Article 22 — rights to object | High-risk AI must have human oversight |
| Training data | Legal basis required for personal data | Data governance requirements for high-risk AI |
## Practical example: an AI hiring tool
You build a tool that screens CVs and ranks candidates. It processes names, employment history, education — personal data. Here is what both laws require:
Under GDPR:
- Legal basis for processing applicant data (likely legitimate interest or consent)
- DPIA required — CV screening clearly affects people's employment opportunities
- Article 22 compliance — candidates have the right to know automated screening is happening and to request human review
- Data retention policy — delete applicant data after X months
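The retention bullet can be sketched as a scheduled purge job that drops applicant records older than the policy window. The 180-day window and record fields below are placeholders, not figures from GDPR, which sets no fixed number.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical retention policy; the actual window comes from your DPIA.
RETENTION = timedelta(days=180)

def purge_expired(records: list[dict], now: datetime) -> list[dict]:
    """Keep only applicant records still inside the retention window.
    Each record is assumed to carry a timezone-aware 'collected_at'."""
    return [r for r in records if now - r["collected_at"] <= RETENTION]
```

In practice this runs on a schedule and deletes from every store that holds applicant data, including model-training copies and backups.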
Under the EU AI Act (high-risk — AI in employment):
- Conformity assessment before deployment
- Technical documentation (architecture, training data, accuracy metrics)
- Logging of every decision for auditability
- Human oversight — a human must be able to review, override, and stop the system
- Bias testing across protected characteristics
- Register in the EU AI database (once it goes live)
- Provide instructions for use to the deploying employer so it can meet its own obligations as deployer
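The logging and human-oversight bullets above might look like this in skeleton form: every decision appended to an audit log, with a human able to override any score or halt the system entirely. This is an illustrative sketch under assumed names, not a conformity-ready implementation.

```python
from datetime import datetime, timezone

class ScreeningTool:
    """Skeleton of the oversight hooks for a high-risk hiring system:
    append-only decision log, human override, and an emergency stop."""

    def __init__(self) -> None:
        self.log: list[dict] = []   # in production: an append-only audit store
        self.halted = False

    def rank(self, candidate_id: str, score: float) -> dict:
        if self.halted:
            raise RuntimeError("system halted by human operator")
        entry = {
            "candidate_id": candidate_id,
            "score": score,
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "overridden_by": None,
        }
        self.log.append(entry)      # every decision is logged for audit
        return entry

    def override(self, candidate_id: str, reviewer: str, new_score: float) -> None:
        # A human reviews and replaces the automated score, leaving a trace.
        for entry in self.log:
            if entry["candidate_id"] == candidate_id:
                entry["score"] = new_score
                entry["overridden_by"] = reviewer

    def halt(self) -> None:
        # Human oversight includes the ability to stop the system outright.
        self.halted = True
```

The design choice worth noting: overrides mutate the score but record who intervened, so the audit trail distinguishes machine decisions from human ones.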
You can see that the EU AI Act requires far more technical work than GDPR for an AI hiring tool. GDPR is about data handling. The AI Act is about the system itself.
## Different enforcement bodies — two separate audits
GDPR is enforced by Data Protection Authorities (the ICO in the UK, CNIL in France, the DPC in Ireland for most US companies). The EU AI Act is enforced by newly created Market Surveillance Authorities in each EU member state. These are different agencies, often in different government departments.
In practice, this means you can be fined by the CNIL for a GDPR violation and by a French market surveillance authority for an EU AI Act violation — for the same AI system, in the same week. Non-compliance with one does not protect you from enforcement of the other.
## What to do now
Work through these in order:
- Inventory your AI systems. For each one, determine what personal data it processes and what risk classification it gets under the EU AI Act.
- GDPR first if you have not done it. Get your legal basis, privacy notices, and DPIAs in order for personal data in AI systems. This is already required and overdue.
- EU AI Act high-risk assessment. If any system is high-risk (hiring, credit, healthcare, biometrics, safety-critical infrastructure), start the conformity assessment process now. August 2026 is not far for systems that require significant documentation and technical changes.
- Chatbot transparency. If you have customer-facing chatbots, add a clear disclosure that users are talking to AI. This is required from August 2026 and is trivial to implement.
- Article 22 compliance. If any system makes automated decisions affecting individuals, ensure individuals can request human review and receive an explanation. This is already required under GDPR.
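A first pass at the inventory step can be as simple as tagging each system with the regimes that plausibly apply. The field names and the high-risk category list below are simplified assumptions; treat the output as a triage signal, not legal advice.

```python
# Hypothetical high-risk categories, loosely following the article's list.
HIGH_RISK_USE_CASES = {
    "hiring", "credit", "healthcare", "biometrics", "critical_infrastructure",
}

def assess(system: dict) -> dict:
    """Flag which regimes plausibly apply to one AI system.
    Expects keys: 'name', 'processes_personal_data', 'use_case'."""
    return {
        "name": system["name"],
        "gdpr_applies": system["processes_personal_data"],
        "ai_act_high_risk": system["use_case"] in HIGH_RISK_USE_CASES,
        "needs_chatbot_disclosure": system["use_case"] == "chatbot",
    }
```

Running this over your tool inventory gives a worklist: every `gdpr_applies` hit needs a legal basis and DPIA review, and every `ai_act_high_risk` hit starts the conformity-assessment clock.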
## Check which laws apply to your AI systems
ComplianceIQ scans your AI tool inventory against 155+ jurisdictions — including GDPR, EU AI Act, and US state laws — and tells you exactly what you need to do for each.
Scan your AI systems free →