Data Protection Impact Assessment (DPIA) for AI Systems
GDPR Article 35 requires a DPIA before deploying AI that is "likely to result in high risk" to individuals. Most AI systems that use biometric data, make automated decisions about people, or process personal data at scale meet this threshold. Here is a practical guide to completing one.
When is a DPIA required for AI?
GDPR Article 35 requires a DPIA when processing is "likely to result in a high risk to the rights and freedoms of natural persons." The Article 29 Working Party (now the EDPB) published guidelines identifying nine criteria that indicate high risk. If your AI system meets two or more, a DPIA is almost certainly required:
- Evaluation or scoring — including profiling and predicting behavior, financial situation, health, preferences, or location
- Automated decision-making with legal or similarly significant effects — decisions about people that affect their rights or similarly significant interests
- Systematic monitoring — observing, monitoring, or controlling individuals including through a network
- Sensitive data — processing special categories (health, biometrics, religion, political opinions, etc.) or highly personal data
- Large scale processing — processing affecting many individuals or covering a large geographic area
- Combining or matching datasets — from different sources in ways individuals would not expect
- Data about vulnerable individuals — children, employees, patients, those with mental health conditions
- Innovative technology or novel application — new technology whose privacy implications are not fully understood
- Processing preventing individuals from exercising a right or using a service
Most real-world AI systems meet at least two of these criteria. A customer service AI using conversation history (combining datasets, large scale) needs a DPIA. A hiring AI (evaluation/scoring, automated decision-making with significant effects) definitely needs one.
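As a rough screening aid, the two-or-more rule reduces to a simple check. This is a minimal sketch; the criterion identifiers and the example system profile are our own illustrative labels, not an official EDPB taxonomy or tool.

```python
# Sketch of the WP29/EDPB "two or more criteria" screening rule.
# Criterion names and the example profile are illustrative assumptions.

HIGH_RISK_CRITERIA = {
    "evaluation_or_scoring",
    "automated_decision_significant_effect",
    "systematic_monitoring",
    "sensitive_data",
    "large_scale",
    "combining_datasets",
    "vulnerable_individuals",
    "innovative_technology",
    "blocks_right_or_service",
}

def dpia_likely_required(criteria_met: set[str]) -> bool:
    """Return True when two or more of the nine criteria apply."""
    unknown = criteria_met - HIGH_RISK_CRITERIA
    if unknown:
        raise ValueError(f"Unknown criteria: {unknown}")
    return len(criteria_met) >= 2

# Example: a hiring AI meets at least two criteria, so a DPIA is needed.
hiring_ai = {"evaluation_or_scoring", "automated_decision_significant_effect"}
assert dpia_likely_required(hiring_ai)
```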
The four required elements of a DPIA
Article 35(7) specifies what a DPIA must contain:
- Systematic description of the intended processing, its purposes, and the legitimate interests pursued
- Assessment of the necessity and proportionality of the processing in relation to the purposes
- Assessment of the risks to the rights and freedoms of individuals
- Measures envisaged to address the risks — safeguards, security measures, mechanisms to ensure protection of personal data
Additionally, the EDPB recommends including: a description of the data lifecycle, data flows (who receives what data), and the outcome of consultation with the data subjects or their representatives where appropriate.
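If you manage DPIAs programmatically, the required elements map naturally onto a structured record. A minimal sketch, assuming a Python workflow; the field names are our own, not prescribed by the Regulation.

```python
from dataclasses import dataclass, field

@dataclass
class DPIA:
    """Skeleton mirroring Article 35(7)(a)-(d) plus the EDPB-recommended
    additions. Field names are illustrative, not prescribed by GDPR."""
    # Art. 35(7)(a): systematic description, purposes, legitimate interests
    processing_description: str
    purposes: list[str]
    legitimate_interests: list[str]
    # Art. 35(7)(b): necessity and proportionality assessment
    necessity_and_proportionality: str
    # Art. 35(7)(c): risks to the rights and freedoms of individuals
    risks: list[str]
    # Art. 35(7)(d): measures envisaged to address the risks
    mitigations: list[str]
    # EDPB-recommended additions
    data_lifecycle: str = ""
    data_flows: list[str] = field(default_factory=list)
    data_subject_consultation_outcome: str = ""
```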
DPIA for a hiring AI system — practical example
1. Description of processing
The system analyzes job application materials (CV text, cover letter text, structured responses to screening questions) using a machine learning classifier trained on historical hiring outcomes. It outputs a ranking score (0-100) and a shortlist recommendation for each applicant. The system processes: full name, professional experience, education history, and self-declared information provided by applicants. No biometric data is processed.
2. Necessity and proportionality
The processing is necessary to manage the volume of applications (200+ per role) in a time-efficient manner. The use of AI scoring is proportionate because it uses only data applicants have voluntarily provided, it assists rather than replaces human judgment, and alternative manual processes would introduce their own inconsistencies. The legal basis is legitimate interests (efficient recruitment), balanced against applicants' interests through the human review process described in section 4.
3. Risk assessment
The principal risks identified, each addressed by a measure in section 4:
- Discriminatory outcomes: a classifier trained on historical hiring outcomes can reproduce past bias, producing unequal selection rates across gender or ethnicity.
- Opacity and loss of recourse: applicants could be screened out without understanding how they were assessed or being able to contest the result (the Article 22 concern).
- Unauthorized access or excessive retention: applicant data held too long, or accessible too broadly, increases breach exposure.
4. Measures to address risks
Bias mitigation: Annual bias audit by independent auditor, testing selection rates by gender and ethnicity. Published audit summary. Model retrained if significant bias found.
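One common way to operationalize the selection-rate testing above (our assumption; GDPR does not mandate a specific metric) is to compare each group's selection rate against the highest-rate group, as in the "four-fifths" heuristic from US employment testing. A minimal sketch with made-up numbers:

```python
def selection_rates(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
    """Selection rate per group, where `outcomes` maps each group to
    (shortlisted, total_applicants)."""
    return {group: s / n for group, (s, n) in outcomes.items()}

def impact_ratios(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
    """Ratio of each group's selection rate to the highest group's rate.
    A ratio below ~0.8 (the "four-fifths" heuristic) is a common red flag."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {group: rate / best for group, rate in rates.items()}

# Illustrative audit numbers only.
audit = {"group_a": (40, 100), "group_b": (25, 100)}
print(impact_ratios(audit))  # {'group_a': 1.0, 'group_b': 0.625} -> investigate
```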
Transparency: All applicants are informed that AI screening is used. A description of what the AI assesses is available on request. GDPR Article 22 compliance: all AI rankings are reviewed by a human recruiter before final shortlist decisions. Any rejected candidate can request human review.
Data security: Applicant data encrypted at rest (AES-256) and in transit (TLS 1.3). Access restricted to the recruiting team. Data deleted 6 months after the position is filled.
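The retention rule is simple to automate. A sketch, assuming each applicant record carries the date the position was filled; the exact retention window and record shape are policy choices, not GDPR requirements:

```python
from datetime import date, timedelta

RETENTION = timedelta(days=183)  # ~6 months; the exact period is policy-defined

def due_for_deletion(position_filled_on: date, today: date | None = None) -> bool:
    """True once applicant data has been held 6 months past the fill date."""
    today = today or date.today()
    return today >= position_filled_on + RETENTION

assert due_for_deletion(date(2024, 1, 15), today=date(2024, 8, 1))
assert not due_for_deletion(date(2024, 1, 15), today=date(2024, 3, 1))
```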
When to consult the supervisory authority
If your DPIA concludes that the residual risk is still high after implementing all planned mitigations, you must consult your national supervisory authority (e.g., ICO in the UK, CNIL in France) before proceeding. This is called "prior consultation" under Article 36.
Most well-designed AI systems with appropriate safeguards will not reach this threshold. A system with strong bias mitigation, human oversight, and data minimization should be able to document a residual risk level that does not require prior consultation.
DPIA vs. EU AI Act risk documentation
The EU AI Act requires "technical documentation" (Article 11) and a "risk management system" (Article 9) for high-risk AI. These overlap significantly with GDPR's DPIA requirements — but they are not the same document.
Best practice: create an integrated document that satisfies both requirements. Lead with the GDPR DPIA structure (required), and add the EU AI Act technical documentation elements as additional sections. Regulators from both frameworks can review one comprehensive document rather than two separate ones.
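A sketch of what such an integrated outline might look like; the section titles and article mappings are our suggestion, not a format either regulator prescribes:

```python
# Illustrative outline for a combined GDPR DPIA + EU AI Act document.
# Section titles and mappings are suggestions, not a prescribed format.
INTEGRATED_OUTLINE = {
    "1. Description of processing": ["GDPR Art. 35(7)(a)", "AI Act Art. 11"],
    "2. Necessity and proportionality": ["GDPR Art. 35(7)(b)"],
    "3. Risk assessment": ["GDPR Art. 35(7)(c)", "AI Act Art. 9"],
    "4. Mitigation measures": ["GDPR Art. 35(7)(d)", "AI Act Art. 9"],
    "5. AI Act technical documentation": ["AI Act Art. 11 / Annex IV"],
}

for section, sources in INTEGRATED_OUTLINE.items():
    print(f"{section}: satisfies {', '.join(sources)}")
```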
Generate your DPIA documentation
ComplianceIQ's document generator produces customized DPIA documentation for your AI systems — covering both GDPR and EU AI Act requirements.
Generate DPIA documentation →