GDPR · April 15, 2026 · 11 min read

GDPR Article 22 and AI: Automated Decision-Making Rights Explained

Article 22 of GDPR gives people a specific right regarding AI: the right not to be subject to decisions made solely by automated means when those decisions significantly affect them. Here is what it actually means, who it applies to, and what your AI must do.

Article 22 is already in force

GDPR has applied since May 2018. Article 22 is not a future deadline — it is current law. If your AI makes automated decisions about EU residents, you may already be non-compliant.

What Does Article 22 Say?

Article 22(1) states: “The data subject shall have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her.”

That is the base rule. Break it into three requirements:

"Solely automated"

The decision must be made entirely by the AI with no meaningful human involvement. If a human reviews the AI's output and genuinely exercises independent judgment, it is not "solely" automated — even if they rarely override it. The human review must be substantive, not rubber-stamping.

"Legal effect or similarly significant"

The decision must have real-world consequences. Legal effects: visa refusal, benefit denial, contract cancellation. Similarly significant: automatic refusal of an online credit application, mortgage rejection, an insurance quote, job screening, a credit limit change, personalised pricing that materially differs from what others pay.

"Including profiling"

Profiling — analysing personal data to evaluate or predict aspects of a person, such as future behaviour — is explicitly within Article 22's scope. Building a risk score from someone's browsing history and using it to set their insurance premium is profiling that triggers Article 22.
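
The three requirements above can be expressed as a simple trigger check. This is an illustrative sketch, not legal advice; the type and field names (`Decision`, `solely_automated`, and so on) are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Decision:
    """Hypothetical record of one automated decision."""
    solely_automated: bool        # no meaningful human involvement
    legal_effect: bool            # e.g. visa refusal, benefit denial
    similarly_significant: bool   # e.g. credit refusal, job screening
    involves_profiling: bool      # prediction built from personal data

def article_22_applies(d: Decision) -> bool:
    """Article 22(1): solely automated AND (legal effect OR similarly
    significant). Profiling is in scope but is not an extra condition."""
    return d.solely_automated and (d.legal_effect or d.similarly_significant)

# A fully automated credit refusal triggers Article 22:
loan = Decision(solely_automated=True, legal_effect=False,
                similarly_significant=True, involves_profiling=True)
print(article_22_applies(loan))  # True
```

Note that `involves_profiling` does not appear in the condition: profiling is a way of reaching a decision that Article 22 explicitly covers, not a separate gate.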

The Three Exceptions

Article 22(2) allows automated decisions in three cases. But each exception comes with mandatory safeguards — the exception does not remove the obligation to protect people.

Exception A: Necessary for entering into a contract

You can use automated decisions if they are necessary to enter into or perform a contract. Example: automatic credit scoring for a loan application. But "necessary" is interpreted narrowly: if a less intrusive process, including human decision-making, could reasonably achieve the same goal, automation is not "necessary." You must tell the person they can request human review.

Required safeguards:

  • Inform the person the decision was automated
  • Give them the right to request human review
  • Allow them to express their view
  • Allow them to contest the decision

Exception B: Authorised by EU or Member State law

Some national laws explicitly permit automated decisions; tax fraud detection and social security processing are common examples. The law must include suitable measures to safeguard rights — it cannot simply say "automated decisions are fine."

Required safeguards:

  • The authorising law must include its own safeguards
  • Right to human review must still exist
  • Cannot be used more broadly than the law allows

Exception C: Based on explicit consent

You can use automated decisions with the individual's explicit consent. This must be freely given, specific, informed, and unambiguous. Pre-ticked boxes and terms-and-conditions buried consent do not qualify. And for special category data (health, race, religion), explicit consent is required even if you also have a contract.

Required safeguards:

  • Consent must be explicit (not just general)
  • Must be specific to the automated decision
  • Person can withdraw consent at any time
  • Document consent for accountability
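
The Exception C safeguards above map naturally onto a consent record. A minimal sketch, assuming hypothetical names (`ExplicitConsent` and its fields); a real system would also record how the consent statement was presented.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

@dataclass
class ExplicitConsent:
    """Hypothetical consent record for Article 22(2)(c)."""
    subject_id: str
    decision_type: str                  # the specific automated decision
    statement_shown: str                # exact wording the person agreed to
    granted_at: datetime
    withdrawn_at: Optional[datetime] = None

    def is_valid(self) -> bool:
        # Consent must still be in force: granted and not withdrawn.
        return self.withdrawn_at is None

    def withdraw(self) -> None:
        # Withdrawal must be possible at any time (Article 7(3)).
        self.withdrawn_at = datetime.now(timezone.utc)
```

Storing `decision_type` and `statement_shown` per record is what makes the consent specific and documentable, rather than a general tick-box.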

What “Solely Automated” Actually Means in Practice

This is where most companies get it wrong. Merely putting a human in the process is not enough on its own — the human involvement must be meaningful. The European Data Protection Board's guidance is clear on this.

Still “solely automated” (Article 22 applies)

  • Human receives AI decision, checks for obvious errors, rarely overrides
  • Human can theoretically override but has no information to evaluate the decision
  • Human reviews output of AI but is not reviewing the underlying data or reasoning
  • Approval process is rubber-stamping, with humans confirming 98%+ of AI outputs

Meaningful human involvement (Article 22 may not apply)

  • Human reviews the actual data used in the decision, not just the score
  • Human has authority and training to make independent assessment
  • Override rate is not negligible — humans regularly change AI recommendations
  • Human can explain why they agreed or disagreed with the AI
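
One of the indicators above, the override rate, is easy to monitor. A sketch of such a check; the function names and the 2% threshold are illustrative assumptions, not a legal test — a low override rate is evidence of rubber-stamping, not proof.

```python
def override_rate(reviews: list[dict]) -> float:
    """Share of AI recommendations that a human reviewer changed."""
    if not reviews:
        return 0.0
    overridden = sum(1 for r in reviews
                     if r["human_outcome"] != r["ai_outcome"])
    return overridden / len(reviews)

def looks_like_rubber_stamping(reviews: list[dict],
                               threshold: float = 0.02) -> bool:
    """Flag review processes where humans almost never override the AI.

    The threshold is a hypothetical internal alarm level; regulators
    look at the whole process, not a single number."""
    return bool(reviews) and override_rate(reviews) < threshold
```

A check like this belongs in routine monitoring: if the flag fires, investigate whether reviewers see the underlying data and genuinely exercise judgment.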

Article 22 and the EU AI Act: How They Interact

These are two separate laws with overlapping but not identical coverage. Article 22 GDPR focuses on individual rights in automated decisions. The EU AI Act focuses on the safety and reliability of the AI system itself.

| Aspect | GDPR Article 22 | EU AI Act |
| --- | --- | --- |
| Focus | Individual rights in automated decisions | System safety and reliability |
| Trigger | Legal/significant effect on a person | AI used in an Annex III category |
| Who can enforce | Data subject (complaint to a DPA) | National market surveillance authority |
| Requirement | Right to human review, explanation, contest | Risk management, logging, conformity assessment |
| Overlap | AI in credit, hiring, benefits | Same domains covered by Annex III Categories 4 & 5 |
| Key difference | Rights-based (individual remedies) | Safety-based (system requirements) |

What You Must Build Into Your AI System

1. Transparent notification

When an automated decision affects someone, tell them. The privacy notice must state that automated decision-making is used, give meaningful information about the logic involved, and explain the significance and envisaged consequences for the person. This is the information obligation under Articles 13(2)(f) and 14(2)(g).
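
The required notice content can be sketched as a simple payload. A minimal sketch with hypothetical function and field names; the rights list mirrors the Article 22(3) safeguards discussed earlier.

```python
def automated_decision_notice(decision_type: str, logic_summary: str,
                              significance: str) -> dict:
    """Assemble the notice fields Articles 13(2)(f)/14(2)(g) call for:
    the fact of automated decision-making, meaningful information about
    the logic involved, and its significance for the person."""
    return {
        "automated_decision_making": True,
        "decision_type": decision_type,
        "logic": logic_summary,
        "significance": significance,
        "your_rights": [
            "request human review",
            "express your point of view",
            "contest the decision",
        ],
    }

notice = automated_decision_notice(
    decision_type="credit scoring",
    logic_summary="score computed from income and repayment history",
    significance="may result in refusal of your loan application",
)
```

Keeping the notice as structured data makes it straightforward to render the same information in a privacy notice, an email, and an in-app message.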

2. Explanation on request

The person can request an explanation of the decision: what factors were used, how they were weighted, why the outcome was what it was. This must be a meaningful explanation — not just "our algorithm decided."
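
A meaningful explanation can be rendered from the factors and their weights. This is an illustrative sketch: `explain_decision`, the factor names, and the weights are hypothetical, and a real system must surface the actual inputs that drove the decision.

```python
def explain_decision(factors: dict[str, float], outcome: str) -> str:
    """Render a plain-language explanation from factor weights.

    Positive weights counted for the person, negative against; factors
    are listed most influential first."""
    ranked = sorted(factors.items(), key=lambda kv: abs(kv[1]), reverse=True)
    lines = [f"Outcome: {outcome}", "Main factors, most influential first:"]
    for name, weight in ranked:
        direction = "for" if weight > 0 else "against"
        lines.append(f"  - {name}: weighed {direction} (weight {weight:+.2f})")
    return "\n".join(lines)

explanation = explain_decision(
    {"missed_payments": -0.9, "income": 0.4, "account_age": 0.1},
    outcome="refused",
)
print(explanation)
```

The point is the shape of the output: named factors, their direction, and their relative influence — not "our algorithm decided."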

3. Right to human review

Build the mechanism for someone to request that a human reviews the automated decision. Document who does this review, what access they have to the underlying data, and what the timeline is.

4. Right to contest

The person can submit additional information and have the decision reconsidered. Your process must actually allow this — it cannot be a cosmetic review that always confirms the automated result.

5. Data accuracy

If the automated decision was based on incorrect data, the person can correct it and have the decision redone. Article 16 (right to rectification) feeds directly into Article 22.
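
The interaction between Article 16 and Article 22 reduces to: apply the correction, then rerun the decision on the corrected data. A sketch under assumed names; `rectify_and_redecide`, the toy `decide` function, and the record fields are all hypothetical.

```python
from typing import Callable

def rectify_and_redecide(record: dict, corrections: dict,
                         decide: Callable[[dict], str]) -> dict:
    """Apply Article 16 corrections, then rerun the decision so the
    outcome reflects accurate data. `decide` is any callable mapping
    input data to an outcome."""
    updated = {**record, **corrections}
    return {"data": updated, "outcome": decide(updated)}

# Toy decision rule for illustration only:
def decide(data: dict) -> str:
    return "approved" if data["missed_payments"] == 0 else "refused"

record = {"income": 40_000, "missed_payments": 2}   # 2 was a data error
result = rectify_and_redecide(record, {"missed_payments": 0}, decide)
print(result["outcome"])  # approved
```

The design point is that rectification is not just an edit to a database row: the corrected record must flow back through the same decision logic.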

Special category data: higher bar

Article 22(4) prohibits automated decisions based on special category data (health, racial/ethnic origin, political opinions, religion, trade union membership, genetic/biometric data, sexual orientation) unless you have explicit consent or substantial public interest under EU/member state law. If your AI uses any of these data types — even as proxies — you need to address this specifically. Using postcode as a proxy for race in credit scoring is not a workaround.

Check your AI's Article 22 compliance status

ComplianceIQ scans your AI systems for Article 22 triggers, identifies which decisions need safeguards, and generates the documentation DPAs ask for.