GDPR Legitimate Interest for AI: When It Works and When It Does Not
Article 6(1)(f) — legitimate interest — is the legal basis most organisations reach for when consent is inconvenient and contract is insufficient. For AI systems, regulators are increasingly scrutinising this choice. Here is what the law actually requires.
The Six Legal Bases — and Why AI Often Falls on Legitimate Interest
GDPR Article 6 provides six lawful bases for processing personal data. For AI systems, the choice matters enormously because it determines your obligations and your exposure if challenged:
Consent (6(1)(a))
Rarely practical for AI analytics — requires freely given, specific, informed, unambiguous consent. Cannot be bundled into T&Cs.
Contract (6(1)(b))
Valid when AI processing is strictly necessary to perform a contract. Not valid for analytics, model training, or personalisation beyond what the contract requires.
Legal obligation (6(1)(c))
Applies when a law requires AI processing — e.g., AML transaction monitoring.
Vital interests (6(1)(d))
Emergency use only. Not applicable to routine AI processing.
Public task (6(1)(e))
Applies to tasks carried out in the public interest or in the exercise of official authority — primarily public bodies, and rarely available to the private sector.
Legitimate interest (6(1)(f))
The "catch-all" — most-used for AI analytics, fraud detection, personalisation. Subject to a three-part test and individual override rights.
The Three-Part Legitimate Interest Test
Legitimate interest is not a rubber stamp. The EDPB and national DPAs require you to pass all three parts of a formal Legitimate Interest Assessment (LIA). Here is each part, with what passes and what fails for AI systems:
1. Purpose test
Is there a legitimate interest?
Passes
A genuine business need exists — fraud prevention, network security, improving a service. The interest must be real, specific, and not manufactured to justify the processing.
Fails
Vague interests like "improving our AI", "business development", or "personalisation" without specificity. Courts have rejected overly broad purpose descriptions.
2. Necessity test
Is the processing necessary to achieve that interest?
Passes
No less privacy-intrusive way to achieve the same goal. The processing must be proportionate — using only the data needed, for only as long as needed.
Fails
Training an AI on all customer data when a sample would suffice. Retaining data indefinitely when the purpose could be achieved with shorter retention.
3. Balancing test
Is the controller's interest overridden by the individual's rights, freedoms, and reasonable expectations?
Passes
The individual would reasonably expect this processing, the impact is low, and there are meaningful safeguards. The interest is proportionate to the privacy cost.
Fails
Processing that would surprise or disturb the individual. High-sensitivity data. Processing that has significant effects on individuals. Automated decision-making that is outcome-determinative.
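The three-part test is sequential: a failure at any stage means Article 6(1)(f) is unavailable, and later stages are never reached. A minimal sketch of that gate logic (a hypothetical helper for illustration, not a legal tool — the field names are assumptions, not drawn from any regulator's form):

```python
from dataclasses import dataclass

@dataclass
class LIAInputs:
    purpose_is_specific: bool      # purpose test: real, specific interest
    no_less_intrusive_means: bool  # necessity test: proportionate processing
    within_expectations: bool      # balancing: individual would expect it
    impact_is_low: bool            # balancing: low impact, with safeguards

def legitimate_interest_available(lia: LIAInputs) -> tuple[bool, str]:
    """Walk the three tests in order; the first failure is decisive."""
    if not lia.purpose_is_specific:
        return False, "purpose test failed: interest too vague"
    if not lia.no_less_intrusive_means:
        return False, "necessity test failed: less intrusive means exist"
    if not (lia.within_expectations and lia.impact_is_low):
        return False, "balancing test failed: individual rights override"
    return True, "all three tests passed: document the LIA"
```

Note that passing the balancing test requires both prongs — expected processing with a high impact fails, as does low-impact processing the individual would never anticipate.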
Enforcement: When Legitimate Interest Failed
Meta / Irish DPC (2023)
Fined €1.2 billion. Legitimate interest rejected for behavioural advertising. The balancing test failed — the scale of processing and impact on users outweighed Meta's commercial interest.
TikTok / ICO (2023)
£12.7 million fine. Legitimate interest not accepted for processing children's data. Processing of minors' data faces an extremely high bar — courts apply heightened scrutiny.
Clearview AI / ICO, CNIL, Garante (2022–2023)
Fines in multiple jurisdictions. Legitimate interest rejected for scraping images to train facial recognition AI. The absence of an existing relationship and the scale of impact failed the balancing test.
Common AI Use Cases: Does Legitimate Interest Apply?
Fraud detection and prevention
Generally valid. Well-established legitimate interest. Must be limited to the data necessary. Cannot use LI for general risk scoring — must be tied to specific fraud prevention.
Network security and intrusion detection
Generally valid. Security of systems is a widely accepted legitimate interest. Must not involve content inspection of private communications beyond what is necessary.
Personalisation and recommendation engines
Contested — often fails. Regulators have rejected this for advertising. May hold for direct service improvement with a close relationship. Fails for third-party data or where individuals would not expect it.
Training AI models on customer data
Rarely valid — high risk. Regulators have rejected this. Training uses data beyond its original purpose (repurposing), so a compatibility assessment under Article 6(4) is required. Likely needs consent.
HR analytics and workforce AI
Contested — sector scrutiny. Employee data is subject to heightened scrutiny. The power imbalance between employer and employee affects the balancing test. Article 88 national derogations may apply.
Automated marketing segmentation
Rarely valid for advertising. Post-Meta ruling, LI for behavioural advertising is heavily scrutinised. The ePrivacy Directive requires consent for cookie-based targeting, which limits LI scope.
The Article 22 Problem
Even if legitimate interest survives the Article 5 principles and the Article 6 test, Article 22 creates a separate hurdle for automated decision-making. Article 22 prohibits solely automated decisions that produce legal effects or similarly significant effects on the individual — unless:
- The decision is necessary for a contract with the individual (Article 22(2)(a))
- The decision is authorised by EU or member state law (Article 22(2)(b))
- The individual has given explicit consent (Article 22(2)(c))
Legitimate interest is not on this list. If your AI makes solely automated decisions with significant effects (credit scoring, insurance pricing, hiring shortlisting), legitimate interest cannot be your basis under Article 22 — regardless of your Article 6 analysis. You need contract, authorising law, or explicit consent.
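The Article 22 gate described above can be sketched as a simple membership check: when a decision is solely automated and has significant effects, only the three enumerated exceptions remain valid, and legitimate interest is deliberately absent from the allowed set. A hypothetical illustration (the string labels are assumptions for this sketch):

```python
# Article 22(2)(a)-(c): the only bases that survive the prohibition.
ARTICLE_22_EXCEPTIONS = {"contract", "union_or_member_state_law", "explicit_consent"}

def basis_permitted(legal_basis: str, solely_automated: bool,
                    significant_effects: bool) -> bool:
    """Check whether a legal basis can support this decision under Article 22."""
    if solely_automated and significant_effects:
        # In-scope decision: basis must be one of the three exceptions.
        return legal_basis in ARTICLE_22_EXCEPTIONS
    # Out of scope: the ordinary Article 6 analysis governs instead.
    return True
```

So legitimate interest can carry ordinary analytics, but the moment the same model starts making outcome-determinative decisions (say, credit scoring), the basis check flips to False and a different basis must be found.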
Legitimate Interest Assessment Checklist for AI Systems
If you are relying on legitimate interest for an AI system, document your LIA with at least these elements:
- The specific interest pursued and why it is genuine (purpose test)
- Why no less privacy-intrusive means would achieve it (necessity test)
- The balancing analysis: reasonable expectations, impact, and safeguards
- Whether Article 22 applies, and if so which exception covers the decision
- Data minimisation and retention limits tied to the purpose
- A review date and an owner responsible for reassessment
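The elements an LIA should capture — the three tests, Article 22 scope, retention, and review — lend themselves to a structured record so that incomplete assessments are caught before a system goes live. A hypothetical sketch; the field names are illustrative, not drawn from any regulator's template:

```python
from dataclasses import dataclass

@dataclass
class LIARecord:
    system_name: str
    specific_interest: str       # purpose test: the genuine, specific interest
    necessity_rationale: str     # necessity test: why nothing less intrusive works
    balancing_notes: str         # expectations, impact, safeguards
    article_22_in_scope: bool    # solely automated + significant effects?
    retention_period_days: int   # retention limit tied to the purpose
    review_date: str             # ISO date for reassessment, e.g. "2026-01-01"

    def is_complete(self) -> bool:
        """Every element must be filled in before relying on Article 6(1)(f)."""
        return all([self.system_name, self.specific_interest,
                    self.necessity_rationale, self.balancing_notes,
                    self.retention_period_days > 0, self.review_date])
```

A record with `article_22_in_scope=True` should additionally name which Article 22(2) exception applies, since legitimate interest alone cannot cover it.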
Document your AI legal bases in ComplianceIQ
ComplianceIQ generates LIA templates for each AI system in your inventory and tracks which systems still need a completed assessment.