GDPR and ChatGPT: Is Using ChatGPT for Customer Data Legal?
Thousands of SaaS companies now feed customer data into ChatGPT, Claude, and other AI APIs. Most have not checked whether this is GDPR-compliant. The short answer: it can be legal, but only if you take specific steps that most companies skip.
Italy blocked ChatGPT in March 2023 for GDPR violations
Italy's Garante (data protection authority) temporarily banned ChatGPT, citing the lack of a legal basis for processing personal data and the absence of age verification. OpenAI had to implement changes before Italy lifted the ban. Germany and France considered similar actions. The enforcement risk for B2B use of AI APIs is real, not theoretical.
The Core Question: Are You a Controller or Processor?
When your application sends customer data to the ChatGPT API, you are a data controller (you decide to process the data for your purpose) and OpenAI is a data processor (they process the data on your behalf, under your instructions).
This matters because GDPR Article 28 requires you to have a Data Processing Agreement (DPA) with every processor you use. Without a DPA, the relationship is unregulated — a GDPR violation independent of anything else.
OpenAI, Anthropic, and Google all offer DPAs — but you must request and sign them
Simply using the API does not mean you automatically have a DPA. The API terms of service are not a DPA; you need to specifically request and execute one. OpenAI offers one at platform.openai.com (go to Settings > Organization > Privacy). Anthropic has a DPA available on request. Google Cloud's DPA is part of their terms for paying customers.
The 5 GDPR Requirements for Using AI APIs with Personal Data
Signed Data Processing Agreement (Article 28)
A DPA must be in place before you send any personal data. The DPA must specify: what data is processed, for what purpose, how long it is retained, what security measures apply, and what happens on termination.
Action: Sign the OpenAI/Anthropic/Google DPA before processing any EU customer data. This is a legal prerequisite, not optional.
Legal basis for processing (Article 6)
You need a legal basis to send customer personal data to an AI API. The most common: (a) legitimate interests — you have a business need to use AI to provide your service, and the customer's interests do not override yours. (b) performance of a contract — the AI processing is necessary to deliver what the customer signed up for. (c) consent — the customer explicitly agreed to AI processing of their data.
Action: Legitimate interests is the most practical basis for most B2B SaaS use cases. Document your LIA (Legitimate Interests Assessment). Do not rely on "legitimate interests" without documenting it.
Transparency in your privacy notice (Articles 13/14)
Your privacy notice must disclose that AI APIs are used to process personal data. It must describe the purpose, which AI vendor is used (or the category of vendor), and the retention period. "We may use third-party services to improve our product" is not sufficient disclosure.
Action: Update your privacy policy to specifically mention AI API usage, the purpose, and the data minimisation approach you use.
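One common data minimisation approach is to strip obvious direct identifiers from free text before it ever reaches the AI API. The sketch below is illustrative only, assuming simple regex-based redaction of email addresses and phone numbers; production systems typically use a dedicated PII detection tool, since regexes miss names, addresses, and context-dependent identifiers.

```python
import re

# Minimal data-minimisation sketch: replace obvious direct identifiers
# (email addresses, phone numbers) with placeholder tokens before the
# text is sent to an AI API. Illustrates the principle only.

EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE_RE = re.compile(r"\+?\d[\d\s().-]{7,}\d")

def redact(text: str) -> str:
    """Substitute placeholder tokens for emails and phone numbers."""
    text = EMAIL_RE.sub("[EMAIL]", text)
    text = PHONE_RE.sub("[PHONE]", text)
    return text

ticket = "Customer jane.doe@example.com called from +44 20 7946 0958 about billing."
print(redact(ticket))
# Customer [EMAIL] called from [PHONE] about billing.
```

The redacted version is usually sufficient for tasks like summarisation or classification, and the less personal data you send, the smaller your compliance surface.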
International transfer safeguards (Chapter V)
OpenAI and Anthropic process data in the US. Sending EU personal data to their APIs is an international data transfer. You need a transfer mechanism: Standard Contractual Clauses (SCCs) are the most common. OpenAI's DPA includes SCCs. Verify the DPA you sign includes a valid transfer mechanism.
Action: Check that the DPA you sign includes SCCs, or that the provider is certified under the EU-US Data Privacy Framework (an alternative transfer mechanism to SCCs).
Training opt-out (data minimisation principle)
By default, some AI providers may use your API inputs to train their models. Under GDPR, you should opt out of model training for customer personal data. OpenAI's API (not ChatGPT.com) does not train on API inputs by default. Anthropic's API does not train on API inputs with a DPA in place. Verify for each provider.
Action: Confirm model training opt-out with each AI vendor. This is usually automatic for paying API customers but worth confirming in writing.
What Personal Data Can You Send to AI APIs?
Personal data under GDPR is any information that can identify a living individual. This is broader than most people think:
Always personal data — be careful
- Customer names, email addresses, phone numbers
- IP addresses and device identifiers
- Descriptions of support tickets with customer details
- Email content forwarded to AI for summarisation
- Customer health, financial, or employment information
- Combinations of data that together identify someone
Generally safe to send
- Anonymised or synthetic data (no real identifiers)
- Your own internal business documents (no customer data)
- Public information (news articles, public documentation)
- Pseudonymised data where the key is held separately
- Aggregated statistics with no individual identifiable
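Pseudonymisation with a separately held key can be sketched as follows. This is a minimal illustration, not a prescribed design: the class name, token format, and in-memory mapping are all assumptions. The key point is that only the tokenised text leaves your system, while the token-to-identity mapping stays in your own store.

```python
import uuid

# Illustrative pseudonymisation sketch: swap real identifiers for random
# tokens before text is sent to an AI API. The token-to-identity mapping
# is kept locally (the "key held separately") and never sent to the vendor.

class Pseudonymiser:
    def __init__(self):
        self._mapping = {}  # token -> real value; store this in your own system

    def tokenise(self, value: str) -> str:
        """Replace a real identifier with a random, non-reversible token."""
        token = f"[PERSON_{uuid.uuid4().hex[:8]}]"
        self._mapping[token] = value
        return token

    def reidentify(self, text: str) -> str:
        """Restore real identifiers in text returned by the AI API."""
        for token, value in self._mapping.items():
            text = text.replace(token, value)
        return text

p = Pseudonymiser()
token = p.tokenise("Jane Doe")
prompt = f"Summarise this complaint from {token}: delivery arrived late."
# `prompt` is what would be sent to the AI API; the mapping stays local.
response = f"{token} complained about a late delivery."  # stand-in for an API reply
print(p.reidentify(response))
# Jane Doe complained about a late delivery.
```

Note that under GDPR, pseudonymised data is still personal data for you (you hold the key), but from the AI vendor's perspective the tokens alone identify no one, which materially reduces transfer and retention risk.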
ChatGPT.com vs ChatGPT API: Important Difference
ChatGPT.com (the consumer product) and the OpenAI API are different products with different privacy defaults:
| Aspect | ChatGPT.com | OpenAI API |
|---|---|---|
| DPA available | ✓ Enterprise plan only | ✓ All API customers |
| Training on inputs (default) | Yes (opt-out available) | No by default |
| Data residency options | Limited | US and EU available |
| Retention period | 30 days for abuse monitoring | 30 days for abuse monitoring (0 with DPA) |
| GDPR Article 28 compliance | Enterprise plan required | ✓ Via DPA |
| Suitable for EU personal data | Enterprise plan only | ✓ With DPA signed |
Practical bottom line
- Using ChatGPT.com (the website) to process EU customer data: not GDPR-compliant unless on an Enterprise plan with a signed DPA.
- Using the OpenAI API with personal data: compliant if (1) a DPA is signed, (2) you have a legal basis, (3) your privacy notice discloses it, (4) SCCs are in place.
- The same applies to the Claude API (Anthropic), Gemini API (Google), and other AI API providers: get the DPA before sending personal data.
- Employees using ChatGPT.com personally to process work data: a compliance risk that requires a clear AI acceptable use policy.
Get your AI data processing compliance checklist
ComplianceIQ identifies every AI tool in your stack, which ones process personal data, whether DPAs are in place, and what your legal basis needs to be.