Updated: April 15, 2026 · 11 min read

AI Compliance Checklist for SaaS Companies 2026

SaaS companies face AI regulations from multiple directions: GDPR if you have EU customers, EU AI Act if you build or use AI features, US state laws if you handle hiring or financial data. This checklist covers what actually applies to you.

Most SaaS AI is minimal risk — but not all

The vast majority of SaaS AI features (content generation, smart search, recommendations, summarisation) fall into “minimal risk” under the EU AI Act, with few requirements. The categories that do require action: AI that scores people in hiring, credit, or healthcare contexts. And GDPR Article 22 applies regardless of the AI Act's risk tiers.

Step 1: Figure Out What Applies to You

Answer these four questions to determine your obligation level:

Do you have customers or users in the EU?

If yes: GDPR applies to any personal data you process. If your AI processes EU user data, the GDPR rules most relevant to AI (Article 22 for automated decisions, DPIAs for high-risk processing) apply.

Do you build AI features that make decisions about people (hiring, credit, health, education, law)?

If yes: You are likely an EU AI Act "provider" of a high-risk AI system. Annex III requirements apply from August 2, 2026.

Do you use AI tools to process data about your customers' end-users in those same categories?

If yes: You may be a "deployer" of high-risk AI under the EU AI Act. Deployer obligations apply from August 2, 2026.

Do you serve US customers with AI features in HR, finance, or healthcare verticals?

If yes: Various US state and local laws apply: NYC LL144 (hiring AI), the Colorado AI Act (employment, credit, health, education), Illinois AIVIA (video interview AI), and California CPRA (automated decision-making).

The Compliance Checklist

A. GDPR — Applies if you have EU users

AI inventory in your privacy notice

List every AI system that processes EU user data. Your privacy policy must describe what AI is used, what data it processes, and on what legal basis.
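
One practical way to keep this inventory current is to maintain it as structured data and generate the privacy-notice section from it. A minimal sketch in TypeScript, assuming a hand-rolled record type (the field names are illustrative, not mandated by GDPR):

```ts
// Hypothetical shape for an internal AI inventory entry. GDPR mandates the
// disclosures (what AI, what data, what legal basis), not this structure.
interface AiInventoryEntry {
  system: string;                   // e.g. "Smart search ranking"
  vendor: string;                   // e.g. "OpenAI API" or "in-house"
  purpose: string;                  // what the feature does for the user
  personalDataProcessed: string[];  // categories, e.g. ["email", "usage events"]
  legalBasis: "consent" | "contract" | "legitimate_interest";
  vendorTrainingOptOut: boolean;    // vendor contractually barred from training on your data
}

const inventory: AiInventoryEntry[] = [
  {
    system: "Ticket summarisation",
    vendor: "Anthropic Claude API",
    purpose: "Summarise support threads for agents",
    personalDataProcessed: ["name", "email", "ticket body"],
    legalBasis: "legitimate_interest",
    vendorTrainingOptOut: true,
  },
];
```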

Article 22 check for each AI feature

For each AI feature: does it make automated decisions with legal or significant effects on individuals? If yes — implement notification, explanation, human review, and contest rights.
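
Here is one way the four safeguards could be wired into a decision record. This is a sketch with hypothetical names; GDPR mandates the rights, not any particular API:

```ts
// Each automated decision carries its Article 22 safeguards with it:
// notification, explanation, human review, and the right to contest.
interface AutomatedDecision {
  decisionId: string;
  subjectId: string;
  outcome: "approved" | "rejected";
  explanation: string;          // plain-language reason shown to the user
  notifiedAt: Date;             // right to be informed
  humanReviewRequested: boolean;
  humanReviewerId?: string;     // right to human intervention
  contestedAt?: Date;           // right to contest the decision
}

function requestHumanReview(d: AutomatedDecision, reviewerId: string): AutomatedDecision {
  // Route to a person with authority to change the outcome, not a rubber stamp.
  return { ...d, humanReviewRequested: true, humanReviewerId: reviewerId };
}
```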

Data processing agreements with AI vendors

If you use OpenAI, Anthropic's Claude API, Google Vertex AI, or any other AI vendor that processes EU personal data, you need a DPA with them. OpenAI and Anthropic both offer DPAs — get them signed.

DPIA for high-risk AI processing

A Data Protection Impact Assessment is required before using AI that processes personal data in a way that poses high risk to individuals. Typical triggers: automated profiling, systematic processing of sensitive data, and large-scale monitoring.
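
A rough screening predicate can make those triggers concrete. Assume a simple risk-flag record per feature; a true result means "run a DPIA (or ask your DPO)", not legal clearance:

```ts
// Illustrative pre-launch screen for DPIA triggers. The flags mirror the
// list above; this is a prompt to escalate, not legal advice.
interface AiFeatureRisk {
  automatedProfiling: boolean;
  sensitiveDataAtScale: boolean;   // health, biometrics, etc., processed systematically
  largeScaleMonitoring: boolean;
}

function needsDpia(f: AiFeatureRisk): boolean {
  return f.automatedProfiling || f.sensitiveDataAtScale || f.largeScaleMonitoring;
}
```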

Data minimisation in AI inputs

Do not send more personal data to AI models than necessary. Anonymise or pseudonymise where possible. Instruct your AI vendor not to use your data for training (OpenAI and Anthropic both offer this contractually).
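
A minimal pseudonymisation pass might look like the sketch below, assuming you keep the token-to-value map server-side and never send it to the vendor. The regexes are illustrative, not a complete PII detector; production systems typically use a dedicated redaction library or service:

```ts
// Replace direct identifiers with tokens before a prompt leaves your
// infrastructure; re-insert the originals locally after the response.
function pseudonymise(text: string): { redacted: string; map: Map<string, string> } {
  const map = new Map<string, string>();
  let counter = 0;
  const redacted = text
    .replace(/[\w.+-]+@[\w-]+\.[\w.]+/g, (email) => {
      const token = `<EMAIL_${counter++}>`;
      map.set(token, email);   // mapping stays server-side only
      return token;
    })
    .replace(/\+?\d[\d\s-]{7,}\d/g, (phone) => {
      const token = `<PHONE_${counter++}>`;
      map.set(token, phone);
      return token;
    });
  return { redacted, map };
}
```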

B. EU AI Act — Applies if your AI is in an Annex III category

Annex III high-risk categories relevant to SaaS: employment AI (hiring, promotion, performance monitoring), credit/financial services (loan scoring, insurance), education (admissions, assessment), healthcare (diagnostics, treatment recommendations).

Classify your AI systems

Map each AI feature to the Annex III list. If you build a resume screener, a loan scoring API, or a patient risk predictor — these are high-risk.

Determine: provider or deployer?

If you built the AI and sell/license it: you are a provider — stricter requirements. If you use a vendor's AI in your product for your customers: you are a deployer — lighter requirements.

Providers: complete conformity assessment

Document technical specifications, training data, testing methodology, accuracy metrics, bias testing, and human oversight provisions. For most Annex III categories this is a self-assessment; for biometrics and law enforcement uses, a third-party notified body must carry out the assessment.

Providers: register in EU AI database

High-risk AI providers must register their systems in the EU's database for high-risk AI systems before August 2, 2026. The database is public.

Providers: affix CE marking

Once the conformity assessment is complete, affix the CE marking to the product and cite the EU AI Act (Regulation (EU) 2024/1689) in the declaration of conformity.

Deployers: verify vendor compliance

Check that your AI vendor (if you use a third-party Annex III AI) is registered and compliant. Get a copy of their technical documentation. You cannot comply by relying on an unregistered vendor.

Deployers: implement human oversight

Build mechanisms for human review of AI outputs. Train staff on how to use, oversee, and override the AI. Human oversight must be substantive — not rubber-stamping.
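
One way to make oversight substantive is to hold high-stakes outputs in a review queue where overriding the AI requires a documented reason. A sketch with hypothetical names:

```ts
// High-stakes AI outputs wait for a trained reviewer, who must either
// approve them or override them with a recorded justification.
type ReviewState = "pending" | "approved" | "overridden";

interface AiOutputReview {
  outputId: string;
  aiRecommendation: string;
  state: ReviewState;
  reviewerId?: string;
  overrideReason?: string;   // required when the reviewer disagrees with the AI
}

function overrideOutput(r: AiOutputReview, reviewerId: string, reason: string): AiOutputReview {
  if (!reason.trim()) throw new Error("Override requires a documented reason");
  return { ...r, state: "overridden", reviewerId, overrideReason: reason };
}
```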

C. US State Laws — Applies if you have US customers in covered verticals

NYC LL144 — hiring AI for NYC positions

If your platform helps companies screen job candidates and any of those positions are in NYC: an annual bias audit by an independent auditor is required, and a summary of the results must be publicly posted.

Colorado AI Act — employment, credit, health, education

If your AI makes consequential decisions about Colorado residents in these domains: risk assessment required, consumer notice, right to appeal. Deadline: June 30, 2026.

Illinois AIVIA — video interview AI

If your product uses AI to analyze video interviews for any Illinois candidate: written consent, written explanation of characteristics assessed, deletion on request.

California CPRA — opt-out of profiling

California residents have the right to opt out of automated decision-making that produces legal or significant effects. Build the opt-out mechanism if you profile California users.
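
In code, this usually reduces to a guard that runs before any profiling touches the user's data. A sketch, assuming you already track residency and the opt-out preference somewhere in your user model:

```ts
// Hypothetical guard evaluated before any automated profiling runs.
// How you detect residency and store the preference is up to your stack.
interface UserPrivacyPrefs {
  state?: string;        // e.g. "CA"
  admtOptOut: boolean;   // opted out of automated decision-making
}

function mayProfile(user: UserPrivacyPrefs): boolean {
  if (user.state === "CA" && user.admtOptOut) return false;
  return true;
}
```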

D. Always-on — Regardless of jurisdiction

Tell users when AI makes decisions about them

Regardless of which law applies: transparency is expected everywhere. If your AI influences a decision about a user, tell them. Build this into your product notifications.
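
One lightweight pattern is a standard notification payload emitted whenever an AI-influenced decision touches a user, whatever the jurisdiction. A sketch with hypothetical names:

```ts
// Illustrative in-product notice fired alongside any AI-influenced decision.
interface AiDecisionNotice {
  userId: string;
  feature: string;   // which AI feature was involved
  message: string;   // plain-language summary shown in-app
  sentAt: Date;
}

function notifyUser(userId: string, feature: string): AiDecisionNotice {
  return {
    userId,
    feature,
    message: `An automated system (${feature}) was used in a decision about your account.`,
    sentAt: new Date(),
  };
}
```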

Chatbot identification

The EU AI Act requires AI chatbots to identify themselves as AI. The requirement applies from August 2026; treat it as best practice now. If your product has an AI assistant, it must say it is AI.
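
The simplest implementation is to hard-code the disclosure into the assistant's first message, as in this sketch. The wording and placement are your choice; the Act requires the disclosure, not this code:

```ts
// The assistant identifies itself as AI before anything else is shown.
const AI_DISCLOSURE = "You're chatting with an AI assistant, not a human.";

function openConversation(greeting: string): string[] {
  return [AI_DISCLOSURE, greeting];   // disclosure always precedes the greeting
}
```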

AI governance policy

Have a documented internal policy for how AI is selected, evaluated, and monitored. Even if not legally required in your jurisdiction, regulators interpret the absence of a policy as a risk signal.

Monitor for bias

Periodically test your AI outputs for disparate impact across demographic groups. Document the testing. This demonstrates good faith across all jurisdictions.
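
A common starting heuristic is the four-fifths rule: flag any group whose selection rate falls below 80% of the highest group's rate. A self-contained sketch with made-up numbers:

```ts
// Disparate-impact check: selection rate per group, compared against the
// best-performing group using the four-fifths (80%) heuristic.
function impactRatios(selected: Record<string, number>, total: Record<string, number>) {
  const rates: Record<string, number> = {};
  for (const g of Object.keys(total)) rates[g] = selected[g] / total[g];
  const best = Math.max(...Object.values(rates));
  return Object.fromEntries(
    Object.entries(rates).map(([g, r]) => [g, { rate: r, ratio: r / best, flagged: r / best < 0.8 }])
  );
}

console.log(impactRatios({ a: 40, b: 24 }, { a: 100, b: 100 }));
// Group b's ratio is 0.6, below 0.8: flag it for review and document the follow-up.
```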

The most common SaaS compliance mistake

Sending customer data to AI APIs without a signed Data Processing Agreement. OpenAI, Anthropic, Google — they all offer DPAs, but you have to request and sign them. Using their APIs without a DPA while processing EU personal data is a GDPR violation. Many SaaS companies discover this only when an enterprise customer's legal team asks for a DPA and they have to explain they never got one.

Get your personalised SaaS AI compliance report

Answer 4 questions about your SaaS product and AI usage. ComplianceIQ generates a prioritised compliance checklist specific to your situation — not a generic list.