AI Transparency Requirements: Country-by-Country Guide 2026
Two distinct transparency obligations are emerging globally: disclosure that AI was used (labeling), and explanation of how it worked (explainability). The requirements differ significantly by country, sector, and the type of decision involved. Here is the complete picture.
Two types of AI transparency
Before mapping specific requirements, it helps to distinguish between two fundamentally different transparency obligations:
- Disclosure transparency: You must tell someone that AI was used — in a decision, in content generation, in interaction. The user does not need to know how it worked, just that AI was involved.
- Explanatory transparency: You must explain why the AI made a specific decision — what factors influenced the outcome, what data was used, why the result was what it was. This is much harder and more expensive to implement.
Most regulations currently require disclosure transparency. Explanatory transparency is required in specific high-stakes contexts, primarily credit and hiring decisions.
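To make the distinction concrete, here is a minimal sketch of the artifact each obligation produces. All field names are illustrative rather than drawn from any statute.

```python
# Minimal sketch: the two obligations produce very different artifacts.
# Every field name below is illustrative, not taken from any regulation.

# Disclosure transparency: a one-line notice attached to the interaction.
disclosure_notice = {
    "type": "ai_disclosure",
    "text": "This decision was made using automated processing of your information.",
}

# Explanatory transparency: a per-decision record of what drove the outcome.
explanation_record = {
    "decision_id": "d-2026-00417",            # hypothetical identifier
    "outcome": "declined",
    "principal_factors": [                     # specific to THIS decision
        {"factor": "debt_to_income_ratio", "direction": "negative"},
        {"factor": "length_of_credit_history", "direction": "negative"},
    ],
    "data_sources": ["application_form", "credit_bureau_report"],
}
```

The first artifact is cheap to produce and static; the second must be generated per decision and stored, which is why explanatory transparency costs substantially more to implement.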
European Union — The most comprehensive requirements
EU AI Act: interaction transparency (Article 50)
Article 50 of the EU AI Act (numbered Article 52 in the Commission's original proposal, and still often cited under that number) requires disclosure in three specific scenarios:
- Chatbots and AI personas: If your AI system is designed to interact with humans, you must inform users they are interacting with AI — unless it is obvious from the context
- Emotion recognition and biometric categorization: You must inform people when such systems are used on them
- Deep synthesis (deepfakes): AI-generated content that depicts real people, places, or events must be labeled as artificially generated or manipulated — with exceptions for legitimate creative or satirical work
The AI Act entered into force on 1 August 2024, but the Article 50 transparency obligations apply from 2 August 2026, alongside most of the Act's other provisions.
EU AI Act: high-risk AI transparency (Articles 13 and 14)
For high-risk AI systems (Annex III), the transparency obligations are more extensive:
- The system must be sufficiently transparent for deployers to interpret its output and use it appropriately (Article 13)
- Outputs must be labeled as AI-generated where relevant
- Users (deployers) must receive sufficient information — the instructions for use — to understand the system's capabilities, limitations, and appropriate conditions of use (a machine-readable sketch of this information follows this list)
- Human oversight measures (Article 14) must enable the people overseeing the system to understand its capacities and limitations and to interpret its output correctly
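What "sufficient information" might look like as a deliverable: below is a minimal, machine-readable sketch loosely modeled on Article 13's instructions-for-use headings. The structure and field names are illustrative, not prescribed by the Act.

```python
from dataclasses import dataclass

@dataclass
class InstructionsForUse:
    """Deployer-facing system information, loosely following Article 13 headings.
    Field names are illustrative, not prescribed by the AI Act."""
    provider: str
    intended_purpose: str
    accuracy_metrics: dict           # e.g. {"top_10_precision": 0.74} on the stated test population
    known_limitations: list          # populations or conditions where performance degrades
    human_oversight_measures: list   # what the deployer must do to oversee outputs
    output_interpretation: str       # how to read the system's output

# Hypothetical example for a resume-screening tool; all values are illustrative.
resume_screener_ifu = InstructionsForUse(
    provider="ExampleAI GmbH",
    intended_purpose="Rank job applications for human review; not for automatic rejection",
    accuracy_metrics={"top_10_precision": 0.74},
    known_limitations=["Not validated for non-EU CV formats"],
    human_oversight_measures=["A recruiter reviews every ranked shortlist before any outcome"],
    output_interpretation="Score 0-100 is a relative ranking signal, not a hire/no-hire verdict",
)
```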
GDPR Article 22: automated decision explanation
Under GDPR, individuals subject to automated decisions with legal or significant effects have the right to:
- Know that an automated decision was made
- Request meaningful information about the logic involved
- Request human review of the decision
- Contest the decision
"Meaningful information about the logic involved" does not require disclosure of the entire model — but it must be more than a black-box answer. The EDPB has stated that explanations must be clear, comprehensible, and specific enough for the individual to understand the basis of the decision.
United States — Sector-specific and state-level requirements
Federal: FCRA adverse action notices for credit AI
The US has no federal AI disclosure law, but sector-specific laws create effective transparency requirements. The FCRA requires adverse action notices when AI-driven credit decisions go against applicants. The CFPB has clarified that these notices must identify specific factors from the AI model — "score too low" is insufficient if the model used hundreds of variables. This creates a de facto explainability requirement for consumer credit AI.
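A common engineering pattern for meeting that specificity standard is to compute per-feature contributions at decision time (for example with SHAP) and translate the most negative ones into adverse action reasons. The sketch below uses hypothetical contribution values and an illustrative reason-code mapping; a production mapping would need model validation and legal review.

```python
# Minimal sketch: map per-feature contributions to adverse action reasons.
# In practice the contributions would come from an explainer such as SHAP
# run against the production model; the values here are hypothetical,
# for a single declined application.

REASON_TEXT = {  # illustrative mapping, not an official reason-code table
    "debt_to_income_ratio": "Debt obligations are too high relative to income",
    "recent_delinquencies": "Recent delinquency on one or more accounts",
    "credit_history_length": "Length of credit history is insufficient",
    "utilization_rate": "Proportion of available credit in use is too high",
}

def adverse_action_reasons(contributions: dict, top_n: int = 4) -> list:
    """Return the top_n features that pushed the score down, as consumer-facing text.
    'Score too low' alone would not satisfy the CFPB's specificity standard."""
    negative = [(f, c) for f, c in contributions.items() if c < 0]
    negative.sort(key=lambda fc: fc[1])  # most negative contribution first
    return [REASON_TEXT.get(f, f) for f, _ in negative[:top_n]]

print(adverse_action_reasons({
    "debt_to_income_ratio": -0.42,   # hypothetical SHAP-style contributions
    "utilization_rate": -0.18,
    "income_verified": +0.25,
    "credit_history_length": -0.07,
}))
```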
FTC guidance on AI deception
The FTC has issued guidance stating that using AI in ways that deceive consumers — including deploying AI personas that claim to be human, using AI-generated reviews without disclosure, or using AI to create false impressions — is deceptive under Section 5 of the FTC Act. This applies to any company doing business in the US.
State laws: Colorado, Illinois, New York, California
Several US states have enacted disclosure requirements:
- Colorado SB 24-205 (Colorado AI Act): Deployers of high-risk AI systems making consequential decisions, including hiring, must notify affected individuals, disclose the system's purpose, and provide the principal reasons for adverse decisions (effective June 30, 2026, after a delay)
- Illinois Artificial Intelligence Video Interview Act (AIVIA): Employers using AI to analyze video interviews must notify applicants before the interview, explain how the AI works, and obtain consent
- New York City LL144: Automated employment decision tools must undergo annual bias audits and employers must notify candidates of AI use
- California SB 942 (AI Transparency Act): Providers of widely used generative AI systems (over one million monthly users) must embed disclosures in AI-generated content and offer a free AI detection tool (effective January 1, 2026)
China — Mandatory labeling requirements
China has the world's strictest AI content labeling requirements. Under the Interim Measures for the Management of Generative AI Services (effective August 2023) and the Deep Synthesis Provisions (effective January 2023), reinforced by dedicated labeling measures for AI-generated synthetic content that took effect in September 2025:
- All AI-generated content must be watermarked or labeled
- Generative AI services must clearly indicate when content is AI-generated
- Deep synthesis (deepfake) content depicting real people requires consent + labeling
- Algorithm recommendation services must disclose that recommendations are AI-driven
- Price personalization using AI must be disclosed to consumers
These requirements apply to any company serving Chinese users, regardless of where the company is based.
United Kingdom — Principles-based, sector-led approach
The UK has taken a sector-specific, principles-based approach to AI regulation. The framework set out in DSIT's white paper makes transparency a cross-cutting principle for existing regulators to apply, but there are currently no mandatory disclosure rules equivalent to the EU AI Act.
Key UK transparency requirements:
- UK GDPR Article 22 (equivalent to EU GDPR) — automated decision explanation rights
- ICO (Information Commissioner's Office) guidance on AI and GDPR requires meaningful explanations for automated decisions
- FCA guidance on AI in financial services — explainability expectations for credit and underwriting AI
- The Artificial Intelligence (Regulation) Bill, a private member's bill rather than government legislation, would introduce AI transparency duties if enacted; without government backing, its prospects and timing remain unclear
Canada — AIDA and CPPA
Canada's proposed Artificial Intelligence and Data Act (AIDA), introduced as part of Bill C-27, would introduce transparency requirements for high-impact AI systems:
- High-impact AI systems must provide plain language explanations of their decisions when requested
- People must be informed when AI makes decisions that affect them in significant ways
- Generative AI systems capable of creating deepfakes must label their content
Bill C-27, including AIDA and the Consumer Privacy Protection Act (CPPA) with its GDPR Article 22-style automated decision provisions, died on the Order Paper when Parliament was prorogued in January 2025 and had not been reintroduced as of early 2026. Its text remains the best indicator of the approach a future Canadian bill is likely to take.
Australia — AI Ethics Framework and sector guidance
Australia has voluntary AI ethics principles that include transparency as a core value, but no mandatory AI-specific transparency law as of early 2026. The Privacy and Other Legislation Amendment Act 2024 amended the Privacy Act 1988 to require privacy policies to disclose automated decision-making that significantly affects individuals; the obligation is similar in spirit to GDPR Article 22 but narrower (a disclosure duty rather than a right to human review) and takes effect in December 2026.
The Australian government has signaled its intention to regulate "high risk AI" in ways aligned with the EU AI Act. Watch for legislation in 2026–2027.
Quick reference: transparency obligations by jurisdiction
| Jurisdiction | Disclose AI use | Label AI content | Explain decisions | Hard law |
|---|---|---|---|---|
| EU | ✓ Article 50 | ✓ Deepfakes | ✓ GDPR + AI Act | ✓ Yes |
| UK | ✓ Sectors | Guidance only | ✓ UK GDPR | ✓ GDPR |
| US (federal) | ✓ FTC/FCRA | FTC guidance | ✓ FCRA (credit) | ✗ No AI law |
| US (states) | ✓ CO/IL/NY | ✓ CA SB 942 | ✓ CO/NY hiring | Varies |
| China | ✓ Required | ✓ Required | ✓ Algorithms | ✓ Yes |
| Canada | Proposed (AIDA) | Proposed | Proposed (CPPA) | ✗ Bill lapsed |
| Australia | Voluntary | Voluntary | Privacy Act | ✗ No AI law |
| Singapore | IMDA guidance | IMDA guidance | IMDA guidance | ✗ Voluntary |
| Japan | Voluntary | Voluntary | Voluntary | ✗ No AI law |
| South Korea | ✓ AI Framework Act (2024) | ✓ Required | ✓ PIPA | ✓ Yes (eff. 2026) |
| India | Emerging | Emerging | ✗ Not explicit | DPDPA (privacy) |
| Saudi Arabia | ✓ Required | Emerging | ✓ PDPL | ✓ PDPL |
| UAE | ✓ Gov entities | Emerging | Framework | Framework |
Practical implementation of AI transparency
Minimum viable disclosure (most jurisdictions)
- Add disclosure text to any automated decision notification: "This decision was made using automated processing of your information."
- For chatbots: add "You are speaking with an AI assistant" to the chat interface (one way to enforce this server-side is sketched after this list)
- For AI-generated marketing content: add an AI-generated content label (required in China, California; best practice elsewhere)
- For AI hiring tools: notify candidates before the assessment that AI analysis is used
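For the chatbot item above, the disclosure can be guaranteed server-side rather than left to each client UI. A minimal sketch; the payload shape is illustrative and not tied to any particular chat framework.

```python
AI_NOTICE = "You are speaking with an AI assistant."

def chat_response(reply_text: str, turn_index: int) -> dict:
    """Wrap a model reply in a payload that guarantees the disclosure appears
    once, up front. Enforcing this server-side is safer than relying on every
    client UI to add its own banner. The payload shape is illustrative."""
    return {
        "message": reply_text,
        "ai_disclosure": AI_NOTICE if turn_index == 0 else None,
    }

print(chat_response("Sure, what's your order number?", turn_index=0))
```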
Explanatory AI (required in credit, hiring, high-risk)
- Document the top factors in every automated decision at inference time — not just possible factors, but the specific factors for this specific decision (this and the next item are sketched after this list)
- Build a "decision explanation" API endpoint that retrieves the stored factors for a given decision ID
- For consumer credit decisions: use SHAP values or similar to generate human-readable factor explanations at the time of decision
- Create a process for responding to explanation requests within regulatory timeframes (usually 30 days)
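A minimal sketch of the first two items: persist the decision-specific factors at inference time, then serve them back by decision ID. The in-memory dict stands in for a durable store with proper retention controls, and the contribution values are hypothetical.

```python
import datetime
import uuid

EXPLANATIONS: dict = {}  # stand-in for a durable store with retention controls

def record_decision(outcome: str, contributions: dict) -> str:
    """Store the factors behind THIS decision at inference time, keyed by decision ID."""
    decision_id = str(uuid.uuid4())
    EXPLANATIONS[decision_id] = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "outcome": outcome,
        "factors": sorted(contributions.items(), key=lambda kv: kv[1]),  # worst first
    }
    return decision_id

def get_explanation(decision_id: str) -> dict:
    """Backs a 'decision explanation' endpoint: fetch stored factors for one decision."""
    return EXPLANATIONS.get(decision_id, {"error": "unknown decision id"})

# Usage: contributions would come from SHAP or similar at scoring time
# (the values here are hypothetical).
did = record_decision("declined", {"debt_to_income_ratio": -0.42, "income_verified": 0.25})
print(get_explanation(did))
```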
Deepfake labeling (increasingly mandatory)
- Embed C2PA metadata (Content Credentials) in all AI-generated images, video, and audio you produce commercially
- Add visible text labeling for AI-generated content in marketing materials (a minimal example follows this list)
- For video: embed watermarks in the video stream
- Document your labeling process for regulatory demonstration
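For the visible-labeling item, here is a minimal sketch using Pillow that stamps a human-readable label on an AI-generated image and records provenance in PNG metadata. This is not a C2PA Content Credential (that requires the C2PA toolchain); the file names and metadata keys are illustrative.

```python
from PIL import Image, ImageDraw
from PIL.PngImagePlugin import PngInfo

def label_ai_image(src_path: str, dst_path: str, label: str = "AI-generated") -> None:
    """Stamp a visible label onto an image and record provenance in PNG text metadata.
    A visible-label baseline only; it does not produce a C2PA Content Credential."""
    img = Image.open(src_path).convert("RGB")
    draw = ImageDraw.Draw(img)
    # Draw a black-backed label in the bottom-left corner; the width estimate
    # (8 px per character) is a rough heuristic for the default font.
    draw.rectangle([(8, img.height - 28), (8 + 8 * len(label), img.height - 8)], fill="black")
    draw.text((12, img.height - 26), label, fill="white")

    meta = PngInfo()
    meta.add_text("ai_generated", "true")           # key names are our own convention
    meta.add_text("generator", "example-model-v1")  # hypothetical model identifier
    img.save(dst_path, pnginfo=meta)

label_ai_image("hero_banner.png", "hero_banner_labeled.png")  # hypothetical files
```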
Generate your transparency compliance checklist
ComplianceIQ identifies every transparency disclosure obligation for your AI systems across 108+ jurisdictions and generates the documentation you need.
Map your transparency obligations →