Transparency · April 2026 · 10 min read

AI Transparency Requirements: Country-by-Country Guide 2026

Two distinct transparency obligations are emerging globally: disclosure that AI was used (labeling), and explanation of how it worked (explainability). The requirements differ significantly by country, sector, and the type of decision involved. Here is the complete picture.

Two types of AI transparency

Before mapping specific requirements, it helps to distinguish between two fundamentally different transparency obligations:

  1. Disclosure transparency: You must tell someone that AI was used — in a decision, in content generation, in interaction. The user does not need to know how it worked, just that AI was involved.
  2. Explanatory transparency: You must explain why the AI made a specific decision — what factors influenced the outcome, what data was used, why the result was what it was. This is much harder and more expensive to implement.

Most regulations currently require disclosure transparency. Explanatory transparency is required in specific high-stakes contexts, primarily credit and hiring decisions.

European Union — The most comprehensive requirements

EU AI Act: interaction transparency (Article 52)

The EU AI Act Article 52 requires disclosure in three specific scenarios:

  1. Chatbots and other AI systems that interact directly with people must inform users that they are dealing with AI, unless this is obvious from the context.
  2. Emotion recognition and biometric categorization systems must inform the people exposed to them.
  3. Deepfakes — AI-generated or manipulated image, audio, or video content — must be disclosed as artificially generated or manipulated.

Article 52 obligations have applied since the EU AI Act's transparency provisions entered into force in August 2025.

EU AI Act: high-risk AI transparency (Articles 13 and 14)

For high-risk AI systems (Annex III), the transparency obligations are more extensive:

  - Article 13: the system must be designed so deployers can interpret its output, and must ship with instructions for use covering its intended purpose, capabilities, limitations, and expected level of accuracy.
  - Article 14: the system must support effective human oversight, so the people overseeing it can understand how it works, monitor it for anomalies, and intervene or override its output.

GDPR Article 22: automated decision explanation

Under GDPR, individuals subject to automated decisions with legal or significant effects have the right to:

  - Obtain human intervention in the decision
  - Express their point of view and contest the decision
  - Receive meaningful information about the logic involved, and the significance and envisaged consequences of the processing (Articles 13–15)

"Meaningful information about the logic involved" does not require disclosure of the entire model — but it must be more than a black-box answer. The EDPB has stated that explanations must be clear, comprehensible, and specific enough for the individual to understand the basis of the decision.

United States — Sector-specific and state-level requirements

Federal: FCRA adverse action notices for credit AI

The US has no federal AI disclosure law, but sector-specific laws create effective transparency requirements. The FCRA requires adverse action notices when AI-driven credit decisions go against applicants. The CFPB has clarified that these notices must identify specific factors from the AI model — "score too low" is insufficient if the model used hundreds of variables. This creates a de facto explainability requirement for consumer credit AI.
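To make the de facto explainability requirement concrete, here is a minimal sketch of how specific adverse action reasons can be derived from a scoring model rather than reporting only "score too low". The feature names, weights, and baseline profile are invented for illustration, and real models require more rigorous attribution methods (e.g., Shapley values) — this only shows the shape of the output a notice needs.

```python
# Illustrative sketch (not a compliance tool): rank the factors that pulled
# an applicant's score below a baseline "approved" profile, using a simple
# linear model. All names and numbers below are hypothetical.

def adverse_action_reasons(weights, applicant, baseline, top_n=3):
    """Return the top_n features that most hurt this applicant's score
    relative to the baseline profile, with their score impact."""
    contributions = {
        feature: weights[feature] * (applicant[feature] - baseline[feature])
        for feature in weights
    }
    # Most negative contributions = factors that hurt the applicant most.
    worst = sorted(contributions, key=contributions.get)[:top_n]
    return [f"{feature} (impact {contributions[feature]:+.1f})" for feature in worst]

weights = {"credit_history_years": 2.0, "utilization_pct": -0.5, "recent_inquiries": -4.0}
baseline = {"credit_history_years": 10, "utilization_pct": 30, "recent_inquiries": 1}
applicant = {"credit_history_years": 2, "utilization_pct": 85, "recent_inquiries": 6}

print(adverse_action_reasons(weights, applicant, baseline))
# → ['utilization_pct (impact -27.5)', 'recent_inquiries (impact -20.0)',
#    'credit_history_years (impact -16.0)']
```

The key design point: the explanation names the specific variables behind this decision, which is the level of detail the CFPB says a notice must reach.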

FTC guidance on AI deception

The FTC has issued guidance stating that using AI in ways that deceive consumers — including deploying AI personas that claim to be human, using AI-generated reviews without disclosure, or using AI to create false impressions — is deceptive under Section 5 of the FTC Act. This applies to any company doing business in the US.

State laws: Colorado, Illinois, New York, California

Several US states have enacted disclosure requirements:

  - Colorado: the Colorado AI Act requires deployers of high-risk AI to notify consumers when AI is a substantial factor in a consequential decision.
  - Illinois: the AI Video Interview Act requires employers to notify candidates and obtain consent before using AI to analyze video interviews.
  - New York: NYC Local Law 144 requires notice and independent bias audits for automated employment decision tools.
  - California: SB 942 (the AI Transparency Act) requires large generative AI providers to offer AI-detection tools and content labels, and the state's bot disclosure law requires bots to disclose they are not human in certain commercial and electoral contexts.

China — Mandatory labeling requirements

China has the world's strictest AI content labeling requirements. Under the Generative AI Regulations (effective August 2023) and the Deep Synthesis Regulations (effective January 2023):

  - Synthetically generated or altered content that could mislead the public — including images, audio, video, and virtual scenes — must carry a conspicuous label visible to users.
  - Providers must also embed implicit, machine-readable labels (such as watermarks or metadata) in generated content.
  - Generative AI service providers must label output in line with the deep synthesis rules and must be able to trace content back to the generating service.

These requirements apply to any company serving Chinese users, regardless of where the company is based.

United Kingdom — A principles-based, post-Brexit approach

The UK has taken a sector-specific, principles-based approach to AI regulation. The UK AI framework from DSIT emphasizes transparency as a key principle but does not currently have mandatory disclosure rules equivalent to the EU AI Act.

Key UK transparency requirements:

  - UK GDPR Article 22: the same automated decision-making rights as the EU GDPR, including meaningful information about the logic involved.
  - ICO guidance: "Explaining decisions made with AI," produced with the Alan Turing Institute, sets out what a good explanation looks like in practice.
  - Public sector: the Algorithmic Transparency Recording Standard (ATRS) requires central government departments to publish details of the algorithmic tools they use.

Canada — AIDA and CPPA

Canada's Artificial Intelligence and Data Act (AIDA), part of Bill C-27, introduces transparency requirements for high-impact AI systems:

  - Publish a plain-language description of the system, including how it is used and the types of content, decisions, or predictions it produces.
  - Implement and document measures to identify, assess, and mitigate risks of harm and biased output.

AIDA was still working its way through the legislative process as of early 2026. The Consumer Privacy Protection Act (CPPA), also part of Bill C-27, contains automated decision provisions similar to GDPR Article 22.

Australia — AI Ethics Framework and sector guidance

Australia has voluntary AI ethics principles that include transparency as a core value, but no mandatory AI transparency law as of early 2026. The Privacy Act 1988 (amended) includes provisions that create transparency obligations when AI processes personal data for consequential decisions — similar to GDPR Article 22 but with different specifics.

The Australian government has signaled its intention to regulate "high risk AI" in ways aligned with the EU AI Act. Watch for legislation in 2026–2027.

Quick reference: transparency obligations by jurisdiction

Jurisdiction | Disclose AI use | Label AI content | Explain decisions | Hard law
------------ | --------------- | ---------------- | ----------------- | --------
EU | ✓ Article 52 | ✓ Deepfakes | ✓ GDPR + AI Act | ✓ Yes
UK | ✓ Sectors | Guidance only | ✓ UK GDPR | ✓ GDPR
US (federal) | ✓ FTC/FCRA | FTC guidance | ✓ FCRA (credit) | ✗ No AI law
US (states) | ✓ CO/IL/NY | ✓ CA SB 942 | ✓ CO/NY hiring | Varies
China | ✓ Required | ✓ Required | ✓ Algorithms | ✓ Yes
Canada | ✓ AIDA (pending) | ✓ Deepfakes | ✓ CPPA | Bill C-27
Australia | Voluntary | Voluntary | Privacy Act | ✗ No AI law
Singapore | IMDA guidance | IMDA guidance | IMDA guidance | ✗ Voluntary
Japan | Voluntary | Voluntary | Voluntary | ✗ No AI law
South Korea | ✓ AI Act (2024) | ✓ Required | ✓ PIPA | ✓ Yes
India | Emerging | Emerging | ✓ DPDPA | DPDPA
Saudi Arabia | ✓ Required | Emerging | ✓ PDPL | ✓ PDPL
UAE | ✓ Gov entities | Emerging | Framework | Framework

Practical implementation of AI transparency

Minimum viable disclosure (most jurisdictions)

  - Tell users clearly, before or at the point of interaction, that they are dealing with an AI system.
  - Label AI-generated content wherever a reasonable person could mistake it for human-made.
  - Describe your AI use in your privacy notice, including the categories of decisions it supports.

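For interactive systems, the disclosure step can be as simple as prepending a notice to the first message of a session. This sketch is illustrative only — the jurisdiction codes and wording are assumptions, not legal text, and real wording should come from counsel:

```python
# Minimal sketch: jurisdiction-aware AI disclosure for a chat interface.
# Notice wording below is illustrative, not vetted legal language.

DISCLOSURES = {
    "EU": "You are interacting with an AI system.",       # EU AI Act-style notice
    "CA": "This conversation is with an automated bot.",  # California bot-disclosure style
    "DEFAULT": "This assistant is powered by AI.",
}

def first_message(jurisdiction: str, greeting: str) -> str:
    """Prepend the AI disclosure to the first message of a session,
    so users are informed at the point of interaction."""
    notice = DISCLOSURES.get(jurisdiction, DISCLOSURES["DEFAULT"])
    return f"[{notice}] {greeting}"

print(first_message("EU", "Hi! How can I help?"))
# → [You are interacting with an AI system.] Hi! How can I help?
```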
Explanatory AI (required in credit, hiring, high-risk)

  - Be able to identify the principal factors that drove each individual decision, not just global model behavior.
  - Translate those factors into plain-language reason statements (as in FCRA adverse action notices).
  - Keep records linking each decision to the model version and inputs used, so explanations can be reproduced later.

Deepfake labeling (increasingly mandatory)

  - Apply a visible label to AI-generated or manipulated image, audio, and video content.
  - Embed a machine-readable label (metadata or watermark) that survives common transformations where feasible.
  - Keep provenance records so content can be traced back to the generating system.

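One simple form of machine-readable labeling is a provenance record bound to the exact content bytes by a cryptographic hash. The field names below are assumptions, loosely modeled on content-provenance metadata schemes rather than any specific standard:

```python
# Illustrative sketch of an "implicit" machine-readable label: a JSON
# provenance record tied to a content file by its SHA-256 hash.
# Field names are hypothetical, not taken from any official schema.

import hashlib
import json

def provenance_record(content: bytes, generator: str) -> str:
    """Produce a machine-readable label declaring the content
    AI-generated, bound to the exact bytes via SHA-256."""
    return json.dumps({
        "ai_generated": True,
        "generator": generator,
        "sha256": hashlib.sha256(content).hexdigest(),
    }, sort_keys=True)

record = provenance_record(b"fake-video-bytes", "example-model-v1")
print(record)
```

A record like this can ship as sidecar metadata alongside the visible label; because it hashes the content, any later edit to the file is detectable.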
Generate your transparency compliance checklist

ComplianceIQ identifies every transparency disclosure obligation for your AI systems across 108+ jurisdictions and generates the documentation you need.

Map your transparency obligations →

Further reading