The AI Compliance Landscape 2026: Who Is Regulating What
Fifty-plus jurisdictions have passed, are enforcing, or are actively developing AI regulations. This is the map: what is in force today, what is coming, and which laws actually have teeth.
The landscape changed completely in 2025–2026
The EU AI Act, Brazil's AI bill, and nine US state AI laws became binding law within an 18-month window. Companies that built compliance programs on 2024 research are already out of date.
Three tiers of regulatory activity
Not all 50+ jurisdictions carry the same weight. The most useful way to map the landscape is a simple three-tier model based on enforceability and maturity:
Tier 1: In force with active enforcement
Laws with enforcement bodies, published fines, and documented enforcement actions. Non-compliance has immediate financial consequences. This tier includes GDPR Article 22 (automated decisions), NYC Local Law 144, the EU AI Act's prohibited practices (in force February 2025), Illinois AIVIA, and China's AI regulations.
Tier 2: Passed, with enforcement coming
Laws that are enacted but with compliance deadlines ahead. You need to be ready — not compliant yet necessarily, but building toward it. The EU AI Act's high-risk obligations (August 2, 2026), Colorado SB 24-205 (June 30, 2026), and Canada's AIDA (timeline uncertain) sit here.
Tier 3: Developing or soft law
Guidance documents, frameworks, and bills in progress. These signal direction but create no immediate liability. NIST AI RMF, Singapore's Model AI Governance Framework, and the EU AI Liability Directive (still in draft) are examples. Monitor these — they become Tier 2 with a year's notice.
The European Union: the world's strictest framework
The EU's approach is the most comprehensive anywhere in the world. It layers three regulations that interact with each other and together cover nearly every AI system used by or affecting EU residents.
GDPR — automated decision-making (Tier 1, in force since 2018)
Article 22 gives EU residents the right not to be subject to solely automated decisions with significant legal or similarly significant effects. This already applies to credit scoring, automated hiring, insurance underwriting, and content moderation that affects user accounts. Enforcement is active: the Irish DPC fined Meta €1.2B for cross-border data processing violations in 2023, and automated decision challenges are increasing across EU data protection authorities.
Three exceptions let you proceed with automated decisions: explicit consent, necessary for a contract, or authorised by law. If you use one, you must still allow the person to contest the decision and request human review.
EU AI Act — prohibited practices (Tier 1, in force February 2025)
Eight categories of AI are now completely banned in the EU with no transition period: social scoring by governments, subliminal manipulation, exploitation of vulnerabilities, real-time remote biometric identification for law enforcement (with narrow exceptions), emotion recognition in workplaces and education, biometric categorisation to infer sensitive characteristics, crime prediction based solely on profiling, and untargeted scraping of facial images from the internet or CCTV to build facial recognition databases.
Fines for using prohibited AI: up to €35 million or 7% of global annual turnover, whichever is higher. The EU's AI Office and national market surveillance authorities have already begun accepting reports.
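The "whichever is higher" structure means the effective cap scales with company size rather than stopping at the fixed figure. A quick illustration (the statutory numbers are from the Act; the turnover figure is hypothetical):

```python
def eu_ai_act_fine_cap(annual_turnover_eur: float,
                       fixed_cap_eur: float = 35_000_000,
                       pct_of_turnover: float = 0.07) -> float:
    """Maximum fine: the greater of the fixed cap or the turnover percentage.

    Defaults are the prohibited-practices tier (EUR 35M / 7%).
    """
    return max(fixed_cap_eur, pct_of_turnover * annual_turnover_eur)

# For a company with EUR 2B global turnover, 7% (EUR 140M) exceeds EUR 35M,
# so the percentage governs; for a EUR 100M company, the EUR 35M floor governs.
large_co_cap = eu_ai_act_fine_cap(2_000_000_000)
small_co_cap = eu_ai_act_fine_cap(100_000_000)
```

The same formula applies to the high-risk tier with €15 million and 3% substituted for the defaults.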
EU AI Act — high-risk AI obligations (Tier 2, August 2, 2026)
If your AI system is classified as high-risk under Annex III (biometric systems, critical infrastructure, education AI, employment AI, essential services AI, law enforcement AI, migration AI, or administration of justice AI), you must complete a conformity assessment, prepare technical documentation, implement human oversight, log system activity, conduct bias testing, and register in the EU AI database.
Fines for high-risk non-compliance: up to €15 million or 3% of global annual turnover. As of this writing, the August 2, 2026 deadline is under four months away, and conformity assessments for complex systems take 3–6 months. The window to act is nearly closed.
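Because the Annex III classification drives the entire obligation set, many teams encode the check directly in their AI system inventory. A sketch using the categories listed above (the category keys are my own shorthand, not official identifiers from the Act):

```python
# Shorthand for the eight Annex III high-risk categories named above.
ANNEX_III_CATEGORIES = {
    "biometrics", "critical_infrastructure", "education", "employment",
    "essential_services", "law_enforcement", "migration", "justice",
}

# The six obligations the section lists for high-risk systems.
HIGH_RISK_OBLIGATIONS = [
    "conformity assessment",
    "technical documentation",
    "human oversight",
    "activity logging",
    "bias testing",
    "EU database registration",
]

def obligations_for(system_category: str) -> list[str]:
    """Return the high-risk checklist if the category falls under Annex III."""
    if system_category in ANNEX_III_CATEGORIES:
        return list(HIGH_RISK_OBLIGATIONS)
    return []
```

A resume-screening tool (`"employment"`) triggers all six obligations; a spam filter triggers none.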
GDPR + EU AI Act interaction
Both laws apply simultaneously and reinforce each other. A high-risk AI system processing personal data must comply with both the EU AI Act's technical requirements and GDPR's data minimisation, DPIA requirements, and lawful basis rules. The AI Act's transparency obligations overlap with GDPR's right to explanation. Most legal teams are treating them as a single compliance project.
The United States: state-by-state fragmentation
The US has no federal AI law as of April 2026. Congress debated the American Privacy Rights Act in 2024, which included AI provisions, but it did not pass. What exists instead is a growing patchwork of state laws, sector-specific federal guidance, and a handful of active enforcement frameworks.
Active US AI enforcement (Tier 1)
NYC Local Law 144 — Automated Employment Decision Tools
In force since July 2023. Employers using automated tools for NYC hiring decisions must conduct annual independent bias audits, publish summaries on their websites, and notify candidates at least 10 business days before assessment. Penalties: up to $1,500 per violation, with each day a non-compliant tool is used counting as a separate violation. The NYC Department of Consumer and Worker Protection has sent enforcement letters.
Illinois AI Video Interview Act (AIVIA)
In force since 2020, amended in 2024. Employers using AI to analyze video interviews in Illinois must disclose AI use, obtain written consent, and explain which characteristics the AI evaluates. The 2024 amendments added restrictions on sharing video with third parties.
EEOC Title VII guidance on AI hiring
Not a new law, but newly applied. The EEOC's 2023 technical guidance clarifies that employers are responsible for disparate impact caused by third-party AI hiring tools, even if they didn't build them. Employers cannot outsource liability to their vendors.
FCRA/ECOA — AI credit decisions
Long-standing laws newly applied to AI credit models. CFPB guidance published in 2023 clarified that "the model said so" is not a valid adverse action reason. AI factors must be identified specifically. Disparate impact testing across protected classes is required under ECOA.
Coming US state laws (Tier 2)
| State | Law | Effective | Scope |
|---|---|---|---|
| Colorado | SB 24-205 | June 30, 2026 | High-risk AI in consequential decisions (hiring, credit, housing, education, healthcare) |
| California | AB 2013, SB 942 | January 2026 | Training data transparency; AI content labeling |
| Texas | Texas AI Act (pending) | 2026 (if passed) | High-risk AI; mirrors Colorado structure |
| Virginia | HB 2094 | July 2026 | High-risk AI impact assessments |
| New York State | Multiple bills (SAFE Act, etc.) | Pending | Automated employment decisions statewide |
Asia: divergent models, highest stakes
Asia has produced the most divergent approaches: China with the world's strictest rules for AI services, Singapore with the most business-friendly soft-law approach, and Japan, India, and South Korea each occupying different points on the spectrum.
China: the strictest regime (Tier 1)
China operates three overlapping AI regulations: the Algorithmic Recommendations Regulation (in force March 2022), the Deep Synthesis Regulation covering generative AI content (January 2023), and the Interim Measures for Generative AI Services (August 2023). Any AI service available in China — including apps from foreign companies — must comply. Requirements include security assessments for services with "public opinion-forming" potential, content moderation, and mandatory watermarking for AI-generated content. Non-compliant services face blocked distribution.
Singapore: governance-first, enforcement-light (Tier 3)
Singapore's Model AI Governance Framework (2019, updated 2020) and its AI Governance Testing Framework and Toolkit (AI Verify) are the most widely adopted AI governance tools in Southeast Asia — but they are voluntary. Singapore has signaled it will introduce binding rules for high-impact sectors (finance, healthcare) by 2026. MAS, the financial regulator, has issued expectations for AI use by financial institutions that carry some supervisory weight.
Japan: light-touch but tightening (Tier 3 → Tier 2)
Japan released revised AI Guidelines in August 2024, moving from purely voluntary to what it calls "soft law" with government backing. Japan's Act on Protection of Personal Information covers AI data processing. The government has signaled sector-specific binding rules for critical infrastructure AI are coming. Japanese companies face obligations primarily through GDPR when serving EU customers.
India: Digital Personal Data Protection Act (Tier 2)
India's DPDPA (August 2023) regulates automated processing of personal data, but implementing rules have been delayed. The Act includes rights regarding automated decision-making. India's AI regulatory framework is still developing — the Ministry of Electronics and Information Technology has signaled a risk-based approach similar to the EU's is planned. Companies serving Indian users should prepare for DPDPA compliance when rules are finalised.
Middle East and Africa
The Gulf states have moved faster on AI strategy than on AI regulation. The UAE's national AI strategy is one of the world's most ambitious, but binding AI law is still emerging. Saudi Arabia's PDPL (2023) has strong provisions on automated decision-making that parallel GDPR Article 22.
UAE
The UAE Personal Data Protection Law (2021) includes automated decision rights. Dubai's DIFC has separate data protection rules. Sector regulators (CBUAE for finance, DOH for health) have issued AI guidance. A federal AI Law is expected in 2025–2026.
Saudi Arabia
PDPL (2023) applies to AI processing of personal data. The National Data Management Office has issued binding governance frameworks for government AI. Data localisation requirements apply to sensitive data including health and financial data.
Africa
The African Union adopted its Continental AI Strategy in 2024. Several countries (Kenya, Nigeria, South Africa, Rwanda) have national AI strategies. South Africa's POPIA (2021) contains data protection provisions applicable to AI. Binding AI-specific law remains rare on the continent, though this will change.
Latin America
Brazil leads Latin America in AI regulation. Its AI Bill passed the Senate in December 2024 and awaits House approval. Colombia and Mexico have issued guidelines. Argentina is developing a national AI framework.
Brazil AI Bill (PL 2338/2023)
Brazil's AI Bill mirrors the EU AI Act's risk-based approach with high-risk classification for AI in employment, education, healthcare, and justice decisions. It includes transparency obligations, algorithmic explainability rights, and a Data Protection Authority oversight role. Brazil already has the LGPD (2020), which applies to AI processing of personal data and includes automated decision rights similar to GDPR Article 22.
The global pattern: what regulators agree on
Despite the geographic and political diversity, AI regulators worldwide have converged on a remarkably consistent set of concerns. Understanding what they agree on helps predict where legislation is heading even where it hasn't arrived yet.
Transparency
People must know when AI is making or influencing decisions that affect them. Virtually every jurisdiction requires disclosure.
Explainability
People have a right to know why an AI reached a particular outcome, especially in high-stakes decisions.
Human oversight
High-stakes automated decisions must have a meaningful human review option. Fully automated life-altering decisions are prohibited or restricted in nearly every major jurisdiction.
Fairness and non-discrimination
AI systems must not discriminate on protected characteristics. Disparate impact testing is emerging as a universal requirement for consequential AI.
Risk-based approach
Higher-risk AI faces stricter obligations. Low-risk AI (e.g., spam filters) faces minimal requirements. The EU AI Act's risk tiers are being copied worldwide.
Developer accountability
Companies that build AI systems bear compliance obligations, not just companies that use them. The EU AI Act and Colorado both explicitly regulate "developers."
What this means for your compliance strategy
Given the global pattern, a compliance strategy built around only one jurisdiction (e.g., "we'll just do GDPR") is increasingly fragile. The practical approach most legal teams are taking is a risk-based tiered program:
1. Map your AI systems against Tier 1 obligations first — active enforcement with real fines. Address these before anything else.
2. For Tier 2 (passed but not yet enforced), build toward compliance. The EU AI Act's August 2026 deadline is the most urgent: start conformity assessments for high-risk systems now.
3. Use EU AI Act compliance as the global baseline — it is the strictest and most comprehensive. If you meet EU requirements, you will be 70–80% of the way to most other jurisdictions.
4. Monitor Tier 3 developments in your key markets (US federal, Australia, Japan sector rules). These move to Tier 2 with 12–18 months' notice.
5. Document everything. Regulators across all jurisdictions want to see that you have an AI governance process, not just a compliance checklist.
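The first two steps above amount to sorting an AI-system inventory by the tier of the strictest obligation each system triggers. A toy sketch under that assumption (the system names and tier assignments are invented for illustration):

```python
from dataclasses import dataclass, field

@dataclass
class AISystem:
    name: str
    # Tiers of the obligations this system triggers (1 = active enforcement).
    obligation_tiers: list[int] = field(default_factory=list)

    @property
    def priority(self) -> int:
        """Strictest (lowest-numbered) tier triggered, or 4 if none apply."""
        return min(self.obligation_tiers, default=4)

def work_order(inventory: list[AISystem]) -> list[str]:
    """Tier 1 systems first, then Tier 2, then monitoring-only systems."""
    return [s.name for s in sorted(inventory, key=lambda s: s.priority)]
```

For example, a resume screener hitting both NYC Local Law 144 (Tier 1) and Colorado SB 24-205 (Tier 2) outranks a credit model with only Tier 2 exposure, which in turn outranks an internal chatbot with none.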
Check which laws apply to your business — free
ComplianceIQ covers 108+ jurisdictions and tells you exactly which laws apply based on where you operate, what your AI does, and who it affects.
Get my free compliance report