AI Regulation Finder
Which AI regulations apply to your business? Find your obligations by country and industry — from the 155+ regulations we monitor globally.
How to use this guide
Look up your country or region below. Every country where you have customers or employees may create obligations for you.
Some industries (healthcare, finance, HR) have additional sector-specific rules beyond general AI laws.
For a precise list with deadlines and required actions, use ComplianceIQ's 15-question assessment (free).
By Country / Region
AI regulations apply based on where your customers and employees are located — not just where your company is headquartered.
European Union
Germany, France, Netherlands, Spain, Italy, Sweden + 21 more (all 27 member states)
EU AI Act: all companies with EU customers or employees
GDPR: any automated decision-making affecting EU individuals
DSA: online platforms and marketplaces with EU users
DORA: financial sector companies in the EU
United Kingdom
England, Scotland, Wales, Northern Ireland
UK GDPR: any automated decisions affecting UK individuals
Sector-based approach: no single AI act; the ICO, FCA, and Ofcom apply guidance under existing frameworks
United States
All 50 states — federal and state-level laws apply
Colorado AI Act: high-risk AI affecting Colorado residents
NYC Local Law 144: AI hiring tools used for NYC candidates
Illinois AIVIA: AI video interview tools used in Illinois hiring
CCPA/CPRA: companies with 100K+ CA consumers or $25M+ revenue
CPPA ADMT rules: companies deploying automated decision-making technology in California
California SB 942: AI content generators with 1M+ CA users
Canada
All provinces and territories
PIPEDA: companies processing Canadian personal data with AI
AIDA (proposed, Bill C-27): high-impact AI systems in Canada
Quebec Law 25: companies with Quebec customers
Asia-Pacific
China, Japan, South Korea, Australia, Singapore, India
China Generative AI Measures: AI-generated content services in China
China Algorithm Recommendation Provisions: recommendation algorithms in China
India DPDP Act: companies processing Indian citizens' data with AI
Australian Privacy Act: companies using AI on Australian citizens' data
Singapore PDPA: companies with Singapore operations
Japan APPI: AI systems processing Japanese personal data
South Korea PIPA: automated processing of Korean personal data
Middle East & Africa
UAE, Saudi Arabia, South Africa, Kenya
Saudi PDPL: companies processing Saudi citizens' data
South Africa POPIA: AI processing of South African personal data
By Industry
Some industries face additional, sector-specific AI regulations on top of general laws. These are the most heavily regulated sectors:
Healthcare & Medical
EU AI Act: medical AI is explicitly listed as high-risk, requiring conformity assessment, clinical validation, and post-market monitoring
FDA: AI/ML-based Software as a Medical Device (SaMD) requires pre-market review
HIPAA: AI must not expose PHI; Business Associate Agreements required with AI vendors
GDPR: health data is special-category data, so a higher protection standard applies to AI processing
HR, Recruiting & Employment
NYC Local Law 144: annual bias audit required, with results posted publicly; $500/day fines for violations
Illinois AIVIA: written notice, explanation, and consent required before AI video interview analysis
Colorado AI Act: impact assessment required for high-risk AI employment decisions affecting Colorado residents
EU AI Act: AI in employment, worker management, and access to self-employment is explicitly high-risk
EEOC guidance: AI tools must comply with the ADA and Title VII; disparate impact from AI is an employer liability
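The bias audits described above center on comparing selection rates across demographic categories — LL144 calls this the impact ratio. A minimal sketch, assuming simplified audit data (the function name and data shape are illustrative, not from any regulation):

```python
from collections import Counter

def impact_ratios(candidates):
    """Selection rate per category divided by the highest category's rate.

    `candidates` is a list of (category, was_selected) pairs — a
    simplified stand-in for real hiring-audit data.
    """
    totals, selected = Counter(), Counter()
    for category, was_selected in candidates:
        totals[category] += 1
        if was_selected:
            selected[category] += 1
    rates = {c: selected[c] / totals[c] for c in totals}
    top = max(rates.values())
    return {c: rate / top for c, rate in rates.items()}

# Category A: 3 of 4 selected; category B: 1 of 4 selected
data = [("A", True), ("A", True), ("A", False), ("A", True),
        ("B", True), ("B", False), ("B", False), ("B", False)]
print(impact_ratios(data))
```

Under the EEOC's four-fifths rule of thumb, a ratio below 0.8 can flag potential disparate impact; LL144 requires publishing the ratios themselves rather than meeting a fixed threshold.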
Financial Services & Fintech
EU AI Act: AI credit scoring and insurance risk assessment are high-risk; conformity assessment required
DORA: ICT risk-management framework required for AI systems in EU financial services
ECOA: adverse-action notices required when AI denies credit; explainability required
GDPR Art. 22: automated credit decisions must allow human review and explanation for EU customers
UK FCA: the Financial Conduct Authority expects firms to document AI decisions and test for bias
E-Commerce & Retail
DSA: recommender systems must be transparent and offer a non-profiling alternative for EU users
GDPR: automated personalization with significant effects requires disclosure and opt-out
CCPA/CPRA: AI processing of California consumer data requires privacy notice and opt-out
EU AI Act: customer-facing chatbots must disclose they are AI (Art. 50 transparency)
SaaS & Technology
EU AI Act: providers of high-risk AI must register systems, conduct conformity assessments, and maintain technical documentation
GDPR: AI SaaS processing EU data must have DPAs with customers and apply data minimization in AI training
EU AI Act (GPAI): general-purpose AI models (like GPT-4, Claude) face new transparency and copyright-compliance rules
California SB 942: AI content-generation tools with 1M+ CA users must enable AI content provenance detection
Education
EU AI Act: AI determining access to education, grading, or assessing learning is high-risk
FERPA: AI tools processing student records must comply with disclosure and parental-rights requirements
COPPA: AI processing data of children under 13 requires parental consent
GDPR: children's data in AI systems requires parental consent (under 16 in most EU member states)
What Always Applies (Regardless of Country)
Some regulations apply to any company in the world if certain conditions are met:
If you have EU customers or employees
- → EU AI Act applies — regardless of where your company is based
- → GDPR applies to any personal data processing
- → DSA applies if you run a platform or marketplace
If you have US employees or candidates
- → NYC LL144 applies if any candidates are in New York City
- → Colorado AI Act applies if any consumers are in Colorado (from June 2026)
- → Illinois AIVIA applies if using AI video interviews in Illinois
- → EEOC guidance applies to all US hiring AI
If you use AI for consequential decisions
- → EU AI Act Annex III high-risk classification may apply
- → Industry-specific regulations (FDA, FCA, EEOC) may apply
- → GDPR Art. 22 human review rights may be triggered by solely automated decisions with legal or similarly significant effects
If you use chatbots / AI-generated content
- → EU AI Act Art. 50: must disclose the user is talking to AI
- → California SB 942: AI content provenance (from Jan 2026)
- → FTC guidance: deceptive AI personas = unfair trade practice
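The triggers above are essentially a rule table. A minimal sketch of that logic in Python — the attribute keys are illustrative, the conditions are simplified (real applicability also depends on user location, thresholds, and effective dates), and the output is not exhaustive:

```python
def applicable_regulations(biz):
    """Map business attributes (a dict of booleans) to the trigger
    rules listed above. Keys and rules are illustrative only."""
    regs = []
    if biz.get("eu_customers_or_employees"):
        regs += ["EU AI Act", "GDPR"]
        if biz.get("runs_platform_or_marketplace"):
            regs.append("DSA")
    if biz.get("us_hiring_ai"):
        regs.append("EEOC guidance")
        if biz.get("nyc_candidates"):
            regs.append("NYC LL144")
        if biz.get("illinois_video_interviews"):
            regs.append("Illinois AIVIA")
    if biz.get("consequential_ai_decisions"):
        regs.append("GDPR Art. 22 review right")
    if biz.get("ai_chatbot_or_generated_content"):
        regs += ["EU AI Act transparency rules", "California SB 942"]
    return regs

print(applicable_regulations({
    "eu_customers_or_employees": True,
    "us_hiring_ai": True,
    "nyc_candidates": True,
}))
```

A real assessment asks far more than these booleans — which is why the guided questionnaire below exists.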
Get your exact regulation list
This guide covers general patterns. ComplianceIQ asks 15 questions about your specific business and generates a precise list of applicable regulations — with deadlines, required actions, and all needed documents.
Find my regulations — free. No credit card. 15 questions. 30 minutes.