AI Hiring Tool Compliance: NYC, Colorado, Illinois, and the EU AI Act
AI used in hiring decisions is the most regulated AI use case in 2026. Four different laws apply, with different requirements, different deadlines, and different penalties. Here is what you need to do.
Why hiring AI has more regulation than any other AI use case
Legislators focused on hiring AI first because the stakes are clear: a biased algorithm can systematically exclude qualified candidates based on gender, race, age, or disability. The harm is measurable (denied employment), the cause is traceable (the algorithm), and historical precedent is extensive (decades of employment discrimination law).
If your company uses any AI tool in hiring — CV screening, interview analysis, assessment scoring, resume ranking, or automated shortlisting — you are likely subject to at least one of these laws.
NYC Local Law 144 (in effect since July 2023)
The most prominent hiring AI law in the US. It applies to employers and employment agencies that use "automated employment decision tools" (AEDTs) to screen candidates or employees in New York City roles.
What it requires:
- Bias audit: The AEDT must undergo a bias audit at least annually, conducted by an independent auditor. The audit must calculate selection rates and impact ratios by sex and race/ethnicity.
- Public summary: The audit summary must be published on your website.
- Candidate notice: At least 10 business days before use, you must notify candidates or employees that an AEDT will be used, which job qualifications and characteristics it will assess, and how to request an alternative process.
- Alternative process: Candidates must be told how to request an alternative selection process or a reasonable accommodation.
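The arithmetic behind an LL144-style bias audit is simple: compute each category's selection rate, then divide it by the highest category's rate to get an impact ratio. The sketch below is illustrative only; the category names and counts are made up, and a real audit must follow the DCWP's published methodology and an independent auditor's judgment.

```python
# Hypothetical LL144-style audit math: selection rates by category and
# impact ratios relative to the highest-rate category. All data is fake.

def selection_rates(outcomes):
    """outcomes: {category: (selected, total)} -> {category: rate}"""
    return {cat: sel / total for cat, (sel, total) in outcomes.items()}

def impact_ratios(rates):
    """Each category's selection rate divided by the highest rate."""
    top = max(rates.values())
    return {cat: rate / top for cat, rate in rates.items()}

# Illustrative numbers, not from any real audit.
outcomes = {
    "male": (120, 400),    # 120 of 400 candidates advanced
    "female": (90, 400),   # 90 of 400 candidates advanced
}
rates = selection_rates(outcomes)
ratios = impact_ratios(rates)
print(rates)
print(ratios)
```

An impact ratio well below 1.0 for any category (the traditional four-fifths rule uses 0.8 as a screening threshold) is the signal an auditor will scrutinize.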
What counts as an AEDT? NYC's definition is broad: "any computational process, derived from machine learning, statistical modeling, data analytics, or artificial intelligence, that issues simplified output, including a score, classification, or recommendation, that is used to substantially assist or replace discretionary decision making for making employment decisions."
If you use any scoring tool for candidates — even if a human makes the final call — it likely qualifies.
Penalties: Civil penalties of up to $500 for a first violation and $500–$1,500 for each subsequent violation. The NYC Department of Consumer and Worker Protection enforces this.
Colorado AI Act — SB 205 (effective February 2026)
Colorado's law applies to "deployers" of high-risk AI systems — companies that use AI systems that make consequential decisions about employment, education, housing, credit, healthcare, and civil rights. AI used in hiring is explicitly listed as consequential.
What it requires for hiring AI:
- Impact assessment: Deployers must annually conduct an impact assessment evaluating potential algorithmic discrimination.
- Disclosure: Notify individuals when an AI system is used for a consequential decision affecting them.
- Appeal right: Individuals must be able to appeal an adverse consequential decision and receive a human review.
- Incident reporting: If you discover algorithmic discrimination, notify the Colorado AG and affected individuals within 90 days.
Penalty: Colorado Attorney General can seek civil penalties up to $20,000 per violation (up to $500,000 per pattern of violations).
Illinois AI Video Interview Act — AIVIA (in effect since 2020; amendment effective 2026)
One of the oldest AI hiring laws. The original AIVIA requires employers using AI to analyze video interviews to: notify candidates AI will be used, explain how the AI works and what traits it evaluates, obtain consent before analysis, and limit who can view the interviews.
The 2024 amendment (effective 2026) expands this: employers must disclose all characteristics and traits the AI evaluates, must request consent in English and Spanish, and must give candidates the right to opt out and have their interview reviewed by a human. Employers using a vendor for AI video interviews must require the vendor to comply with AIVIA as a contract condition.
Penalty: The Illinois Department of Labor enforces this. Violations can result in civil penalties.
EU AI Act — High-risk AI in employment (applies August 2026)
The EU AI Act classifies AI systems used for recruitment or selection of natural persons (notably advertising vacancies, screening or filtering applications, and evaluating candidates) as high-risk under Annex III. If your hiring AI reaches EU candidates, the high-risk regime applies. Note that some of the obligations below fall on the provider of the tool, while others fall on you as the deployer:
- Conformity assessment before deployment
- Technical documentation (purpose, training data, accuracy, limitations)
- Comprehensive logging of system behavior for audit purposes
- Human oversight mechanism that allows a human to monitor, intervene, and override
- Bias testing across protected characteristics before and during deployment
- Registration in the EU AI database
- Clear instructions for use provided to deployers
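The logging obligation above is the one most teams have to build themselves. A minimal sketch of an audit log record for a single screening decision follows; the field names are assumptions for illustration, not a schema mandated by the EU AI Act, and a production system would write to an append-only store with retention controls.

```python
# Minimal screening-decision audit record (illustrative only; field
# names are assumptions, not an EU AI Act-mandated schema).
import datetime
import json


def log_screening_decision(candidate_ref, model_version, score,
                           recommendation, human_reviewer=None,
                           human_override=False):
    """Serialize one decision event for the audit trail."""
    record = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "candidate_ref": candidate_ref,    # pseudonymous ID, not raw PII
        "model_version": model_version,    # ties the decision to a model
        "score": score,
        "recommendation": recommendation,  # e.g. "advance" / "reject"
        "human_reviewer": human_reviewer,  # who exercised oversight, if anyone
        "human_override": human_override,  # did a human change the outcome?
    }
    return json.dumps(record)  # append to an immutable store in practice
```

Capturing the model version and the human-override flag in the same record is what lets an auditor later reconstruct whether oversight was real or rubber-stamped.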
Comparing the laws
| Requirement | NYC LL144 | CO SB 205 | IL AIVIA | EU AI Act |
|---|---|---|---|---|
| Bias audit | ✓ Annual | ✓ Annual | — | ✓ Ongoing |
| Candidate notice | ✓ 10 days prior | ✓ At time of use | ✓ Before analysis | ✓ Required |
| Human review right | ✓ Alternative process | ✓ Appeal right | ✓ Opt-out right | ✓ Human oversight |
| Technical documentation | — | — | — | ✓ Required |
| Public disclosure | ✓ Audit summary | ✓ Impact assessment | — | ✓ AI database |
| Consent required | — | — | ✓ Explicit | — |
| Max penalty | $1,500/violation | $20K/violation | Civil penalties | €15M or 3% |
What to do if you use AI in hiring
- Identify every AI tool in your hiring process. This includes ATS platforms with AI scoring (Greenhouse, Lever, Workday), video interview tools (HireVue, Spark Hire), LinkedIn Recruiter AI features, and any custom scoring systems.
- Check if the vendor is compliant. Your vendor (HireVue, Workday) may handle some requirements on your behalf — but you, as the deployer, are still responsible. Get compliance documentation from every vendor.
- Implement candidate notice. Add disclosure language to your job applications and hiring communications — required in NYC, Colorado, and Illinois.
- Set up human review. Ensure every AI screening decision can be reviewed and overridden by a human recruiter.
- Commission a bias audit. If you are in NYC, this must be done by an independent auditor and published. For other jurisdictions, internal bias testing and documentation are a starting point.
- Update your contracts. Illinois requires vendor compliance in contracts. Colorado requires you to document your deployer obligations. Update vendor agreements accordingly.
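The steps above differ by jurisdiction, so the practical first move is a mapping from where you hire to what you owe. The sketch below condenses the comparison table into such a mapping; it is a simplified illustration, not legal advice, and the obligation strings are shorthand labels, not statutory language.

```python
# Illustrative jurisdiction-to-obligation mapping, condensed from the
# comparison table above. Simplified shorthand, not legal advice.
OBLIGATIONS = {
    "NYC": [
        "annual independent bias audit",
        "publish audit summary",
        "10-business-day candidate notice",
        "alternative process on request",
    ],
    "Colorado": [
        "annual impact assessment",
        "notice at time of use",
        "appeal with human review",
        "report discrimination to AG within 90 days",
    ],
    "Illinois": [
        "pre-analysis notice and consent",
        "disclose evaluated traits",
        "opt-out with human review",
        "vendor compliance clause in contracts",
    ],
    "EU": [
        "human oversight",
        "decision logging",
        "bias testing",
        "deployer duties under the high-risk regime",
    ],
}


def checklist(locations):
    """Deduplicated obligation list for the places you hire in."""
    items = []
    for loc in locations:
        for item in OBLIGATIONS.get(loc, []):
            if item not in items:
                items.append(item)
    return items


print(checklist(["NYC", "Colorado"]))
```

Hiring in multiple jurisdictions means satisfying the union of the lists, which is why most teams end up building to the strictest common denominator.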
Generate your AI hiring compliance checklist
ComplianceIQ identifies which hiring AI laws apply to your locations and generates a specific checklist for NYC, Colorado, Illinois, and EU requirements.
Get hiring AI compliance checklist →