AI Compliance for Financial Services: Banks, Fintech, and Insurance
Financial services AI faces more overlapping regulation than almost any other sector. EU AI Act high-risk requirements stack on top of existing US fair lending laws, DORA operational resilience requirements, and consumer financial protection rules. Here is the complete picture.
Why financial AI is particularly complex
Financial AI sits at the intersection of sector-specific financial regulation, which has existed for decades, and AI-specific regulation, which is brand new. A credit scoring AI in the US must comply with ECOA and the FCRA before the EU AI Act even enters the picture; a trading algorithm must comply with MiFID II first, too. Each layer adds its own requirements.
EU AI Act: financial services AI is high-risk
The EU AI Act's Annex III explicitly classifies two financial AI use cases as high-risk:
- AI used to evaluate the creditworthiness of natural persons or establish their credit score
- AI used for risk assessment and pricing of life and health insurance for natural persons
Notably, Annex III carves AI used purely to detect financial fraud out of the creditworthiness category; more on this in the AML section below.
High-risk classification means full conformity assessment, technical documentation, human oversight, event logging (see the sketch below), bias testing, and EU AI database registration before deploying to EU users. The August 2, 2026 deadline applies.
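The Act does not prescribe a log format, but the record-keeping obligation (Article 12) implies capturing each automated decision with enough context to reconstruct it later. A minimal Python sketch, assuming an illustrative record schema rather than anything the Act mandates:

```python
import hashlib
import json
import logging
from datetime import datetime, timezone

# Illustrative event log for high-risk AI decisions. The schema
# (model_version, input_hash, decision, human_reviewer) is an
# assumption, not a format prescribed by the EU AI Act.
audit_log = logging.getLogger("ai_audit")
audit_log.setLevel(logging.INFO)
audit_log.addHandler(logging.FileHandler("credit_decisions.jsonl"))

def log_decision(model_version: str, features: dict, decision: str,
                 human_reviewer: str | None = None) -> None:
    """Append one decision record to the append-only audit trail."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        # Hash rather than store raw inputs, so the audit log itself
        # does not become an additional personal-data liability.
        "input_hash": hashlib.sha256(
            json.dumps(features, sort_keys=True).encode()
        ).hexdigest(),
        "decision": decision,
        "human_reviewer": human_reviewer,  # evidence of human oversight
    }
    audit_log.info(json.dumps(record))
```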
US: Fair Lending and Consumer Protection
Equal Credit Opportunity Act (ECOA) and Fair Housing Act
US credit AI must comply with ECOA, which prohibits discrimination in lending on the basis of race, color, religion, national origin, sex, marital status, age, or receipt of public assistance income. The Fair Housing Act adds similar protections for mortgage lending.
For AI specifically: if your credit model has a disparate impact on a protected class, even without discriminatory intent, it may violate ECOA. This is disparate impact liability. You must test for it, and where a disparity appears, document that the model serves a legitimate business necessity that a less discriminatory alternative could not.
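There is no single mandated test, but a common first-pass screen, borrowed from US employment law, is the four-fifths rule: compare each group's approval rate to the most-favored group's. A minimal sketch with pandas; the column names and data are illustrative, and the 0.8 threshold is a screening convention, not a legal safe harbor:

```python
import pandas as pd

def adverse_impact_ratios(df: pd.DataFrame, group_col: str,
                          approved_col: str) -> pd.Series:
    """Approval rate of each group divided by the highest group's rate."""
    rates = df.groupby(group_col)[approved_col].mean()
    return rates / rates.max()

# Hypothetical decision data; 'race' and 'approved' are example columns.
decisions = pd.DataFrame({
    "race":     ["A", "A", "B", "B", "B", "A", "B", "A"],
    "approved": [1,   1,   0,   1,   0,   1,   0,   1],
})
ratios = adverse_impact_ratios(decisions, "race", "approved")
flagged = ratios[ratios < 0.8]  # groups needing business-necessity review
print(flagged)
```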
Fair Credit Reporting Act (FCRA)
If your AI uses consumer reports in credit decisions, the FCRA applies. Key requirements:
- Permissible purpose — you can only pull credit reports with a legally permissible reason
- Adverse action notices — if you deny credit based on a consumer report, you must notify the applicant with specific reasons
- Adverse action and AI — explaining an AI decision to a consumer is a known challenge; "the model said so" is not a sufficient adverse action reason, as the CFPB guidance below makes explicit
CFPB guidance on AI in credit
The Consumer Financial Protection Bureau has issued guidance clarifying that lenders using AI credit scoring must provide specific reasons for adverse actions — not vague or generic reasons. "Credit score too low" is not sufficient if the AI used 500 variables. You must identify the most significant factors in the AI's decision. This requires explainable AI (XAI) approaches for consumer-facing credit decisions.
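In practice this usually means ranking the features that pushed a specific applicant's score down and translating the top few into plain-language reasons. A minimal sketch, assuming you already have per-applicant feature attributions (e.g., from a SHAP explainer); the reason-code mapping is a hypothetical example, not CFPB-prescribed language:

```python
# Sketch of adverse action reason generation from per-applicant feature
# attributions. REASON_CODES and the attribution values are illustrative.
REASON_CODES = {
    "debt_to_income": "Debt obligations too high relative to income",
    "delinquency_count": "History of delinquent payments",
    "credit_utilization": "Balances too high relative to credit limits",
    "account_age_months": "Limited length of credit history",
}

def adverse_action_reasons(attributions: dict[str, float],
                           top_n: int = 4) -> list[str]:
    """Return the top_n factors that most reduced the applicant's score."""
    # Negative attribution = pushed the score toward denial.
    negative = {f: v for f, v in attributions.items() if v < 0}
    ranked = sorted(negative, key=negative.get)  # most negative first
    return [REASON_CODES.get(f, f) for f in ranked[:top_n]]

# Hypothetical applicant: attributions from an explainer, in score units.
reasons = adverse_action_reasons({
    "debt_to_income": -42.0,
    "credit_utilization": -18.5,
    "account_age_months": -6.1,
    "income": 12.0,
})
print(reasons)  # specific reasons for the ECOA/FCRA adverse action notice
```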
EU DORA — Digital Operational Resilience Act
DORA, applicable since January 17, 2025, requires financial entities in the EU to manage the operational risks of their technology, including AI systems. Key DORA provisions for AI:
- ICT risk management: Comprehensive framework for identifying, classifying, and managing technology risks — AI must be included
- Third-party risk: If you use an AI vendor (cloud AI provider, model vendor), DORA requires contractual protections, audit rights, and exit strategies (a minimal register sketch follows this list)
- Incident reporting: Major AI failures affecting operations must be reported to your national financial regulator
- Testing: Advanced DORA testing requirements include threat-led penetration testing for significant firms
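DORA also requires firms to maintain a register of information on their ICT third-party arrangements. A minimal sketch of how an AI vendor entry might be modeled internally; the field names are illustrative assumptions, not the formal register template defined by the European Supervisory Authorities:

```python
from dataclasses import dataclass, field
from datetime import date

# Illustrative internal record for an AI vendor under DORA third-party
# risk rules; all field names are assumptions for this sketch.
@dataclass
class AIVendorRecord:
    vendor: str
    service: str                      # e.g. "hosted credit-scoring model"
    supports_critical_function: bool  # triggers stricter DORA obligations
    audit_rights_in_contract: bool
    exit_strategy_documented: bool
    last_risk_review: date
    subcontractors: list[str] = field(default_factory=list)

    def gaps(self) -> list[str]:
        """Flag missing contractual protections for remediation."""
        issues = []
        if not self.audit_rights_in_contract:
            issues.append("no audit/access rights clause")
        if not self.exit_strategy_documented:
            issues.append("no documented exit strategy")
        return issues

record = AIVendorRecord(
    vendor="ExampleAI Ltd", service="hosted fraud-scoring API",
    supports_critical_function=True, audit_rights_in_contract=True,
    exit_strategy_documented=False, last_risk_review=date(2025, 6, 1),
)
print(record.gaps())  # ['no documented exit strategy']
```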
Insurance AI — actuarial fairness requirements
Insurance regulators in the US (state-by-state) and EU have specific concerns about AI-based actuarial models. The core tension: insurers want to use data to price risk accurately, but regulators want to prevent unfair discrimination in insurance access and pricing.
US insurance AI guidance varies by state. Colorado's SB 21-169 is one of the most comprehensive insurance AI regulations, requiring insurers to test AI models and external data sources for unfair discrimination and to document their governance for regulatory review. California, New York, and Illinois have similar initiatives at various stages.
In the EU, the AI Act classifies insurance risk assessment AI as high-risk (life and health insurance). This means EU insurers using AI to price individual risk face full high-risk requirements.
AML and fraud detection AI
AI is widely used in anti-money laundering (AML) transaction monitoring and fraud detection. These systems are generally not high-risk under the EU AI Act: Annex III explicitly carves fraud detection out of the high-risk creditworthiness category. However, they still interact with financial crime regulations:
- AML AI must be validated and documented for regulatory examinations
- False positive rates affect customer experience and must be monitored (see the sketch after this list)
- If AI-based fraud detection leads to account freezes or closures affecting individuals, adverse action requirements may apply
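Computing the false positive rate from investigator-labeled alert outcomes is simple; the compliance value is in checking it against a tolerance on a schedule. A minimal sketch with hypothetical figures and an illustrative threshold:

```python
def false_positive_rate(alerts_raised: int, confirmed_fraud: int) -> float:
    """Share of alerts that turned out not to be fraud after investigation."""
    if alerts_raised == 0:
        return 0.0
    return (alerts_raised - confirmed_fraud) / alerts_raised

# Hypothetical monthly figures; 0.95 is an illustrative internal
# tolerance, not a regulatory number.
FPR_THRESHOLD = 0.95
fpr = false_positive_rate(alerts_raised=12_400, confirmed_fraud=310)
if fpr > FPR_THRESHOLD:
    print(f"FPR {fpr:.1%} exceeds tolerance; review model thresholds")
```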
Practical compliance roadmap for financial services AI
Immediate (if not already done)
- ECOA and FCRA compliance for all credit AI — adverse action reason generation, disparate impact testing
- DORA risk management framework — AI included as ICT risk
- DORA third-party contracts for all AI vendors
By August 2026 (EU AI Act)
- EU AI Act conformity assessment for all credit scoring and insurance AI deployed in EU
- Technical documentation package for EU regulators
- Human oversight mechanisms for all high-risk AI
- EU AI database registration
Ongoing
- Annual bias testing across protected classes
- Model drift monitoring: retrain or recertify when performance degrades (see the PSI sketch below)
- Regulatory change monitoring — financial AI regulation is still developing rapidly
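A common drift check for scoring models is the population stability index (PSI) over score distributions. A minimal NumPy sketch; the bin count and the usual <0.1 / 0.1-0.25 / >0.25 interpretation are industry rules of thumb, not regulatory requirements:

```python
import numpy as np

def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """Population stability index between baseline and current scores."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    edges[0], edges[-1] = -np.inf, np.inf  # catch out-of-range scores
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    # Floor each bucket share to avoid log(0) on empty bins.
    e_pct = np.clip(e_pct, 1e-6, None)
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

# Hypothetical score samples: development baseline vs. current month.
rng = np.random.default_rng(0)
baseline = rng.normal(650, 60, 10_000)
current = rng.normal(635, 70, 10_000)  # drifted population
score_psi = psi(baseline, current)
# Rule of thumb: <0.1 stable, 0.1-0.25 monitor, >0.25 recertify.
print(f"PSI = {score_psi:.3f}")
```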
Generate your financial services AI compliance plan
ComplianceIQ maps your AI systems against EU AI Act, US fair lending laws, and DORA requirements — and generates prioritized compliance documentation.
Start financial services compliance →