AI Vendor Due Diligence Checklist: 50 Questions to Ask Before Signing
When you use a third-party AI system to make or assist decisions that affect your customers, you become a deployer under the EU AI Act — and you inherit compliance obligations. These 50 questions are the due diligence baseline before any AI vendor relationship.
Why vendor due diligence matters more under the EU AI Act
Under the EU AI Act, deployers (companies that use AI systems in their products or operations) carry their own compliance obligations alongside providers (companies that build AI). If your vendor's AI system violates the Act and you did not perform adequate due diligence, you face regulatory exposure in your own right. Documented due diligence is your defence.
How to Use This Checklist
Send these questions to AI vendors before contract signature. For high-risk AI systems (EU AI Act Annex III), all sections are mandatory. For limited-risk AI (chatbots, content generation), focus on Sections 1, 3, and 8. For minimal-risk AI, use Sections 1 and 3 at minimum.
Weak or vague answers to Sections 2, 4, and 7 are red flags. A vendor that cannot answer questions about their EU AI Act compliance status or bias testing methodology either has not done the work or is not being transparent.
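The section-to-risk-tier guidance above can be expressed as a small lookup. This is an illustrative sketch of the mapping this article recommends — the tier names and section groupings are this checklist's, not terms defined in the Act itself:

```python
# Which checklist sections to send a vendor, keyed by the article's
# risk-tier guidance. Section numbers refer to Sections 1-8 below.
REQUIRED_SECTIONS = {
    "high-risk":    list(range(1, 9)),  # all eight sections are mandatory
    "limited-risk": [1, 3, 8],          # basics, data/privacy, contract
    "minimal-risk": [1, 3],             # baseline only
}

def sections_for(risk_tier: str) -> list[int]:
    """Return the checklist sections to send for a given risk tier."""
    try:
        return REQUIRED_SECTIONS[risk_tier]
    except KeyError:
        raise ValueError(f"unknown risk tier: {risk_tier!r}") from None
```

Treat the tiers here as a floor, not a ceiling — nothing stops you sending the full checklist to a limited-risk vendor.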
Section 1: Company and Product Basics
What is the intended use case for this AI system, and what decisions does it automate or assist?
What are the known limitations of the AI system — what does it not do well?
How long has this AI system been commercially deployed? How many customers use it in production?
What is your AI system versioning process? How are we notified of model updates?
Have you had any regulatory investigations or enforcement actions related to this AI system in any jurisdiction?
What is your company's approach to AI ethics — do you have a published policy?
Who internally is accountable for AI safety and compliance at your organisation?
Section 2: EU AI Act Compliance
Have you classified this AI system under the EU AI Act risk tiers? What is its classification?
If the system is high-risk under the EU AI Act, is it registered in the EU database for high-risk AI systems (Article 71)?
Have you completed a conformity assessment for this AI system under the EU AI Act?
Can you provide the technical documentation required under EU AI Act Article 11?
Does your system include the technical measures for human oversight required by EU AI Act Article 14?
Do you provide instructions for use for deployers as required by EU AI Act Article 13?
What are your obligations as the provider, and ours as the deployer, when we deploy your system?
How do you handle EU AI Act Article 73 serious incident reporting — what is your process, and what do you notify us about?
Section 3: Data and Privacy
What personal data does the AI system process — at inference time and during any fine-tuning?
Where is this data processed and stored? What regions or countries?
Can you provide a Data Processing Agreement (DPA) compliant with GDPR Article 28?
Is our data used to train or improve your underlying AI models? Can we opt out?
How long is our data retained by your systems? What is your deletion process?
Who are your sub-processors for this AI service? How are we notified of changes?
Have you conducted a Transfer Impact Assessment (TIA) for data transfers outside the EEA?
What security certifications do you hold (SOC 2, ISO 27001, etc.)?
Section 4: Training Data and Bias
What data was used to train this AI system? Can you provide a data sheet or model card?
How was the training data labelled, and by whom? What quality controls were applied?
Has the training data been assessed for bias? What populations are over- or under-represented?
Have you conducted bias and fairness testing on this AI system? Can you share the results?
How does the system perform across demographic subgroups (age, gender, ethnicity, geography) relevant to our use case?
If we discover the system produces biased outputs against our specific user population, what remediation process do you follow?
Do you have a published bias testing methodology or third-party bias audit results?
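The subgroup breakdown asked for in this section can be sanity-checked on your own evaluation data. A minimal sketch, assuming you hold (group, prediction, label) triples; real bias audits use richer metrics such as false-positive-rate parity or equalised odds, not accuracy alone:

```python
from collections import defaultdict

def subgroup_accuracy(records):
    """Per-subgroup accuracy from (group, prediction, label) triples."""
    hits = defaultdict(int)
    totals = defaultdict(int)
    for group, pred, label in records:
        totals[group] += 1
        hits[group] += int(pred == label)
    return {g: hits[g] / totals[g] for g in totals}

results = subgroup_accuracy([
    ("18-30", 1, 1), ("18-30", 0, 1),  # one hit, one miss
    ("60+", 1, 1), ("60+", 1, 1),      # two hits
])
# results == {'18-30': 0.5, '60+': 1.0}
```

A large gap between subgroups — as in the toy figures above — is exactly the kind of result a vendor should be able to explain and remediate.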
Section 5: Performance and Reliability
What are the published accuracy, precision, recall, and F1 metrics for this system on your validation dataset?
How do those metrics compare to performance on real-world production data?
What is your SLA for AI system availability and response time?
How do you monitor for model drift post-deployment? What triggers retraining?
What is your process for notifying customers when model performance degrades?
What is the system's behaviour when it cannot make a confident prediction — does it abstain or produce a low-confidence output?
Have you conducted adversarial testing on this system? What attack vectors were tested?
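The abstention question above has a concrete shape you can probe for in a vendor demo. A hypothetical sketch — the function name and the 0.75 threshold are illustrative, not a recommendation:

```python
def decide(scores: dict[str, float], threshold: float = 0.75):
    """Return the top label if confident enough, else abstain.

    A system that cannot meet its confidence threshold should defer
    to a human reviewer rather than emit a guess.
    """
    label, score = max(scores.items(), key=lambda kv: kv[1])
    if score < threshold:
        return None, score  # abstain: route to human review
    return label, score

decide({"approve": 0.9, "deny": 0.1})   # -> ("approve", 0.9)
decide({"approve": 0.6, "deny": 0.4})   # -> (None, 0.6), abstains
```

Ask the vendor which of these two behaviours their system exhibits, and whether the threshold is configurable by the deployer.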
Section 6: Explainability and Transparency
Can the AI system explain its decisions in human-readable form? To what level of detail?
If a customer requests an explanation of a decision made using your AI (GDPR Article 22, EU AI Act Article 13), what can you provide?
Do you publish a model card or system card for this AI system?
What are the key factors that influence the AI's outputs — what inputs does it weight most heavily?
Can outputs be traced back to the specific model version and configuration that produced them?
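Traceability, as asked about above, usually means every output carries provenance metadata. A sketch of such a record — the field names are illustrative assumptions, not a schema any vendor is obliged to use:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass(frozen=True)
class TracedOutput:
    """An AI output stamped with the provenance needed to reproduce it."""
    output: str          # the decision or generated content
    model_name: str      # which system produced it
    model_version: str   # exact model version, not just "latest"
    config_hash: str     # hash of prompts/thresholds/config in effect
    produced_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

record = TracedOutput("declined", "risk-scorer", "2.4.1", "abc123")
```

If a vendor cannot produce the equivalent of `model_version` and `config_hash` for a historical output, decisions made with that output cannot be meaningfully audited.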
Section 7: Audit Rights and Compliance Documentation
Will you provide audit rights allowing us or a designated third party to assess your AI system's compliance?
Can you provide compliance documentation relevant to our jurisdiction (EU AI Act technical documentation, NIST AI RMF mapping, ISO 42001 certification)?
Do you participate in any industry AI safety or ethics certification programmes?
How frequently is your AI system independently audited? Can we receive audit reports?
What is your process for responding to regulatory enquiries about your AI system on our behalf?
Section 8: Contract and Liability
What representations and warranties do you make about the AI system's compliance with applicable law?
Who bears liability if your AI system produces an output that causes harm to our customers?
What is your indemnification coverage for AI-related regulatory fines or claims?
What are your liability caps, and do they apply to regulatory fines (often excluded)?
What are the contract termination provisions — can we exit if the system is found to be non-compliant?
What happens to our data if we terminate the contract?
Red Flags in Vendor Responses
Cannot provide EU AI Act classification
HIGH: If a vendor cannot tell you whether their AI system is high-risk under the EU AI Act, they have not done the analysis. You cannot comply as a deployer without knowing the system's classification.
"Our legal team handles compliance" — no technical documentation
HIGH: Compliance documentation under EU AI Act Article 11 must be technically substantive. Lawyers cannot substitute for actual model documentation.
No bias testing results, or results only for one demographic
HIGH: Single-demographic testing is not sufficient under EU AI Act Article 10 data governance. Ask for subgroup breakdowns by age, gender, and geography at minimum.
Liability caps exclude regulatory fines
MEDIUM: Fines for breaching deployer obligations under the EU AI Act fall on you, not on your vendor. If the vendor's liability cap excludes regulatory penalties, you bear the full regulatory risk from their system's failures.
No audit rights or DPA available
HIGH: Inability to provide a GDPR Article 28-compliant DPA or audit rights is a legal blocker for EU deployment. Do not proceed.
Our data is used for model training — opt-out not available
MEDIUM: Using customer data to train AI models requires a specific lawful basis. If opt-out is unavailable and you process personal data, this creates GDPR exposure.
Contract Clauses You Should Insist On
Beyond the due diligence questions, ensure your AI vendor contract includes:
- Compliance representation: Vendor warrants that the AI system complies with applicable law in the jurisdictions in which you operate
- Change notification: Vendor must notify you at least 30 days before making material changes to the AI system (model updates, changes to training data)
- Incident notification: Vendor must notify you within 48 hours of becoming aware of any incident that may affect your compliance obligations
- Audit rights: You (or a designated auditor) may request documentation and system access to assess compliance
- Regulatory cooperation: Vendor will cooperate with and provide documentation to regulators investigating your deployment
- Data deletion: Upon contract termination, all customer data is destroyed within 30 days, with written confirmation
- Sub-processor controls: Vendor may not add new sub-processors without prior written consent
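The clause list above doubles as a contract-review checklist. A minimal sketch of checking a draft against it — the clause identifiers are made up here to mirror the bullets, not standard legal terms:

```python
# Required clauses, mirroring the seven bullets above.
REQUIRED_CLAUSES = {
    "compliance_representation",
    "change_notification",
    "incident_notification",
    "audit_rights",
    "regulatory_cooperation",
    "data_deletion",
    "subprocessor_controls",
}

def missing_clauses(contract_clauses: set[str]) -> set[str]:
    """Return required clauses absent from a draft contract."""
    return REQUIRED_CLAUSES - contract_clauses

gaps = missing_clauses({"audit_rights", "data_deletion"})
# gaps lists the five clauses still to negotiate
```

Running this against each redline keeps the negotiation focused on what is still missing rather than what has already been agreed.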
Manage Your AI Vendor Risk Register
ComplianceIQ tracks all your AI vendors and their compliance status, and flags gaps in documentation — so you know which vendor relationships carry EU AI Act risk before regulators ask.
Start Your Vendor Assessment