Vendor Risk · EU AI Act Deployer · April 17, 2026 · 12 min read

Third-Party AI Risk Management: A Practical Framework

Most companies using AI are not building it themselves — they are buying it from vendors. Under the EU AI Act, the company deploying a third-party AI system is the "deployer" and bears compliance obligations. Third-party AI risk management is not optional.

The Deployer Responsibility Problem

The EU AI Act creates two key roles for AI supply chains:

Provider

The company that develops the AI system or model. Responsible for: technical documentation, CE marking (high-risk), GPAI obligations, Declaration of Conformity.

Deployer (that's often you)

The company that integrates the AI system into a product or service, or uses it to make decisions. Responsible for: human oversight, use within intended purpose, incident reporting, log maintenance.

The critical point: If a third-party AI system you deploy makes a flawed decision that harms a customer, both you and the provider may be liable — you as the deployer for insufficient oversight, the provider for inadequate compliance documentation. Your contract must clearly allocate these risks.

Three-Tier Risk Classification for Third-Party AI

Tier 1 — High risk

AI systems making or significantly influencing decisions that affect individuals in employment, credit, healthcare, insurance, or legal contexts.

Examples

AI hiring screening tools
Credit scoring AI
Healthcare diagnostic AI
Benefits eligibility AI

Required diligence

Full due diligence: technical documentation, bias testing evidence, EU AI Act compliance status, independent audit results, contract with all required clauses.

Tier 2 — Medium risk

AI systems with meaningful business impact but not directly affecting individual rights or significant life decisions.

Examples

Customer service chatbots
Content recommendation engines
Fraud detection (non-credit)
Internal process automation

Required diligence

Standard due diligence: DPA, security assessment, model card review, contractual SLAs, incident notification requirements.

Tier 3 — Lower risk

AI tools used internally for productivity, coding assistance, or general operations with no direct customer impact.

Examples

AI writing assistants
Code generation tools
Internal summarisation tools
AI-enhanced search

Required diligence

Lightweight: DPA if processing personal data, privacy review, approved-tools policy compliance.
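
If you want this tiering to be repeatable rather than a judgment call made fresh at each intake, it can be encoded as a small lookup in your procurement tooling. A minimal sketch in Python; the category names and the default-to-high-risk behaviour are illustrative choices, not anything the AI Act prescribes:

```python
from enum import Enum


class Tier(Enum):
    HIGH = 1    # decisions affecting individuals (employment, credit, health, ...)
    MEDIUM = 2  # meaningful business impact, no direct effect on individual rights
    LOW = 3     # internal productivity tools, no direct customer impact


# Illustrative mapping from use-case category to tier; extend for your inventory.
USE_CASE_TIERS = {
    "hiring_screening": Tier.HIGH,
    "credit_scoring": Tier.HIGH,
    "healthcare_diagnostics": Tier.HIGH,
    "customer_service_chatbot": Tier.MEDIUM,
    "content_recommendation": Tier.MEDIUM,
    "fraud_detection_non_credit": Tier.MEDIUM,
    "writing_assistant": Tier.LOW,
    "code_generation": Tier.LOW,
    "internal_summarisation": Tier.LOW,
}

REQUIRED_DILIGENCE = {
    Tier.HIGH: ["technical documentation", "bias testing evidence",
                "EU AI Act compliance status", "independent audit results",
                "contract with all required clauses"],
    Tier.MEDIUM: ["DPA", "security assessment", "model card review",
                  "contractual SLAs", "incident notification requirements"],
    Tier.LOW: ["DPA if personal data is processed", "privacy review",
               "approved-tools policy compliance"],
}


def diligence_for(use_case: str) -> list[str]:
    """Return the due-diligence checklist for a vendor's use case.

    Unknown use cases default to the highest tier so a human has to triage them.
    """
    tier = USE_CASE_TIERS.get(use_case, Tier.HIGH)
    return REQUIRED_DILIGENCE[tier]


print(diligence_for("customer_service_chatbot"))
```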

8 Contract Clauses Every AI Vendor Agreement Needs

Standard SaaS contracts are not designed for AI. These eight clauses address the gaps created by EU AI Act and GDPR requirements:

1. EU AI Act deployer support

Provider must supply technical documentation, instructions for use, and Declaration of Conformity required under Articles 13 and 47. Provider must notify deployer of updates that affect compliance status.

2. Data Processing Agreement

Full GDPR Article 28-compliant DPA. Subprocessor list with right to object. Data deletion or return at contract end. Right to audit.

3. Bias and fairness

Annual bias testing results to be provided. Notification within 30 days if bias testing reveals a material disparity across protected characteristics.

4. Incident notification

Provider notifies deployer of serious incidents within 24 hours. EU AI Act Article 73 sets tight reporting deadlines for serious incidents (as short as two days for the most severe cases), and GDPR Article 33 gives you only 72 hours to report personal data breaches, so you need provider notification in time to meet both.

5. Model changes

30-day advance notice of material changes to the model. Right to test changes in staging before production deployment. Right to continue using the prior version for 90 days.

6. Availability and continuity

Minimum uptime SLA. Right to export data and model outputs in portable format. Data export within 30 days of contract termination.

7. Liability allocation

Clear allocation of liability between provider and deployer for AI decisions. Indemnification for provider-caused compliance failures. Cap on liability for AI output errors.

8. Audit rights

Right to audit provider's compliance with EU AI Act obligations annually or on reasonable notice. Right to request independent third-party audit evidence.
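
These clauses are easiest to keep from slipping during negotiation if procurement and legal track them as a per-vendor checklist. A rough sketch, with clause identifiers invented for this example:

```python
from dataclasses import dataclass, field

# The eight clauses above, keyed by identifiers invented for this sketch.
REQUIRED_CLAUSES = {
    "deployer_support":      "EU AI Act deployer support (documentation, instructions, DoC)",
    "dpa":                   "GDPR Article 28 data processing agreement",
    "bias_fairness":         "Annual bias testing results and disparity notification",
    "incident_notification": "Serious incident notification within 24 hours",
    "model_changes":         "Advance notice and staging access for model changes",
    "availability":          "Uptime SLA and data export on termination",
    "liability":             "Liability allocation and indemnification",
    "audit_rights":          "Audit rights over EU AI Act compliance",
}


@dataclass
class VendorContract:
    vendor: str
    clauses_present: set[str] = field(default_factory=set)

    def missing_clauses(self) -> list[str]:
        """List required clauses not yet present in the signed agreement."""
        return [desc for key, desc in REQUIRED_CLAUSES.items()
                if key not in self.clauses_present]


# Example: a contract that only covers the DPA and audit rights so far.
contract = VendorContract("Acme AI", clauses_present={"dpa", "audit_rights"})
for gap in contract.missing_clauses():
    print("Missing:", gap)
```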

Ongoing Monitoring Programme

Third-party AI risk management is not a one-time procurement exercise. AI systems change, regulations evolve, and incidents happen. Here is the monitoring cadence:

Continuous

Performance dashboards — accuracy, error rates, latency. Alert on significant deviations from baseline.

Monthly

Review provider changelog and release notes. Assess any model updates for compliance impact.

Quarterly

Review bias testing data from provider. Assess whether new regulatory developments affect the AI system's risk classification.

Annually

Full third-party AI review: re-run due diligence questionnaire, review updated technical documentation, assess whether contract terms remain adequate.

On trigger

Any serious incident, regulatory investigation of the provider, ownership change, or material service change triggers immediate review.
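
Only the continuous tier genuinely needs automation rather than a calendar entry. Here is a minimal sketch of a baseline-deviation alert; the metric names, baselines, and 15% threshold are placeholders, not recommendations:

```python
# Baselines captured at go-live; metric names and the threshold are illustrative.
BASELINE = {"accuracy": 0.92, "error_rate": 0.03, "p95_latency_ms": 450}
ALERT_THRESHOLD = 0.15  # alert when a metric drifts more than 15% from its baseline


def check_drift(current: dict[str, float]) -> list[str]:
    """Return alert messages for metrics deviating significantly from baseline."""
    alerts = []
    for metric, baseline_value in BASELINE.items():
        value = current.get(metric)
        if value is None:
            alerts.append(f"{metric}: no data received")
            continue
        drift = abs(value - baseline_value) / baseline_value
        if drift > ALERT_THRESHOLD:
            alerts.append(
                f"{metric}: {value} is {drift:.0%} off baseline {baseline_value}"
            )
    return alerts


# Example: latency has doubled, so one alert fires.
print(check_drift({"accuracy": 0.91, "error_rate": 0.03, "p95_latency_ms": 900}))
```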

Third-Party AI and GDPR: The Data Processor Chain

Third-party AI systems that process personal data create a GDPR data processor chain. Common points of failure:

No DPA signed before the AI system processes personal data — a GDPR Article 28 violation
AI provider uses subprocessors not disclosed in their DPA — violates Article 28(2)
AI system transfers data to third countries without an adequacy decision or SCCs — violates Chapter V
AI system retains training data beyond what was agreed — violates Article 5(1)(e) storage limitation
Provider uses your data to improve their AI models without your agreement — potentially a new processing purpose requiring a new legal basis
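
Most of these failure points can be caught mechanically once vendor records are structured. A sketch of such a check follows; the field names are assumptions about how a vendor register might look, not anything the GDPR defines:

```python
from dataclasses import dataclass, field


@dataclass
class AIVendorRecord:
    # Field names are assumptions about how a vendor register might be structured.
    name: str
    processes_personal_data: bool
    dpa_signed: bool
    disclosed_subprocessors: set[str] = field(default_factory=set)
    observed_subprocessors: set[str] = field(default_factory=set)
    third_country_transfers: bool = False
    transfer_safeguard: str | None = None  # e.g. "adequacy" or "SCCs"
    uses_customer_data_for_training: bool = False
    training_use_agreed: bool = False


def gdpr_chain_issues(v: AIVendorRecord) -> list[str]:
    """Flag the processor-chain failures described above for one vendor."""
    issues = []
    if v.processes_personal_data and not v.dpa_signed:
        issues.append("No DPA signed before processing personal data (Article 28)")
    undisclosed = v.observed_subprocessors - v.disclosed_subprocessors
    if undisclosed:
        issues.append(f"Undisclosed subprocessors: {sorted(undisclosed)} (Article 28(2))")
    if v.third_country_transfers and v.transfer_safeguard is None:
        issues.append("Third-country transfer without adequacy decision or SCCs (Chapter V)")
    if v.uses_customer_data_for_training and not v.training_use_agreed:
        issues.append("Customer data used for model improvement without agreement")
    return issues


# Example: a vendor with an unsigned DPA and an unprotected transfer.
record = AIVendorRecord(
    name="Acme AI",
    processes_personal_data=True,
    dpa_signed=False,
    third_country_transfers=True,
)
print(gdpr_chain_issues(record))
```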

Manage your AI vendor risk in ComplianceIQ

ComplianceIQ's vendor management module tracks third-party AI risk scores, contract review dates, and EU AI Act deployer obligations for every vendor in your inventory.

Start free