Risk Assessment · April 2026 · 10 min read

How to Do an AI Risk Assessment (Step-by-Step Guide)

The EU AI Act requires a risk management system for high-risk AI. Colorado and several other jurisdictions require impact assessments. This guide walks through a practical AI risk assessment process that satisfies all of them.

What is an AI risk assessment?

An AI risk assessment is a structured process for identifying potential harms from an AI system, evaluating their likelihood and severity, and documenting how you will mitigate them. It is not a one-time exercise — it should be updated when the system changes, when it is deployed in new contexts, and periodically as a routine check.

The EU AI Act uses the term "risk management system" (Article 9) rather than "risk assessment," but the substance is similar to what most compliance frameworks call a risk assessment. GDPR uses "Data Protection Impact Assessment" (DPIA) for data risks specifically. Colorado SB 205 uses "impact assessment." NIST calls it "risk mapping and measurement." These are variations on the same concept.

Who needs to do an AI risk assessment

Under the EU AI Act: every provider of high-risk AI must maintain a risk management system. Deployers of high-risk AI (companies that use, not build, the AI) have lighter but still real obligations.

Under Colorado SB 205: every deployer of high-risk AI used to make consequential decisions must complete an impact assessment at least annually.

Under GDPR: a DPIA is required for AI processing that is "likely to result in high risk" to individuals' rights — this includes most AI using biometric data, sensitive categories, or making automated decisions affecting many people.

If your AI is minimal risk (productivity tools, content creation, recommendation systems without significant individual consequences), no formal risk assessment is required — but documenting why you classified it as minimal risk is prudent.

Step 1: Define the AI system (30 minutes)

Write a one-page description covering:

- What the system does and its intended purpose
- Whether it is built in-house or bought from a vendor
- What input data it uses and what outputs it produces
- Who is affected by those outputs and what decisions they inform

Step 2: Classify the risk level

Apply the EU AI Act risk classification. Ask: does this system make decisions that significantly affect individuals in these domains?

- Employment (hiring, promotion, termination)
- Education and vocational training (admissions, exam scoring)
- Credit and access to essential private or public services
- Law enforcement, migration, and border control
- Biometric identification or categorisation
- Critical infrastructure

If yes, the system is likely high-risk under Annex III.
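The classification question can be operationalised as a simple check against the high-risk domains. This is a minimal sketch, not a legal determination: the domain labels and the `classify_risk` helper are illustrative assumptions, and the Annex III list here is abridged.

```python
# Abridged, illustrative set of EU AI Act Annex III high-risk domains.
HIGH_RISK_DOMAINS = {
    "employment",              # hiring, promotion, termination
    "education",               # admissions, exam scoring
    "credit",                  # creditworthiness assessment
    "essential_services",      # access to public benefits and services
    "law_enforcement",
    "migration",
    "biometrics",              # biometric identification / categorisation
    "critical_infrastructure",
}

def classify_risk(domains_touched: set[str]) -> str:
    """Return a coarse EU AI Act risk tier for the described system."""
    if domains_touched & HIGH_RISK_DOMAINS:
        return "high-risk"
    # Document the reasoning even for minimal-risk systems.
    return "minimal-risk"

print(classify_risk({"employment"}))        # a hiring tool
print(classify_risk({"content_creation"}))  # a drafting assistant
```

A real classification also has to consider prohibited practices and limited-risk transparency duties, which a lookup like this cannot capture; treat it as a first-pass triage only.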

Step 3: Identify potential harms (1 hour)

For each potential harm, document: what is the harm, who could experience it, how likely is it, and how severe would it be. Use this structure:

Harm identification template
Harm 1: Biased outputs — system produces systematically less accurate results for a protected group (e.g., women, people of color)
Affected: Job applicants from protected groups
Likelihood: Medium (common in AI systems trained on historical employment data)
Severity: High (denied employment opportunity)
Mitigation: Annual bias audit; accuracy metrics by demographic group; human review of all rejections
Harm 2: Over-reliance — hiring managers accept AI ranking without review
Affected: All candidates, hiring managers
Likelihood: Medium (common when AI outputs feel authoritative)
Severity: Medium (qualified candidates excluded without adequate human consideration)
Mitigation: Training for HR staff; UI design that presents AI as "suggested ranking" not "final list"; mandatory human review step
Harm 3: Data breach — candidate personal data exposed
Affected: Job applicants
Likelihood: Low (with proper security)
Severity: High
Mitigation: Encryption, access controls, retention limits (GDPR requirement)
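The template above maps naturally onto a structured risk register, which makes it easier to keep the assessment alive (Step 6) and to filter for the harms that need attention first. A minimal sketch, with field names and the three-level scale chosen for illustration rather than mandated by any statute:

```python
from dataclasses import dataclass, field

@dataclass
class Harm:
    """One entry in the risk register, mirroring the template above."""
    name: str
    affected: str
    likelihood: str                          # "low" | "medium" | "high"
    severity: str                            # "low" | "medium" | "high"
    mitigations: list[str] = field(default_factory=list)

register = [
    Harm(
        name="Biased outputs",
        affected="Job applicants from protected groups",
        likelihood="medium",
        severity="high",
        mitigations=[
            "Annual bias audit",
            "Accuracy metrics by demographic group",
            "Human review of all rejections",
        ],
    ),
    Harm(
        name="Data breach",
        affected="Job applicants",
        likelihood="low",
        severity="high",
        mitigations=["Encryption", "Access controls", "Retention limits"],
    ),
]

# Surface high-severity harms for review first.
priority = [h.name for h in register if h.severity == "high"]
print(priority)
```

Keeping the register in version control also gives you the dated, versioned trail that Step 6 calls for.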

Step 4: Assess existing controls

For each harm identified, document what controls you currently have in place. Be honest — "none yet" is a valid answer that tells you where to focus. Controls might include:

- Human review of AI outputs before a decision becomes final
- Bias and accuracy testing, ideally broken out by demographic group
- Logging and monitoring of inputs, outputs, and human overrides
- Access controls, encryption, and data retention limits
- Vendor documentation and contractual commitments (for third-party tools)

Step 5: Calculate residual risk and decide on actions

For each harm: after your existing controls, what is the residual risk? If it is still high, you need additional mitigation. If it is acceptably low, document why.

The EU AI Act does not require zero risk. It requires an "appropriate level of accuracy, robustness and cybersecurity" (Article 15) and adequate mitigation of identified risks; what counts as adequate is proportionate to the severity of the potential harm.
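The likelihood-times-severity judgment in Step 5 can be sketched as a small scoring function. The three-level scale and the thresholds below are illustrative assumptions, not regulatory requirements; the point is to make the residual-risk decision explicit and repeatable.

```python
LEVELS = {"low": 1, "medium": 2, "high": 3}

def residual_risk(likelihood: str, severity: str) -> str:
    """Combine likelihood and severity into a coarse residual-risk rating."""
    score = LEVELS[likelihood] * LEVELS[severity]
    if score >= 6:        # e.g. medium x high, high x high
        return "high"     # additional mitigation needed
    if score >= 3:        # e.g. low x high, medium x medium
        return "medium"   # mitigate where proportionate; document the decision
    return "low"          # document why this is acceptable

print(residual_risk("medium", "high"))  # still high: add controls
print(residual_risk("low", "high"))     # medium: judgment call, write it down
```

Whatever scale you pick, apply it consistently across harms and record the rationale for each threshold, since that rationale is what a regulator will ask about.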

Step 6: Document and keep it alive

Write up the results in a document. For EU AI Act compliance, this document is part of your technical documentation (Article 11 requirement). For GDPR DPIA purposes, it follows a slightly different format but covers the same ground.

The document must be:

- Dated and versioned, so you can show what the assessment said at any point in time
- Updated when the system, its data, or its context of use changes
- Retained and available to regulators on request

What a proportionate assessment looks like

Small business using a third-party hiring tool: A 5-page document describing the tool, your use of it, the harms you identified, the vendor's bias audit results (request these), your human review process, and how candidates can request reconsideration. Review annually. Total effort: 1 day.

Mid-market company building its own hiring AI: Technical documentation, independent bias audit, conformity assessment, logging infrastructure, and quarterly monitoring. Total effort: 4–8 weeks.

Run a structured AI risk assessment

ComplianceIQ's assessment wizard guides you through a structured risk assessment for your AI tools and generates the documentation you need.

Start your AI risk assessment →

Further reading