How to Do an AI Risk Assessment (Step-by-Step Guide)
The EU AI Act requires a risk management system for high-risk AI. Colorado and several other jurisdictions require impact assessments. This guide walks through a practical AI risk assessment process designed to satisfy all of them.
What is an AI risk assessment?
An AI risk assessment is a structured process for identifying potential harms from an AI system, evaluating their likelihood and severity, and documenting how you will mitigate them. It is not a one-time exercise — it should be updated when the system changes, when it is deployed in new contexts, and periodically as a routine check.
The EU AI Act uses the term "risk management system" (Article 9) rather than "risk assessment," but the substance is similar to what most compliance frameworks call a risk assessment. GDPR uses "Data Protection Impact Assessment" (DPIA) for data risks specifically. Colorado SB 205 uses "impact assessment." The NIST AI Risk Management Framework covers the same ground in its Map and Measure functions. These are variations on the same concept.
Who needs to do an AI risk assessment
Under the EU AI Act: every provider of high-risk AI must maintain a risk management system. Deployers of high-risk AI (companies that use, not build, the AI) have lighter but still real obligations.
Under Colorado SB 205: every deployer of a high-risk AI system used to make consequential decisions must complete an impact assessment at least annually.
Under GDPR: a DPIA is required for AI processing that is "likely to result in a high risk" to individuals' rights and freedoms (Article 35). In practice this covers most AI that uses biometric data or special-category data, or that makes automated decisions affecting many people.
If your AI is minimal risk (productivity tools, content creation, recommendation without significant individual consequences), no formal risk assessment is required — but documenting why you classified it as minimal risk is prudent.
Step 1: Define the AI system (30 minutes)
Write a one-page description covering the following (a template sketch follows the list):
- What it does: The intended purpose, in plain language. "This system analyzes CV text and ranks job applicants by predicted job performance."
- Who uses it: Internal users, external users, affected individuals (people who have decisions made about them).
- How it works: Type of AI (classifier, LLM, regression model), what inputs it takes, what outputs it produces.
- Deployment context: Where it is used, in which countries, how often decisions are made.
- Training data: General description of what data the model was trained on (or, for third-party AI, what you know about its training).
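If you maintain several AI systems, it helps to keep these descriptions in a consistent, machine-readable form. Here is a minimal sketch in Python; the `AISystemDescription` name and its fields are illustrative conventions, not drawn from any regulation:

```python
from dataclasses import dataclass

@dataclass
class AISystemDescription:
    """One-page description of an AI system (illustrative template)."""
    name: str
    intended_purpose: str          # plain-language statement of what it does
    users: list[str]               # internal and external users
    affected_individuals: str      # people decisions are made about
    ai_type: str                   # e.g. "classifier", "LLM", "regression model"
    inputs: list[str]
    outputs: list[str]
    deployment_countries: list[str]
    decision_frequency: str        # how often decisions are made
    training_data: str             # what you know about the training data

description = AISystemDescription(
    name="CV Ranker",
    intended_purpose="Analyzes CV text and ranks job applicants "
                     "by predicted job performance.",
    users=["HR recruiters (internal)"],
    affected_individuals="Job applicants",
    ai_type="classifier",
    inputs=["CV text"],
    outputs=["ranked shortlist with scores"],
    deployment_countries=["DE", "FR"],
    decision_frequency="~200 applications per week",
    training_data="Vendor-trained on historical hiring data; "
                  "details per vendor documentation.",
)
```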
Step 2: Classify the risk level
Apply the EU AI Act risk classification. Ask: does this system make decisions that significantly affect individuals in any of these domains? (A triage sketch follows the list.)
- Employment / hiring / performance evaluation → High risk
- Credit scoring / financial services → High risk
- Healthcare diagnosis or treatment → High risk
- Biometric identification → High risk (real-time remote biometric identification in publicly accessible spaces for law enforcement = unacceptable risk: prohibited, with narrow exceptions)
- Education admissions / scoring → High risk
- Critical infrastructure → High risk
- Law enforcement, border control → High risk
- Customer chatbot → Limited risk (transparency obligation only)
- Internal productivity tool → Minimal risk
- Image generation → Minimal to limited risk
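This classification is essentially a lookup from use-case domain to risk tier, which you can encode to triage a portfolio of tools quickly. A sketch using hypothetical domain labels; the `classify_risk` helper is a starting point for triage, not legal advice:

```python
HIGH_RISK_DOMAINS = {
    "employment",                 # hiring, performance evaluation
    "credit",                     # credit scoring, financial services
    "healthcare",                 # diagnosis or treatment
    "biometric_identification",
    "education",                  # admissions, scoring
    "critical_infrastructure",
    "law_enforcement",            # incl. border control
}

LIMITED_RISK_DOMAINS = {"customer_chatbot"}  # transparency obligations only

def classify_risk(domain: str, realtime_public_biometric: bool = False) -> str:
    """Rough EU AI Act triage by use-case domain."""
    if realtime_public_biometric:
        return "unacceptable"     # prohibited, narrow exceptions aside
    if domain in HIGH_RISK_DOMAINS:
        return "high"
    if domain in LIMITED_RISK_DOMAINS:
        return "limited"
    return "minimal"              # e.g. internal productivity tools

print(classify_risk("employment"))  # -> "high"
```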
Step 3: Identify potential harms (1 hour)
For each potential harm, document: what is the harm, who could experience it, how likely is it, and how severe would it be. Use this structure, with one row per harm:

| Harm | Who could experience it | Likelihood (low/medium/high) | Severity (low/medium/high) |
| --- | --- | --- | --- |
| Qualified candidates from underrepresented groups ranked lower due to bias in historical training data | Job applicants | Medium | High |
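The same structure works as a simple machine-readable harm register. A sketch, with an illustrative `Harm` record:

```python
from dataclasses import dataclass

LEVELS = ("low", "medium", "high")

@dataclass
class Harm:
    description: str   # what is the harm
    affected: str      # who could experience it
    likelihood: str    # one of LEVELS
    severity: str      # one of LEVELS

    def __post_init__(self) -> None:
        if self.likelihood not in LEVELS or self.severity not in LEVELS:
            raise ValueError("likelihood and severity must be low/medium/high")

harms = [
    Harm(
        description="Qualified candidates from underrepresented groups "
                    "ranked lower due to bias in historical training data",
        affected="Job applicants",
        likelihood="medium",
        severity="high",
    ),
]
```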
Step 4: Assess existing controls
For each harm identified, document what controls you currently have in place. Be honest: "none yet" is a valid answer that tells you where to focus. Controls might include the following (a coverage-check sketch follows the list):
- Human oversight mechanisms (can a human review and override?)
- Bias testing and monitoring
- Logging of decisions for auditability
- Access restrictions on who can use the system
- User training
- Technical limitations (the system cannot be used in prohibited contexts)
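To make control coverage auditable, record the controls against each harm, including the honest "none yet". A sketch continuing the register above; the mapping and harm labels are illustrative:

```python
# Controls currently in place, per identified harm. An empty list is
# a legitimate "none yet" and flags where to focus next.
controls: dict[str, list[str]] = {
    "Biased ranking of underrepresented candidates": [
        "Human review of every shortlist before rejection",
        "Quarterly bias testing against demographic benchmarks",
    ],
    "Recruiters over-relying on scores": [],  # none yet
}

gaps = [harm for harm, in_place in controls.items() if not in_place]
print("Harms with no controls yet:", gaps)
```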
Step 5: Calculate residual risk and decide on actions
For each harm: after your existing controls, what is the residual risk? If it is still high, you need additional mitigation. If it is acceptably low, document why.
The EU AI Act does not require zero risk. It requires "an appropriate level of accuracy, robustness and cybersecurity" (Article 15) and that residual risks be reduced to an acceptable level (Article 9). What counts as adequate is proportionate to the severity of the potential harm.
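One way to make the residual-risk call repeatable is a simple likelihood × severity matrix applied after existing controls. A sketch; the numeric scores and thresholds below are illustrative conventions, not values from the Act:

```python
SCORE = {"low": 1, "medium": 2, "high": 3}

def residual_risk(likelihood: str, severity: str) -> str:
    """Rate residual risk from post-control likelihood and severity."""
    product = SCORE[likelihood] * SCORE[severity]
    if product >= 6:   # e.g. medium x high, high x high
        return "high: additional mitigation required"
    if product >= 3:   # e.g. low x high, medium x medium
        return "medium: mitigate further or document acceptance rationale"
    return "low: document why this is acceptable"

print(residual_risk("medium", "high"))  # high: additional mitigation required
print(residual_risk("low", "medium"))   # low: document why this is acceptable
```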
Step 6: Document and keep it alive
Write up the results in a document. For EU AI Act compliance, this document is part of your technical documentation (Article 11 requirement). For GDPR DPIA purposes, it follows a slightly different format but covers the same ground.
The document must be:
- Updated when the AI system changes materially
- Reviewed at least annually (Colorado SB 205 requires annual assessment)
- Available to show regulators on request
- Acted upon — the point is not the document, it is the mitigation
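A small automated staleness check can keep the review requirement from slipping. A sketch, with the 365-day window reflecting the at-least-annually point above:

```python
from datetime import date, timedelta

REVIEW_INTERVAL = timedelta(days=365)  # review at least annually

def review_overdue(last_reviewed: date, today: date | None = None) -> bool:
    """Flag an assessment that has gone more than a year without review."""
    return ((today or date.today()) - last_reviewed) > REVIEW_INTERVAL

if review_overdue(date(2024, 1, 15)):
    print("Risk assessment review is overdue")
```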
What a proportionate assessment looks like
Small business using a third-party hiring tool: A 5-page document describing the tool, your use of it, the harms you identified, the vendor's bias audit results (request these), your human review process, and how candidates can request reconsideration. Review annually. Total effort: 1 day.
Mid-market company building its own hiring AI: Technical documentation, independent bias audit, conformity assessment, logging infrastructure, and quarterly monitoring. Total effort: 4–8 weeks.
Run a structured AI risk assessment
ComplianceIQ's assessment wizard guides you through a structured risk assessment for your AI tools and generates the documentation you need.
Start your AI risk assessment →