NIST AI RMF · April 17, 2026 · 13 min read

NIST AI Risk Management Framework: Practical Implementation Guide

The NIST AI RMF (NIST AI 100-1) is the most widely adopted AI governance standard in the US — and increasingly referenced internationally. This guide explains what each of the four functions actually requires, who owns each piece of work, and how it maps to EU AI Act compliance.

What Is the NIST AI RMF?

The NIST Artificial Intelligence Risk Management Framework (AI RMF 1.0) was released by the US National Institute of Standards and Technology in January 2023. It is a voluntary framework, not law, but it has become the de facto US standard for AI governance, partly because NIST pairs it with free, practical implementation resources:

NIST AI RMF Playbook: NIST publishes a companion Playbook document alongside the RMF that gives specific example actions and suggested practices for each subcategory. The Playbook is available free at ai.nist.gov and is the most practical implementation starting point.

The Four Core Functions

The NIST AI RMF organises AI risk management into four functions. They are designed to be iterative, not sequential — you will revisit GOVERN decisions as MAP uncovers new systems and MEASURE reveals new risks.

GOVERN: Establish organisational culture, policies, and accountability

GOVERN sets the conditions for effective AI risk management. It covers leadership accountability, policies, processes, and the organisational culture needed to treat AI risk seriously.

Key outputs:

  • AI risk appetite statement
  • AI governance roles and responsibilities (RACI)
  • AI Acceptable Use Policy
  • AI risk management process documentation
  • Workforce AI risk awareness programme

Who owns it: Senior leadership + Legal/Compliance + HR

MAP: Identify and categorise AI risks in context

MAP involves cataloguing your AI systems, understanding the context they operate in, and identifying the categories of risk each system presents. Output: a prioritised AI risk register.

Key outputs:

  • AI system inventory (all systems used/built)
  • Risk categorisation by system and use case
  • Stakeholder impact analysis (who is affected by each AI system)
  • Dependency mapping (data sources, third-party models)
  • Applicable regulatory scope per system

Who owns it: Product/Engineering + Privacy/Data + Compliance

MEASURE: Analyse and assess AI risks quantitatively and qualitatively

MEASURE involves evaluating the identified risks: how likely are they, what is their impact, and how well are they currently mitigated? This feeds the prioritisation for MANAGE.

Key outputs:

  • Risk likelihood and impact scoring per system
  • Bias and fairness testing results
  • Performance metrics with demographic subgroup breakdowns
  • Third-party AI system assessments
  • Explainability analysis for high-impact systems

Who owns it: Data Science + Legal/Compliance + External Auditors

MANAGE: Prioritise, respond to, and monitor AI risks

MANAGE closes the loop: implement risk controls, monitor their effectiveness, update documentation, and respond to incidents. This is ongoing operational work, not a one-time exercise.

Key outputs:

  • Risk treatment plans per high-risk system
  • Human oversight mechanisms (review, override, escalation)
  • AI incident response playbook
  • Monitoring and alerting for AI system drift (see the sketch below)
  • Periodic review schedule and responsible owner

Who owns it: Product/Engineering + Legal + Operations
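To make the drift-monitoring output above concrete, here is a minimal sketch that assumes you log model scores over time and compare them against a deployment-time baseline. The Population Stability Index (PSI) calculation and the 0.2 alert threshold are common industry conventions rather than NIST requirements, and every name in the example is illustrative.

```python
import numpy as np

def population_stability_index(baseline, current, bins=10):
    """Compare two score distributions; PSI above ~0.2 is a common drift-alert threshold."""
    # Bin both samples using the baseline's quantiles so the comparison is like-for-like
    edges = np.quantile(baseline, np.linspace(0, 1, bins + 1))
    base_pct = np.histogram(np.clip(baseline, edges[0], edges[-1]), bins=edges)[0] / len(baseline)
    curr_pct = np.histogram(np.clip(current, edges[0], edges[-1]), bins=edges)[0] / len(current)
    # Floor the proportions so empty bins do not produce division-by-zero or log(0)
    base_pct = np.clip(base_pct, 1e-6, None)
    curr_pct = np.clip(curr_pct, 1e-6, None)
    return float(np.sum((curr_pct - base_pct) * np.log(curr_pct / base_pct)))

# Example: compare this week's model scores against the validation-time baseline
baseline_scores = np.random.beta(2, 5, 10_000)   # stand-in for scores at deployment
current_scores = np.random.beta(2, 4, 10_000)    # stand-in for this week's scores
psi = population_stability_index(baseline_scores, current_scores)
if psi > 0.2:
    print(f"PSI {psi:.3f} exceeds threshold: trigger the AI incident/review process")
```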

GOVERN in Practice: First 90 Days

Most organisations struggle to start GOVERN because it feels abstract. Here is a concrete 90-day GOVERN programme for a mid-market company:

Week 1–2: Appoint AI accountability owner
Named person (Chief Risk Officer, DPO, or VP Engineering) with explicit responsibility for AI risk management. Announce internally.

Week 2–4: Draft AI Risk Appetite Statement
One page. What AI uses are permitted without approval? Which require review? Which are prohibited? Executive sign-off required.

Week 3–6: Publish AI Acceptable Use Policy
What employees can and cannot use AI for. Covers personal data, confidential information, customer data, output review requirements.

Week 4–8: Establish AI review process
How new AI tools are approved before use. Who reviews? What criteria? Where are approvals recorded?

Week 6–10: Run initial workforce awareness
30-minute mandatory training on AI policy. Focus: what to do when uncertain, how to report AI concerns, prohibited uses.

Week 8–12: Board/senior leadership briefing
Present AI governance structure, risk appetite, current AI inventory, and initial risk assessment. Get formal sign-off.

MAP in Practice: Building Your AI Inventory

MAP starts with knowing what AI you have. Most organisations dramatically undercount their AI systems because they only count internally-built AI — not the AI features embedded in SaaS tools they subscribe to.

For a thorough MAP exercise, systematically survey:

AI Category | Common Examples | Often Missed?
Internal AI tools | ChatGPT/Claude enterprise, Copilot, custom GPTs | No
HR/ATS AI features | Resume screening in Workday, Greenhouse, Lever | Yes — buried in ATS settings
Customer-facing AI | Chatbots, recommendation engines, pricing AI | No
Security AI | Behavioural anomaly detection, SIEM ML, fraud detection | Yes — owned by security team
Marketing AI | Audience targeting, content personalisation, lead scoring | Yes — owned by marketing
Finance/Credit AI | Credit decisioning in payment tools, fraud scoring | Yes — owned by finance
Vendor-embedded AI | AI features in CRM, ERP, support tools | Yes — most common gap
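If you want the inventory in a machine-readable form rather than a spreadsheet, a structured record per system works well. The sketch below is one possible minimal schema; the field names and the example entry are illustrative, not a NIST-mandated format.

```python
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    """One row of the AI system inventory built during MAP (fields are illustrative)."""
    name: str
    category: str                # e.g. "vendor-embedded AI", "customer-facing AI"
    owner: str                   # accountable team or person
    vendor: str | None           # None for internally built systems
    data_sources: list[str] = field(default_factory=list)
    affected_stakeholders: list[str] = field(default_factory=list)
    regulatory_scope: list[str] = field(default_factory=list)

inventory = [
    AISystemRecord(
        name="Resume screening (ATS feature)",
        category="HR/ATS AI features",
        owner="People Operations",
        vendor="Greenhouse",
        data_sources=["candidate applications"],
        affected_stakeholders=["job applicants"],
        regulatory_scope=["EU AI Act Annex III (employment)"],
    ),
]
```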

MEASURE in Practice: Risk Scoring AI Systems

Once you have your inventory, MEASURE requires assessing each system's risk profile. A practical scoring approach combines two dimensions: the potential impact of the system on the people it affects, and how autonomously it operates (from human-reviewed suggestions to fully automated decisions).

High impact + high autonomy = highest priority for MANAGE interventions. Low impact + low autonomy = low priority. The NIST AI RMF Playbook provides a more detailed five-dimension scoring framework if you need more granularity.
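As a starting point, here is a minimal sketch of the two-dimension version. The 1–5 scales, the multiplicative score, and the priority bands are illustrative choices for this example, not part of the framework.

```python
def risk_priority(impact: int, autonomy: int) -> str:
    """Combine impact and autonomy ratings (each 1-5) into a MANAGE priority band."""
    score = impact * autonomy          # simple multiplicative score, range 1-25
    if score >= 15:
        return "high - schedule MANAGE interventions now"
    if score >= 6:
        return "medium - mitigate on the normal review cycle"
    return "low - monitor, no immediate action"

# Example: a customer-facing pricing model that acts without human review
print(risk_priority(impact=4, autonomy=5))   # -> high priority
```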

How NIST AI RMF Maps to EU AI Act

If you operate in both the US and EU, you can substantially reduce duplication by aligning your NIST AI RMF implementation with EU AI Act requirements. Key mappings:

NIST AI RMF | EU AI Act equivalent | Same work?
GOVERN: AI Risk Appetite Statement | Article 9: Risk management system | Partial — combine into single document
MAP: AI system inventory | Article 11: Technical documentation (system record) | Strong overlap — same data
MAP: Stakeholder impact analysis | Annex III: High-risk classification criteria | Strong overlap — drives classification
MEASURE: Bias and performance testing | Article 10: Data governance + Article 15: Accuracy | Strong overlap — same test results
MEASURE: Third-party assessments | Article 17: Quality management system | Partial overlap
MANAGE: Human oversight mechanisms | Article 14: Human oversight (mandatory) | Strong overlap — same controls
MANAGE: Incident response | Article 73: Serious incident reporting | Partial — EU Act has reporting to authorities
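One way to put this mapping to work is a single control register in which each piece of work is tagged with both the NIST function and the EU AI Act article it supports, so evidence is gathered once and reported against both frameworks. The structure below is an illustrative sketch, not an official crosswalk.

```python
# Illustrative shared control register: one entry per piece of work,
# tagged with the NIST AI RMF function and the EU AI Act article it supports.
controls = [
    {"work": "AI system inventory", "nist": "MAP", "eu_ai_act": "Article 11", "overlap": "strong"},
    {"work": "Bias and performance testing", "nist": "MEASURE", "eu_ai_act": "Articles 10 and 15", "overlap": "strong"},
    {"work": "Human oversight mechanisms", "nist": "MANAGE", "eu_ai_act": "Article 14", "overlap": "strong"},
    {"work": "Incident response playbook", "nist": "MANAGE", "eu_ai_act": "Article 73", "overlap": "partial"},
]

# Pull everything relevant to an EU AI Act Article 14 review in one query
article_14_evidence = [c for c in controls if c["eu_ai_act"] == "Article 14"]
```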

Common Implementation Mistakes

Implement NIST AI RMF with ComplianceIQ

ComplianceIQ maps your AI systems against NIST AI RMF sub-categories and EU AI Act simultaneously — so you can work from one platform rather than maintaining two separate compliance programmes.

Start Your NIST Assessment