NIST AI Risk Management Framework: Practical Implementation Guide
The NIST AI RMF (NIST AI 100-1) is the most widely adopted AI governance standard in the US — and increasingly referenced internationally. This guide explains what each of the four functions actually requires, who owns each piece of work, and how it maps to EU AI Act compliance.
What Is the NIST AI RMF?
The NIST Artificial Intelligence Risk Management Framework (AI RMF 1.0) was released by the US National Institute of Standards and Technology in January 2023. It is a voluntary framework — it is not law — but it has become the de facto US standard for AI governance. Key reasons organisations adopt it:
- Referenced by US federal executive orders on AI (EO 14110, 2023) and subsequent guidance
- Required or strongly preferred by US federal agency contractors and vendors
- Used as a baseline by EU AI Act conformity assessment bodies as evidence of good governance
- Recognised by ISO/IEC AI governance standards bodies as compatible with ISO 42001
- Referenced in FTC, SEC, and CFPB AI guidance as a benchmark for responsible AI
NIST AI RMF Playbook: NIST publishes a companion Playbook alongside the RMF that gives specific example actions and suggested practices for each subcategory. The Playbook is available free at ai.nist.gov and is the most practical starting point for implementation.
The Four Core Functions
The NIST AI RMF organises AI risk management into four functions. They are designed to be iterative, not sequential — you will revisit GOVERN decisions as MAP uncovers new systems and MEASURE reveals new risks.
GOVERN sets the conditions for effective AI risk management. It covers leadership accountability, policies, processes, and the organisational culture needed to treat AI risk seriously.
Key outputs:
- AI risk appetite statement
- AI governance roles and responsibilities (RACI)
- AI Acceptable Use Policy
- AI risk management process documentation
- Workforce AI risk awareness programme
Who owns it: Senior leadership + Legal/Compliance + HR
MAP involves cataloguing your AI systems, understanding the context they operate in, and identifying the categories of risk each system presents. Output: a prioritised AI risk register.
Key outputs:
- AI system inventory (all systems used/built)
- Risk categorisation by system and use case
- Stakeholder impact analysis (who is affected by each AI system)
- Dependency mapping (data sources, third-party models)
- Applicable regulatory scope per system
Who owns it: Product/Engineering + Privacy/Data + Compliance
MEASURE involves evaluating the identified risks: how likely are they, what is their impact, how well are they currently mitigated? This feeds the prioritisation for MANAGE.
Key outputs:
- Risk likelihood and impact scoring per system
- Bias and fairness testing results
- Performance metrics with demographic subgroup breakdowns
- Third-party AI system assessments
- Explainability analysis for high-impact systems
Who owns it: Data Science + Legal/Compliance + External Auditors
MANAGE closes the loop: implement risk controls, monitor their effectiveness, update documentation, and respond to incidents. This is ongoing operational work, not a one-time exercise.
Key outputs:
- Risk treatment plans per high-risk system
- Human oversight mechanisms (review, override, escalation)
- AI incident response playbook
- Monitoring and alerting for AI system drift
- Periodic review schedule and responsible owner
Who owns it: Product/Engineering + Legal + Operations
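The "monitoring and alerting for AI system drift" output above can start very simply: a scheduled check that compares a live metric against the baseline recorded at deployment and alerts past a tolerance. A minimal sketch; the metric, baseline value, and 5% tolerance are illustrative assumptions, not values prescribed by NIST:

```python
def check_drift(baseline: float, current: float, tolerance: float = 0.05) -> bool:
    """Return True when the live metric has moved beyond the tolerance,
    measured as relative change against the baseline recorded at deployment."""
    if baseline == 0:
        raise ValueError("baseline must be non-zero")
    return abs(current - baseline) / abs(baseline) > tolerance

# Example: acceptance rate for a screening model has fallen from 42% to 31%
if check_drift(baseline=0.42, current=0.31):
    print("ALERT: metric drift detected, trigger a MANAGE review")
```

The point of keeping the check this small is that it can be wired into existing monitoring infrastructure on day one, then refined (per-subgroup metrics, statistical tests) as the MEASURE work matures.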
GOVERN in Practice: First 90 Days
Most organisations struggle to start GOVERN because it feels abstract. Here is a concrete 90-day GOVERN programme for a mid-market company:
1. Appoint an AI accountability owner: a named person (Chief Risk Officer, DPO, or VP Engineering) with explicit responsibility for AI risk management. Announce the appointment internally.
2. Draft an AI Risk Appetite Statement: one page. Which AI uses are permitted without approval? Which require review? Which are prohibited? Executive sign-off required.
3. Publish an AI Acceptable Use Policy: what employees can and cannot use AI for, covering personal data, confidential information, customer data, and output review requirements.
4. Establish an AI review process: how new AI tools are approved before use. Who reviews? Against what criteria? Where are approvals recorded?
5. Run initial workforce awareness training: a mandatory 30-minute session on the AI policy, focused on what to do when uncertain, how to report AI concerns, and prohibited uses.
6. Brief the board and senior leadership: present the AI governance structure, risk appetite, current AI inventory, and initial risk assessment. Get formal sign-off.
MAP in Practice: Building Your AI Inventory
MAP starts with knowing what AI you have. Most organisations dramatically undercount their AI systems because they count only internally built AI, missing the AI features embedded in the SaaS tools they subscribe to.
For a thorough MAP exercise, systematically survey:
| AI Category | Common Examples | Often Missed? |
|---|---|---|
| Internal AI tools | ChatGPT/Claude enterprise, Copilot, custom GPTs | No |
| HR/ATS AI features | Resume screening in Workday, Greenhouse, Lever | Yes — buried in ATS settings |
| Customer-facing AI | Chatbots, recommendation engines, pricing AI | No |
| Security AI | Behavioural anomaly detection, SIEM ML, fraud detection | Yes — owned by security team |
| Marketing AI | Audience targeting, content personalisation, lead scoring | Yes — owned by marketing |
| Finance/Credit AI | Credit decisioning in payment tools, fraud scoring | Yes — owned by finance |
| Vendor-embedded AI | AI features in CRM, ERP, support tools | Yes — most common gap |
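One lightweight way to run the survey above is to capture each system as a structured inventory record, so vendor-embedded AI is recorded alongside internally built tools. A sketch under assumed field names; nothing here is a NIST-mandated schema:

```python
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    """One row in the MAP inventory. Field names are illustrative."""
    name: str
    category: str                      # e.g. "HR/ATS AI features"
    owner: str                         # accountable team, not just the purchaser
    vendor_embedded: bool              # AI feature inside a subscribed SaaS tool?
    data_sources: list[str] = field(default_factory=list)
    regulatory_scope: list[str] = field(default_factory=list)

inventory = [
    AISystemRecord("Resume screening", "HR/ATS AI features", "HR", True,
                   ["candidate CVs"], ["EU AI Act Annex III"]),
    AISystemRecord("Support chatbot", "Customer-facing AI", "Support", False,
                   ["ticket history"]),
]

# Surface the "most common gap": vendor-embedded AI features
gaps = [s.name for s in inventory if s.vendor_embedded]
```

Even a spreadsheet with these columns works; the value is in making the vendor-embedded systems visible next to the ones engineering already knows about.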
MEASURE in Practice: Risk Scoring AI Systems
Once you have your inventory, MEASURE requires assessing each system's risk profile. A practical scoring approach combines two dimensions:
- Impact: If this AI makes a wrong decision, how serious is the harm? (Low = minor inconvenience; High = financial, health, or legal harm to individuals)
- Autonomy: How much human review happens before the AI output affects someone? (High autonomy = fully automated; Low autonomy = AI suggestion, human decides)
High impact + high autonomy = highest priority for MANAGE interventions. Low impact + low autonomy = low priority. The NIST AI RMF Playbook provides a more detailed five-dimension scoring framework if you need more granularity.
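As a sketch of the two-dimension approach described above (the level names and score thresholds are illustrative choices, not taken from the Playbook):

```python
LEVELS = {"low": 1, "medium": 2, "high": 3}

def manage_priority(impact: str, autonomy: str) -> str:
    """Combine impact and autonomy into a MANAGE priority band.
    Thresholds are illustrative; tune them to your risk appetite."""
    score = LEVELS[impact] * LEVELS[autonomy]
    if score >= 6:        # high impact with high or medium autonomy
        return "high"
    if score >= 3:
        return "medium"
    return "low"

manage_priority("high", "high")   # fully automated, serious potential harm
manage_priority("low", "low")     # human decides, minor inconvenience at worst
```

A multiplicative score is one reasonable choice here: it keeps fully automated, high-harm systems clearly separated from everything else, which is the prioritisation the framework is after.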
How NIST AI RMF Maps to EU AI Act
If you operate in both the US and EU, you can substantially reduce duplication by aligning your NIST AI RMF implementation with EU AI Act requirements. Key mappings:
| NIST AI RMF | EU AI Act equivalent | Same work? |
|---|---|---|
| GOVERN: AI Risk Appetite Statement | Article 9: Risk management system | Partial — combine into single document |
| MAP: AI system inventory | Article 11: Technical documentation (system record) | Strong overlap — same data |
| MAP: Stakeholder impact analysis | Annex III: High-risk classification criteria | Strong overlap — drives classification |
| MEASURE: Bias and performance testing | Article 10: Data governance + Article 15: Accuracy | Strong overlap — same test results |
| MEASURE: Third-party assessments | Article 17: Quality management system | Partial overlap |
| MANAGE: Human oversight mechanisms | Article 14: Human oversight (mandatory) | Strong overlap — same controls |
| MANAGE: Incident response | Article 73: Serious incident reporting | Partial — EU Act has reporting to authorities |
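If you track dual compliance programmatically, the mapping table above can be kept as structured crosswalk data so one piece of evidence is linked to both frameworks. The entries and status values below are illustrative, abbreviated from the table:

```python
# Crosswalk entries: which NIST AI RMF outputs reuse EU AI Act evidence.
CROSSWALK = [
    {"nist": "MAP: AI system inventory",
     "eu": "Art. 11 technical documentation", "overlap": "strong"},
    {"nist": "MANAGE: Human oversight",
     "eu": "Art. 14 human oversight", "overlap": "strong"},
    {"nist": "MANAGE: Incident response",
     "eu": "Art. 73 serious incident reporting", "overlap": "partial"},
]

# Outputs where one artefact can serve both programmes as-is
reusable = [e["nist"] for e in CROSSWALK if e["overlap"] == "strong"]
```

The "partial" entries are where the duplication risk lives: the EU Act adds obligations (such as reporting to authorities) that the NIST artefact alone does not cover.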
Common Implementation Mistakes
- Treating NIST AI RMF as a documentation exercise: The framework is designed to change organisational behaviour — not generate reports. If your GOVERN outputs sit in a folder nobody reads, they are not working.
- Starting with MEASURE before completing MAP: You cannot measure risks for systems you have not inventoried. Complete your AI inventory first, even if imperfect.
- Assigning AI governance to IT alone: GOVERN requires legal, HR, and senior leadership. IT can build the infrastructure but cannot set risk appetite on behalf of the organisation.
- One-time implementation: The RMF explicitly requires continuous cycles. AI systems change, regulations change, and your risk landscape evolves. Build a review cadence from the start.
Implement NIST AI RMF with ComplianceIQ
ComplianceIQ maps your AI systems against NIST AI RMF sub-categories and EU AI Act simultaneously — so you can work from one platform rather than maintaining two separate compliance programmes.
Start Your NIST Assessment