NIST AI Risk Management Framework
GOVERN · MAP · MEASURE · MANAGE

NIST AI RMF: A Common Language for AI Risk

Most organizations deploying AI have no structured way to think about its risks. The NIST AI Risk Management Framework — developed with 240+ organizations — gives them a shared vocabulary, four core functions, and 72 subcategories to identify, assess, and manage AI risks at every stage of the lifecycle.

240+ contributing orgs · 4 core functions · 72 subcategories · v1.0 released Jan 2023
Click any function in the diagram to learn what it covers.
GOVERN is not just one of four functions — it wraps all the others. Without organizational culture, policies, and accountability, MAP, MEASURE, and MANAGE are activities without ownership. This is the key architectural insight of the AI RMF.
Click a milestone node to see what changed at each stage of the framework's development.
Why Voluntary?
The AI RMF is intentionally not regulation. NIST designed it to be flexible enough for organizations of any size, sector, or risk tolerance. Being voluntary means it can be adopted incrementally — start with one function, then expand. It complements (rather than replaces) existing risk frameworks like NIST CSF or ISO 31000.
Who Uses It?
Government agencies, healthcare systems, financial institutions, tech companies, and critical infrastructure operators. Any organization that develops, deploys, or procures AI systems. The framework is designed for all AI actors: developers, deployers, evaluators, and affected communities.
How It Differs from Regulation
Regulation tells you what you must do. The AI RMF tells you how to think about what you should do. It provides structure for risk conversations rather than compliance checklists. The EU AI Act and US Executive Order on AI both reference it as a baseline for responsible AI practice.

7 Characteristics Every Trustworthy AI System Needs

Trustworthiness is the foundation. NIST defines 7 characteristics that trustworthy AI systems must exhibit simultaneously — not sequentially. They guide risk identification across all four functions and define what “good” looks like for an AI system.

Click a spoke to expand it and see the full NIST definition with a real-world example.
These 7 characteristics are simultaneous design requirements — not a checklist to complete in order. A system that is safe but not fair, reliable but not explainable, or accurate but not privacy-enhanced is not fully trustworthy by NIST standards.
Accountable & Transparent
Organizations and individuals responsible for AI systems are answerable for outcomes and decisions. Transparency means stakeholders can access meaningful information about the system — its purpose, design, data, and limitations. Example: a hiring AI that discloses its decision criteria to candidates and HR.
Explainable & Interpretable
Explainable AI can provide reasons for its outputs in human-understandable terms. Interpretable AI allows users to understand, appropriately trust, and effectively manage the system. Example: a loan decision model that outputs “declined because debt-to-income ratio exceeds threshold.”
Privacy-Enhanced
AI systems should incorporate privacy protections by design — data minimization, purpose limitation, anonymization. This includes not just legal compliance (GDPR, HIPAA) but proactive respect for individuals' autonomy over their data. Example: a medical AI trained on federated data that never centralizes patient records.
Reliable
Reliable AI performs consistently and as expected across contexts and over time. This includes accuracy, precision, and stability. Reliability failures are often subtle — a model that works well in testing but degrades under distribution shift in production is unreliable. Example: a fraud detection model that maintains performance as transaction patterns evolve.
Safe
Safe AI systems do not cause unintended physical, psychological, financial, or societal harm. Safety goes beyond preventing failures — it includes considering how the system could be misused or cause harm when used correctly. Example: a clinical decision support AI that flags its own uncertainty rather than producing confident wrong answers.
Secure & Resilient
Secure AI systems resist adversarial attacks (data poisoning, model inversion, prompt injection). Resilient systems maintain core functionality under stress or attack and recover gracefully from failures. Example: a content moderation system that remains effective even when adversaries attempt to evade detection.
Fair with Bias Managed
AI systems should treat individuals and groups equitably. This requires actively identifying, measuring, and mitigating harmful biases in data, models, and outputs. Fairness is multi-dimensional — equal accuracy across groups, equal error rates, equal opportunity — and trade-offs must be made explicitly. Example: a recidivism prediction model audited for equal false positive rates across demographic groups.

GOVERN: Set the Culture Before You Set the Tools

GOVERN is the foundation layer. It establishes the organizational culture, policies, accountability structures, roles, and processes that make AI risk management actually happen. Without GOVERN, MAP, MEASURE, and MANAGE are activities without ownership — isolated audits with no organizational backing.

6 categories · ~20 subcategories · scope: entire organization · wraps all other functions
Hover a node to see what each GOVERN category covers and example actions organizations take.
GOVERN categories:
G1: Policies, processes, and practices across the organization related to AI risk
G2: Accountability, with roles, responsibilities, and authorities defined
G3: Workforce diversity, equity, inclusion, and AI risk competencies
G4: Organizational teams committed to a culture that considers and communicates AI risk
G5: Processes for robust engagement with relevant AI actors and affected communities
G6: Policies and procedures for AI risks arising from third-party software, data, and supply chains
What GOVERN Is Not
GOVERN is not a compliance checkbox. It's not about writing a policy document and filing it away. It's about building a living culture where risk questions are asked at every AI decision: Who is accountable if this fails? What are the acceptable risk levels? Who has the authority to pause or stop this system? These must be answered before deployment, not after an incident.
Who Owns GOVERN?
Everyone — but with clear accountability. C-suite and board: set risk appetite and resource commitment. Legal and compliance: translate risk appetite into policy. AI development teams: implement governance requirements in practice. HR and workforce: ensure staff have the skills to recognize and escalate AI risks. GOVERN fails when it's owned by one team in isolation.
What is a risk culture?
A risk culture is the set of shared values, beliefs, and norms that determine how an organization identifies, discusses, and responds to risk. In AI, a healthy risk culture means teams proactively surface concerns rather than suppress them, failures are investigated not hidden, and risk trade-offs are made explicitly with documented rationale. GOVERN G1 and G4 directly address building this culture.
Who is accountable when AI fails?
GOVERN G2 requires that accountability is defined before deployment. This includes: who owns the AI system's outcomes (the deployer, not just the developer), who monitors it in production, who has authority to pause or shut it down, and who communicates to affected parties if harm occurs. The absence of clear accountability is itself a governance failure.
How does GOVERN connect to MANAGE?
GOVERN sets the authority and policies that MANAGE uses to act. When MEASURE identifies that a system has drifted outside acceptable fairness bounds, MANAGE needs GOVERN's pre-established authority structure to decide: who escalates this, who decides to retrain or roll back, who communicates to users? Without GOVERN, MANAGE can identify the problem but has no institutional mechanism to respond.

MAP: Understand What Could Go Wrong

Before measuring or managing risk, you must identify it. MAP establishes context — what the AI system is, who it affects, what benefits it provides, and what risks it introduces. MAP outputs the risk register that MEASURE and MANAGE act on.

5 categories · step 1: Context · step 3: Risks · step 5: Impact
Click a step in the MAP pipeline to see what questions to ask at each stage.

Risk Register: Plot Risks by Likelihood × Impact

MAP produces a risk register — a catalogue of identified AI risks placed by likelihood and impact. Click any risk to see its details and which MAP category covers it.

Click a risk dot to see its name, category, likelihood, and impact.
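If your register lives in code rather than a spreadsheet, the structure behind a chart like this is small. A minimal Python sketch (the RiskEntry schema and the likelihood × impact scoring convention are illustrative, not NIST-specified):

```python
from dataclasses import dataclass

@dataclass
class RiskEntry:
    """One row of a MAP risk register (illustrative schema, not NIST's)."""
    name: str
    map_category: str   # e.g. "MAP 3" covers technical and societal risks
    likelihood: float   # 0.0 (rare) to 1.0 (near certain)
    impact: float       # 0.0 (negligible) to 1.0 (catastrophic)

    @property
    def priority_score(self) -> float:
        # Simplest convention: the likelihood x impact product.
        return self.likelihood * self.impact

register = [
    RiskEntry("Training data bias", "MAP 3", likelihood=0.7, impact=0.8),
    RiskEntry("Model drift in production", "MAP 3", likelihood=0.6, impact=0.5),
    RiskEntry("Adversarial evasion", "MAP 3", likelihood=0.3, impact=0.7),
]

# The ordering MEASURE and MANAGE act on: highest priority first.
for risk in sorted(register, key=lambda r: r.priority_score, reverse=True):
    print(f"{risk.name:28s} {risk.priority_score:.2f}")
```

Many organizations weight impact more heavily than likelihood; what matters for MAP is that the ranking rule is explicit and documented.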
Intended vs Unintended Use
MAP 1 requires documenting both. A medical AI intended to assist radiologists may be unintentionally used as a replacement for radiologists in under-resourced settings. The unintended use case has different risk profiles — and must be mapped even if the deployer doesn't plan for it, because users will find it.
AI Actors and Stakeholders
MAP identifies all parties: developers, operators, deployers, users, affected individuals, and society. Each actor has different risk exposure and different obligations. A facial recognition system deployed by police affects citizens who never interact with it directly — they are stakeholders that MAP 5 requires you to consider.
Cataloguing AI Risks
MAP 3 covers risks across two dimensions: technical (model failures, data quality, adversarial attacks) and societal (bias, privacy, labor displacement, concentration of power). Both must be mapped. A system with zero technical failures can still cause profound harm through systematic unfairness.
What is “context” in MAP?
Context in MAP means: what problem does this AI system solve? Who built it, for whom, under what constraints? What data does it use? In what environment will it operate? Who will use it and who will be affected? Answering these questions (MAP 1) is the prerequisite for all subsequent risk identification. Without context, you can't know what risks are plausible or significant.
How do you categorize AI harms?
NIST categorizes AI harms along several dimensions: the type of harm (physical, psychological, financial, reputational, societal), the affected party (individual, group, organization, society, environment), the severity (minor, significant, catastrophic), and the reversibility (temporary, permanent). MAP 3 requires assessing risks across all these dimensions, not just the most obvious ones.
What is a MAP 5 societal impact assessment?
MAP 5 covers impacts beyond immediate users — to communities, society, and democratic institutions. This includes: effects on labor markets (job displacement), effects on power concentration (who controls this AI?), environmental impact (compute costs), and effects on civil liberties. This is often the hardest category to assess because the impacts are diffuse and long-term.

MEASURE: Turn Risk Awareness Into Numbers

MEASURE takes the risks identified in MAP and asks: how bad are they, actually? It defines metrics, runs evaluations, sets up monitoring, and builds the feedback loops that keep measurements current. Good measurement is what separates “we know it’s risky” from “we know how risky it is.”

4 categories · MS 1: Metrics · MS 2: Evaluation · MS 4: Feedback

Interactive Risk Dashboard

Use the sliders to set measurement values for four AI trustworthiness dimensions. Watch the dashboard update live and see the composite score change.

Sliders: Accuracy · Fairness · Robustness · Transparency
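There is no official NIST composite formula; a dashboard like this typically reduces to a weighted aggregate whose weights are themselves a GOVERN-level policy choice. A minimal sketch (the dimension names and equal default weights are our assumptions):

```python
def composite_trust_score(scores: dict[str, float],
                          weights: dict[str, float] | None = None) -> float:
    """Weighted mean of 0-100 dimension scores; the weights are a policy choice."""
    weights = weights or {dim: 1.0 for dim in scores}
    total = sum(weights[dim] for dim in scores)
    return sum(scores[dim] * weights[dim] for dim in scores) / total

print(composite_trust_score(
    {"accuracy": 92, "fairness": 70, "robustness": 55, "transparency": 80}
))  # 74.25 -- the weakest dimension drags the composite down
```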
How do you measure AI fairness?
Fairness is multi-dimensional and context-dependent. Common metrics include: demographic parity (equal selection rates across groups), equalized odds (equal true/false positive rates), and individual fairness (similar individuals treated similarly). MEASURE requires choosing the right metric for the deployment context — there is no universal fairness metric, and optimizing one often degrades another.
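To make these concrete: both demographic parity and equalized odds can be computed in a few lines from predictions, labels, and group membership. A minimal NumPy sketch (the function names are ours, not NIST's; a production audit should add confidence intervals and handle more than two groups):

```python
import numpy as np

def demographic_parity_gap(y_pred: np.ndarray, group: np.ndarray) -> float:
    """Difference in selection rates between two groups (0 = parity)."""
    return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

def equalized_odds_gap(y_true, y_pred, group) -> float:
    """Worst gap in true-positive and false-positive rates across two groups."""
    gaps = []
    for label in (0, 1):  # label 1 gives the TPR gap, label 0 the FPR gap
        mask = y_true == label
        rate_a = y_pred[mask & (group == 0)].mean()
        rate_b = y_pred[mask & (group == 1)].mean()
        gaps.append(abs(rate_a - rate_b))
    return max(gaps)

# Toy data: 8 individuals, two groups of four.
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
y_pred = np.array([1, 0, 1, 0, 1, 0, 1, 0])
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])
print(demographic_parity_gap(y_pred, group))      # 0.0 -- equal selection rates
print(equalized_odds_gap(y_true, y_pred, group))  # 0.33 -- unequal error rates
```

Note how the toy data passes demographic parity while failing equalized odds: this is the metric trade-off the paragraph above describes.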
What is robustness testing?
Robustness testing evaluates how AI performance degrades under challenging conditions: distribution shift (different data than training), adversarial inputs (crafted to fool the model), edge cases (rare but plausible scenarios), and noisy data. MEASURE 2 requires systematic robustness evaluation before deployment and periodic re-evaluation as the deployment environment evolves.
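The simplest robustness curve is accuracy as a function of input perturbation. A hedged sketch, where model_predict is a stand-in for your real inference call and Gaussian noise stands in for the richer shift and adversarial tests MEASURE 2 expects:

```python
import numpy as np

rng = np.random.default_rng(0)

def model_predict(x: np.ndarray) -> np.ndarray:
    # Placeholder model: classifies by feature sum. Substitute real inference.
    return (x.sum(axis=1) > 0).astype(int)

def accuracy_under_noise(x, y, noise_scales):
    """Accuracy at increasing Gaussian perturbation -- a crude robustness curve."""
    results = {}
    for scale in noise_scales:
        x_noisy = x + rng.normal(0.0, scale, size=x.shape)
        results[scale] = (model_predict(x_noisy) == y).mean()
    return results

x = rng.normal(size=(1000, 5))
y = model_predict(x)  # clean-input predictions as the reference labels
for scale, acc in accuracy_under_noise(x, y, [0.0, 0.5, 1.0, 2.0]).items():
    print(f"noise={scale:.1f}  accuracy={acc:.2%}")
```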
MEASURE 1.3: why internal experts matter
MEASURE 1.3 covers internal expert review: red-teaming and adversarial testing by people who understand both the AI system and the deployment domain. External auditors are valuable, but domain context matters. A healthcare AI risk evaluator needs both ML expertise and clinical knowledge to find meaningful failure modes. NIST recommends diverse expert teams including ethicists, domain experts, and affected community representatives.

MANAGE: Turn Measurement Into Action

MANAGE closes the loop. It prioritizes risks from MEASURE, implements mitigation strategies, monitors ongoing operation, and handles residual risk. It includes the hard decision of whether to deploy, pause, or shut down an AI system entirely.

4 categories · MG1: Mitigate · MG2: Respond · MG3: Monitor
Click a card to see example actions for each MANAGE category.

Risk Response Options

Every identified risk requires a response decision. Hover each quadrant to understand when to use each strategy.

Hover a quadrant to see when to use each risk response strategy.
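In code, the quadrant logic is a two-threshold lookup. One common convention is sketched below (the interactive's exact quadrant assignments may differ):

```python
def response_strategy(likelihood: float, impact: float,
                      threshold: float = 0.5) -> str:
    """Map a risk's position on the likelihood x impact grid to a response.

    A common convention:
      low likelihood / low impact   -> accept
      high likelihood / low impact  -> mitigate
      low likelihood / high impact  -> transfer (insurance, contracts)
      high likelihood / high impact -> avoid (redesign or do not deploy)
    """
    if impact < threshold:
        return "accept" if likelihood < threshold else "mitigate"
    return "transfer" if likelihood < threshold else "avoid"

print(response_strategy(likelihood=0.8, impact=0.9))  # avoid
print(response_strategy(likelihood=0.2, impact=0.9))  # transfer
```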
MANAGE explicitly includes the option to NOT deploy. If risk cannot be adequately mitigated to an acceptable level, the framework supports pausing or discontinuing an AI system. This decision requires the accountability structures established in GOVERN to be meaningful.
What is residual risk?
Residual risk is the risk that remains after mitigation measures have been applied. No mitigation eliminates risk entirely — MANAGE MG4 requires explicitly tracking, documenting, and communicating residual risks to relevant stakeholders. Residual risk must be within the organization's accepted risk tolerance (defined in GOVERN) before a system can be deployed or continue operating.
How does MANAGE connect to incident response?
MANAGE MG2 and MG3 together define the incident response pathway. MG2 covers planned responses to anticipated risks. MG3 covers monitoring to detect when something unexpected happens. When monitoring (MG3) detects an incident, the response plan (MG2) is activated. This requires pre-established escalation paths, communication protocols, and rollback procedures — all of which are governed by GOVERN's accountability structures.
When should you stop an AI system?
MANAGE provides guidance: when monitored metrics indicate unacceptable performance degradation, when a new risk is identified that cannot be mitigated within acceptable timelines, when the deployment context has changed (new regulations, new stakeholder concerns, unexpected use patterns), or when an incident has caused harm. The key is that the decision criteria for stopping should be pre-defined in GOVERN — not improvised during a crisis.
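Pre-defining those criteria can be as mundane as thresholds that monitoring checks on every evaluation cycle. A sketch with made-up thresholds (the values are GOVERN policy choices, not NIST numbers):

```python
# Illustrative pre-defined stop criteria. The values come from GOVERN
# risk tolerance decisions, not from the framework itself.
STOP_CRITERIA = {
    "accuracy_floor": 0.90,        # pause if accuracy falls below this
    "fairness_gap_ceiling": 0.10,  # pause if group disparity exceeds this
    "incident_harm": True,         # stop immediately on confirmed harm
}

def deployment_decision(metrics: dict) -> str:
    """MG3-style monitoring check against GOVERN-defined thresholds."""
    if metrics.get("confirmed_harm") and STOP_CRITERIA["incident_harm"]:
        return "STOP: activate incident response (MG2)"
    if metrics["accuracy"] < STOP_CRITERIA["accuracy_floor"]:
        return "PAUSE: unacceptable performance degradation"
    if metrics["fairness_gap"] > STOP_CRITERIA["fairness_gap_ceiling"]:
        return "PAUSE: fairness bound exceeded, escalate per GOVERN"
    return "CONTINUE: within accepted residual risk"

print(deployment_decision(
    {"accuracy": 0.93, "fairness_gap": 0.14, "confirmed_harm": False}
))  # PAUSE: fairness bound exceeded, escalate per GOVERN
```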

The AI Lifecycle: Where Does the RMF Apply?

The AI RMF is not a one-time audit — it applies continuously across the entire AI lifecycle. Each phase has different risk profiles, different actors, and different RMF activities. Hover each stage to see which functions apply and what risks emerge.

Hover a lifecycle stage to see which RMF functions apply, who is responsible, and the key risks at that phase.
GenAI Profile Highlights
The Generative AI Profile (NIST-AI-600-1, July 2024) extends the core RMF with 12 unique GenAI risks: hallucination, prompt injection, data provenance, copyright concerns, CBRN information hazards, and more. It maps these risks to the same GOVERN/MAP/MEASURE/MANAGE structure, making it directly compatible with the core framework.
How to Start
NIST recommends starting with GOVERN — establish accountability and risk appetite before building anything else. Then MAP one AI system: document its context, intended use, and top 5 risks. You don't need to implement all 72 subcategories immediately. The framework explicitly supports incremental adoption based on your organization's risk tolerance and resources.
RMF vs Regulation
The EU AI Act and US Executive Order on AI both reference NIST AI RMF as a baseline. Organizations implementing the RMF are better positioned for regulatory compliance — but the frameworks are complementary, not equivalent. Regulation sets minimum requirements; the RMF provides the structure to exceed them systematically.
The AI RMF is not a checklist — it’s a conversation starter. A shared vocabulary that lets engineers, ethicists, lawyers, and executives talk about AI risk in the same language. That shared language is what makes AI governance actually work in practice.
How does the Generative AI Profile differ from the core RMF?
The GenAI Profile (NIST-AI-600-1) adds 12 risks unique to generative AI systems: hallucination and fabrication, data privacy violations from training data, intellectual property and copyright issues, prompt injection and jailbreaking, homogenization of outputs, and potential for chemical/biological/radiological/nuclear (CBRN) harm. These risks don't map cleanly to traditional AI risk categories but use the same GOVERN/MAP/MEASURE/MANAGE structure.
Where do I start if my org is new to this?
Three concrete first steps: (1) GOVERN: name one person accountable for AI risk in your organization — even temporarily. (2) MAP: pick your highest-risk AI system and write a one-page context document: what it does, who it affects, top 3 risks. (3) MEASURE: define one measurable metric for each of those risks. This gets you the minimum viable implementation and reveals what else you need to build.
How does the RMF align with the EU AI Act?
The EU AI Act uses a risk-based approach similar to the RMF: systems are categorized by risk level (unacceptable, high, limited, minimal). The RMF's MAP function aligns with the EU Act's risk categorization requirements. GOVERN aligns with the Act's conformity assessment and documentation requirements. MEASURE aligns with post-market monitoring obligations. NIST has published a crosswalk document mapping RMF categories to EU AI Act articles.

Follow a Risk Through All Four Functions

Trace a single real-world risk — algorithmic bias in a hiring AI — through MAP, MEASURE, MANAGE, and GOVERN. See exactly what each function does with it, step by step.

Click a step above to trace algorithmic bias through the AI RMF.
The functions run concurrently, not sequentially. While MANAGE responds to the bias risk, MAP is cataloguing new risks, MEASURE is tracking mitigation effectiveness, and GOVERN is updating policy. This walkthrough shows each function’s role — not the order they run in.

12 Risks Unique to Generative AI

The GenAI Profile extends the core RMF with 12 risks that don’t appear in traditional AI systems. From hallucination to CBRN hazards, these require new thinking about measurement and management. Click any card to see mitigations.

Click a risk card to see description, severity, and mitigation strategies.
The GenAI Profile uses the same four-function structure as the core RMF. Organizations already implementing GOVERN/MAP/MEASURE/MANAGE can adopt the GenAI Profile by adding these 12 risks to their existing MAP risk registers — no new framework to learn.

72 Subcategories: Where the Framework Becomes Action

The 72 subcategories are the actual requirements — specific things an organization must demonstrate. Filter by function to browse. Each subcategory corresponds to concrete organizational actions.

You don’t need all 72 subcategories on day one. NIST explicitly supports incremental adoption. A practical approach: implement 10–15 subcategories covering your highest-risk AI systems in year one, then expand. The order matters less than starting.

AI Governance Maturity: Where Does Your Org Stand?

Answer 12 yes/partial/no questions to score your organization’s maturity across all four RMF functions. The radar chart updates live. Be honest — the goal is to find gaps, not pass a test.

Answer all 12 questions to see your maturity assessment.
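The scoring behind an assessment like this is usually a simple mapping. A sketch assuming yes = 1, partial = 0.5, no = 0 and three questions per function (the weighting is our assumption, not NIST's):

```python
# Illustrative scoring: yes = 1.0, partial = 0.5, no = 0.0, averaged per function.
SCORE = {"yes": 1.0, "partial": 0.5, "no": 0.0}

answers = {  # hypothetical responses, three questions per function
    "GOVERN":  ["yes", "partial", "no"],
    "MAP":     ["yes", "yes", "partial"],
    "MEASURE": ["no", "partial", "no"],
    "MANAGE":  ["partial", "no", "no"],
}

for function, responses in answers.items():
    pct = 100 * sum(SCORE[a] for a in responses) / len(responses)
    print(f"{function:8s} {pct:5.1f}%  {'#' * round(pct / 10)}")
```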

EU AI Act Crosswalk: From Framework to Regulation

The EU AI Act, whose high-risk obligations largely apply from August 2026, mandates specific requirements for high-risk AI. The NIST AI RMF maps closely to these requirements. Hover any row to see the alignment detail and identify where you need to go beyond the RMF.

Hover a row to see how the RMF subcategory aligns with the EU AI Act article and where gaps remain.
Implementing the RMF significantly reduces your EU AI Act compliance gap — but does not close it entirely. EU-specific requirements like CE marking, notified body assessments, and registration in the EU AI database have no RMF equivalent.
Strongest Alignment
GOVERN G2 (accountability) maps almost directly to EU Art. 9 (risk management system). Both require named owners, documented roles, and clear accountability for AI outcomes. MEASURE MS4 maps well to Art. 61 (post-market monitoring).
Partial Overlap
MAP 5 (societal impact) partially overlaps EU Art. 43 (conformity assessment), but the EU Act is narrower — it mandates assessment only for high-risk system categories. The RMF encourages broader societal impact assessment for all AI systems.
Gaps to Address
EU-specific: CE marking, notified body involvement, registration in the EU AI database, and Art. 65 market surveillance. Organizations need EU legal counsel alongside RMF implementation to address these gaps.

Three AI Failures the RMF Would Have Caught

Abstract frameworks become concrete through failures. These three real incidents show exactly which RMF functions were absent — and what would have changed if they had been in place.

Select a case study above to see which RMF functions were missing and what the outcome was.
These are not edge cases — they are the norm. Most AI failures trace directly to missing GOVERN policies, incomplete MAP risk identification, absent MEASURE metrics, or no MANAGE response plan. The RMF does not prevent all AI failures; it prevents the preventable ones.

12-Month Implementation Roadmap

Where do I start? Here is a realistic first-year implementation sequence for an organization new to structured AI risk management. Hover any bar to see what it involves and which RMF subcategories it covers.

Hover a roadmap item to see what it involves and which RMF subcategories it covers.
Start with GOVERN
Before mapping risks or measuring anything, name one person accountable for AI risk. Without G2 accountability, all subsequent work has no owner. This is the single most important first action and costs almost nothing.
MAP One System First
Do not try to MAP all AI systems at once. Pick your highest-risk system and complete MAP 1–5 for it. One fully-mapped system teaches more than five half-mapped ones and gives you a template for the rest.
Iterate, Don’t Perfect
A 60% implementation across all four functions beats 100% in one. The RMF is designed for iteration — revisit and deepen each function annually. Year 2 is when most organizations achieve real maturity.

Trustworthiness Trade-offs: You Cannot Maximize Everything

The 7 trustworthiness characteristics pull against each other. Select a pair to explore the tension, then use the slider to see how shifting the balance changes what you gain and lose. Understanding these trade-offs is what MEASURE and GOVERN are actually about.

Example pair: Accurate ↔ Explainable
Select a pair above to explore the design tension and the sweet-spot resolution.
There is no universally correct resolution to these trade-offs. The right balance depends on deployment context and the cost of different failure modes. GOVERN G2 requires that trade-offs are made explicitly — documented with rationale — not implicitly by whoever builds the model.

Sector Risk Profiles: Same Framework, Different Priorities

The RMF applies everywhere, but dominant risks, key metrics, and regulatory context differ dramatically by sector. Select your domain to see which RMF functions matter most and which risks are most prevalent.

Select a sector above to see its dominant AI risks and RMF priorities.
Sector context determines which MAP risks to prioritize, which MEASURE metrics matter, and which regulatory frameworks apply. The RMF does not prescribe sector-specific metrics — it provides the structure to derive them. This is what that looks like in practice.

AI Actor Responsibility Map

Different actors have different obligations across the four functions. Developers, deployers, operators, affected communities, and regulators all play distinct roles. Click any actor to see their specific responsibilities — and where accountability gaps most often occur.

Click an actor to see their responsibilities across each RMF function.
The most common accountability failure: everyone assumes someone else is responsible. Developers assume deployers will address context-specific risks. Deployers assume developers handled technical risks. The RMF requires all of these to be explicitly assigned — with names attached.

RMF Self-Assessment Quiz

Answer 15 yes/no questions about your organization to see how you score across the four RMF functions. The radar chart shows where you are strong and where the gaps are — with specific subcategory recommendations.

Click Start to begin the 15-question assessment. Each question maps to a specific RMF function. Your answers generate a scored radar chart and priority recommendations.
This is a directional tool, not an audit. Use the results to prioritize where to invest next in your RMF journey. Low GOVERN scores mean governance must come first — MAP and MEASURE cannot function without it.

AI System Risk Classifier

Answer four questions about an AI system you are building or deploying. The classifier outputs a risk tier, the RMF functions to prioritize, and the subcategories most relevant to your context — giving you a starting point rather than a blank framework.

1. Deployment Domain
2. Automation Level
3. Decision Stakes
4. Data Sensitivity
Select your system characteristics above and click Classify to get a risk tier, priority functions, and relevant subcategories.
Context changes everything. A recommendation engine on a streaming platform is low risk. The same recommendation engine used to flag loan applicants is high risk. The RMF is designed to surface this distinction — the classifier applies the same logic systematically.
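A rule-based classifier of this shape is easy to sketch. The weights and tier cutoffs below are illustrative, not the interactive's actual scoring; the point is that the same four answers always produce the same tier:

```python
# Illustrative scoring tables -- the interactive's real weights may differ.
DOMAIN_RISK = {"entertainment": 1, "retail": 2, "finance": 3, "healthcare": 4}
AUTOMATION  = {"advisory": 1, "human-in-the-loop": 2, "fully-automated": 3}
STAKES      = {"low": 1, "moderate": 2, "high": 3, "life-altering": 4}
DATA        = {"public": 1, "commercial": 2, "personal": 3, "sensitive": 4}

def classify(domain: str, automation: str, stakes: str, data: str) -> str:
    score = (DOMAIN_RISK[domain] + AUTOMATION[automation]
             + STAKES[stakes] + DATA[data])
    if score >= 12:
        return "HIGH risk tier: prioritize all four functions before deployment"
    if score >= 8:
        return "MEDIUM risk tier: prioritize GOVERN and MAP now"
    return "LOW risk tier: lightweight MAP and periodic MEASURE"

print(classify("finance", "fully-automated", "high", "personal"))  # HIGH (12)
print(classify("entertainment", "advisory", "low", "public"))      # LOW (4)
```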

Risk Prioritization Sandbox

Click anywhere on the grid to place a risk. Drag existing risks to reposition them. The quadrant determines the recommended response. This is what a MAP 4 risk prioritization exercise looks like in practice.

Click the grid to place a risk dot. Select a label from the dropdown first. Risks in the top-right quadrant require immediate action.
MAP 4 is about prioritization, not exhaustiveness. You cannot address every risk. Placing risks on likelihood × impact forces the conversation about which ones actually matter this quarter — and who owns each one.

AI Incident Response Decision Tree

Walk through a real AI incident step by step. Each decision point maps to a specific MANAGE subcategory. Use this to understand how MG-2, MG-3, and MG-4 connect in practice — and to prepare your team before an incident happens.

Click Start to walk through an AI incident scenario. Each step maps to an RMF MANAGE subcategory. The path changes based on your answers.
MG-3.2 requires a pre-defined incident response process — not one improvised during the incident. Organizations that have never run through a scenario like this are not compliant with MG-3. This walkthrough is a starting point for that exercise.

AI Governance Framework Comparison

NIST AI RMF does not exist in isolation. ISO 42001, EU AI Act, OECD AI Principles, and NIST CSF each address overlapping topics with different emphases. Hover a cell to see how each framework approaches each governance dimension.

Hover a cell to see how that framework approaches the governance dimension.
NIST AI RMF
Strongest on operational risk management: risk identification, measurement, and treatment. Weakest on enforcement (by design — it is voluntary). Best overall breadth across all dimensions.
EU AI Act
Strongest on transparency and human oversight requirements, especially for high-risk AI. Legally binding within the EU. Less prescriptive on measurement methodology — leaves metrics to the operator.
ISO 42001
Management system standard (like ISO 27001 for security). Strong on governance and accountability structures. Designed for certification. More process-oriented than outcome-oriented.
Multi-framework reality: Organizations operating globally typically need NIST AI RMF for operational structure, EU AI Act compliance for European deployment, and ISO 42001 for third-party certification. The good news: substantial overlap means work done for one transfers to the others.

Key Definitions Glossary

30 core terms from the NIST AI RMF with definitions and function context. Filter by function or search by term. These are the definitions that appear in the framework itself — having a shared vocabulary is the first step to shared risk management.

Vocabulary is infrastructure. The most common failure mode in AI risk management is different teams using the same words to mean different things. The RMF glossary exists precisely to prevent this — engineers, lawyers, and executives need to agree on what “risk” means before they can manage it together.

AI Risk Scenario Simulator

Choose an AI system type and run it through a simulated 12-month operation. Random risk events occur and you respond with the correct RMF action — or miss it. Your trustworthiness score reflects how well you applied the framework under pressure.

Select a system above and click Start Simulation to begin the 12-month risk scenario. Events will occur and you will need to respond using the correct RMF action.
The goal is not a perfect score — it is to understand why each event requires a specific RMF response. Organizations that have pre-defined responses to common AI risk events score consistently higher than those improvising under pressure. That is MANAGE in action.
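The simulator's core loop is small: random events, a response check, a score penalty for misses. A stripped-down sketch (the event catalogue, probabilities, and penalties are all made up):

```python
import random

random.seed(7)

# Hypothetical event catalogue: each event has one correct RMF response.
EVENTS = {
    "model drift detected":        "MG3: monitoring plus scheduled retraining",
    "fairness complaint filed":    "MS2: run bias evaluation, escalate per GOVERN",
    "prompt injection discovered": "MG2: activate planned incident response",
    "new regulation announced":    "MAP1: re-document context and intended use",
}

score = 100
for month in range(1, 13):
    if random.random() < 0.3:  # roughly 30% chance of a risk event each month
        event, correct = random.choice(list(EVENTS.items()))
        responded = random.random() < 0.6  # stand-in for the player's choice
        if not responded:
            score -= 15
        print(f"Month {month:2d}: {event} -> "
              f"{'responded: ' + correct if responded else 'missed (-15)'}")
print(f"Final trustworthiness score: {score}")
```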

Subcategory Deep-Dive: 12 Most Critical

The 72 subcategories vary enormously in importance. These 12 are the ones that appear in almost every real-world RMF implementation — the backbone subcategories. Each card shows the verbatim NIST text, a plain-English translation, a real example, and which other subcategories depend on it.

Risk tolerance is the load-bearing requirement (GV-1.3 ties the needed level of risk management activities to it). Without a defined and communicated risk tolerance, MAP-4 prioritization is arbitrary, MS-1 metric thresholds are undefined, and MG-2 treatment decisions have no benchmark. Everything flows from knowing how much risk the organization will accept.

AI Risk Policy Template Builder

Answer six questions about your organization and get a customized one-page AI risk policy template grounded in GOVERN subcategories. Edit the output, then copy to your document system. This implements the core requirement of GV-1.1.

Organization Name
Sector
Organization Size
Current AI Maturity
Risk Tolerance
Highest-Risk AI Use Case
GV-1.1 requires a policy — not a perfect policy. The most important step is writing something down and getting it approved. A two-page policy that exists and is followed beats a 30-page policy that lives in a SharePoint no one reads.
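The template logic itself is plain string substitution. A stripped-down sketch of a GV-1.1 starter policy (the field names and clause wording are hypothetical, not NIST text):

```python
def draft_policy(org: str, sector: str, risk_tolerance: str,
                 top_use_case: str) -> str:
    # A minimal illustration of the builder's output; the real tool asks
    # six questions and produces a fuller one-page document.
    return f"""AI Risk Policy -- {org} (DRAFT; satisfies GV-1.1 once approved)

1. Scope: all AI systems developed, deployed, or procured by {org}
   in the {sector} sector.
2. Risk tolerance: {risk_tolerance}. Systems exceeding this tolerance
   require executive sign-off before deployment (GOVERN 2).
3. Highest-risk use case on record: {top_use_case}. This system is
   mapped, measured, and monitored first (MAP-1, MS-1, MG-3).
4. Accountability: a named individual owns each AI system's outcomes.
"""

print(draft_policy("Acme Health", "healthcare", "low",
                   "clinical triage recommendation model"))
```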

Risk Propagation Map

A single AI failure rarely stays contained. Click a root cause to see how it propagates through the framework — which MAP risks it activates, which MEASURE metrics it violates, and which MANAGE actions it triggers. This is what cascading AI risk looks like in practice.

Select a root cause above to see how it propagates through GOVERN, MAP, MEASURE, and MANAGE.
Root causes always live in GOVERN or early MAP. By the time you see a failure in MANAGE, it has passed through at least 3-4 upstream gaps. The framework is designed to catch failures early — before they cascade.

Maturity Progression Timeline

RMF adoption is a journey, not a switch. This animation shows how an organization progresses from Level 1 (no formal process) to Level 4 (continuous optimization) — and which subcategories become active at each stage. Click Play to watch the progression.

Click Play or a Level button to explore the maturity stages. Each level unlocks new subcategories and capabilities.
Most organizations are between Level 1 and Level 2. Getting to Level 2 (documented processes for the highest-risk systems) delivers most of the risk reduction value. Level 3 and 4 are about consistency and optimization, not survival.

GOVERN Accountability Org Chart

Click any role in the hierarchy to see which GOVERN subcategories it owns and what responsibilities that entails. The most common RMF failure is unassigned accountability — this visualization shows what a well-assigned org looks like.

Click any role to see their AI risk responsibilities and which GOVERN subcategories they own.
GV-2.2 requires that accountability be assigned to specific individuals, not teams. When AI causes harm, “the AI team is responsible” is not an accountability structure. The framework requires a named person with defined authority — and that person must know they are accountable.

Subcategory Dependency Map

Not all subcategories are equal. Some must exist before others are meaningful. Click any node to highlight what it enables downstream — and which upstream subcategories must be in place before it can work. This is the dependency graph that the RMF does not explicitly show you.

Click a subcategory node to see its upstream dependencies and downstream effects.
You cannot skip tiers. MAP-4.1 (risk prioritization) requires MAP-3.1 (risk identification) which requires MAP-1.1 (documented intended use) which requires GV-1.1 (an AI policy to exist at all). Trying to prioritize risks before documenting them is theater, not risk management.
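That chain is exactly a dependency graph, and checking it takes a few lines. A sketch using the edges named above (an illustrative subset of the full map):

```python
# Prerequisite edges from the chain described above.
REQUIRES = {
    "MAP-4.1": ["MAP-3.1"],   # prioritization needs identified risks
    "MAP-3.1": ["MAP-1.1"],   # identification needs documented intended use
    "MAP-1.1": ["GV-1.1"],    # documentation needs a policy to exist at all
    "GV-1.1":  [],
}

def prerequisites(subcategory: str) -> list[str]:
    """All transitive upstream subcategories, deepest dependency first."""
    chain = []
    for dep in REQUIRES.get(subcategory, []):
        chain.extend(prerequisites(dep))
        chain.append(dep)
    return chain

print(prerequisites("MAP-4.1"))  # ['GV-1.1', 'MAP-1.1', 'MAP-3.1']
```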

Regulatory Requirement Mapper

Select a regulation to see exactly which NIST AI RMF subcategories satisfy each requirement — and where gaps remain. Use this to understand how RMF work transfers across compliance frameworks without duplication.

Select a regulation above, then hover a cell to see which RMF subcategory satisfies the requirement.
EU AI Act
Strongest match with RMF: Art. 9 risk management system maps almost directly to GOVERN + MAP. The RMF gives you the operational process that the Act requires but does not prescribe.
NYC Local Law 144
Narrowly focused on bias audits for automated employment tools. MS-2.5 is the core subcategory. The rest of the RMF builds the context and governance that makes the audit meaningful.
SR 11-7
SR 11-7, the Federal Reserve's model risk management guidance (issued jointly with the OCC), predates the AI RMF but aligns strongly on validation (MS-2) and ongoing monitoring (MS-4, MG-3). Organizations already following SR 11-7 are roughly 60% of the way to RMF alignment.
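A crosswalk like this is, at its core, a mapping from regulatory requirements to subcategory lists, plus a gap check against what you have implemented. A sketch seeded with the alignments described above (the entries are an illustrative subset, not the full mapper):

```python
# Crosswalk entries taken from the alignments described above; a real
# mapper covers many more requirements per regulation.
CROSSWALK = {
    "EU AI Act": {"Art. 9 risk management system": ["GV-1.1", "MAP-1.1"]},
    "NYC Local Law 144": {"annual bias audit": ["MS-2.5"]},
    "SR 11-7": {"model validation": ["MS-2"],
                "ongoing monitoring": ["MS-4", "MG-3"]},
}

def coverage(regulation: str, implemented: set[str]) -> None:
    """Print covered requirements and remaining subcategory gaps."""
    for requirement, subcats in CROSSWALK[regulation].items():
        missing = [s for s in subcats if s not in implemented]
        status = "covered" if not missing else "gap: " + ", ".join(missing)
        print(f"{requirement}: {status}")

coverage("SR 11-7", implemented={"MS-2", "MS-4"})
# model validation: covered
# ongoing monitoring: gap: MG-3
```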

AI System Inventory Builder

Track every AI system in your organization with its risk profile, lifecycle stage, and RMF coverage. A complete inventory is the prerequisite for MAP-1.1 and GV-1.1 compliance.

System Name
AI Type
Risk Level
Lifecycle Stage
Owner / Team
RMF Coverage
No systems added yet. Fill the form above and click Add System.

72-Subcategory Assessment Checklist

Track your implementation status across all four RMF functions. For each subcategory, mark Pass, Partial, Fail, or N/A. The dashboard updates in real time.

Subcategory Prioritization Advisor

Select your organization context and get a data-driven ranking of which RMF subcategories to tackle first. Scores combine regulatory risk, implementation effort, and impact leverage.

Organization Size
Sector
Primary AI Type
Current Maturity
Click a bar to see why this subcategory is prioritized for your context.

GenAI Risk Wheel

The Generative AI Profile (NIST-AI-600-1, July 2024) identifies 12 unique risks specific to generative AI systems. Click any spoke to explore the risk, its severity, and which RMF subcategories address it.

Click a spoke to explore that GenAI risk.

Trustworthiness Tension Map

The 7 trustworthiness characteristics are design requirements, not a checklist. Some pull against each other in practice. Understanding these tensions is essential for making informed trade-off decisions.

Click a tension edge (dashed line) to understand the trade-off and how the RMF approaches it.

Real-World AI Incident → RMF Gap Mapper

Each real AI incident maps to specific RMF subcategory gaps that allowed it to occur. Click an incident to see which steps were skipped and what the framework would have caught.

Select an incident above to see the RMF gap analysis.

AI Deployment Decision Tree

Answer 8 yes/no questions about your AI system and get a deployment recommendation grounded in the RMF. Each question maps to a specific subcategory that must be satisfied.

Press Start to begin the assessment.

Stakeholder Communication Generator

Different audiences need different explanations of AI risk. Select your audience and context, and get a plain-language summary tailored for that stakeholder — grounded in RMF language.

Audience
Risk Level
AI System Type
Primary Concern

RMF Cost-Benefit Calculator

Quantify the business case for implementing the AI RMF. Compare the annual cost of implementation against the expected loss from AI incidents. See the break-even point and 5-year ROI.

Organization Size
Number of AI Systems
Average Risk Level
Implementation Level
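The underlying arithmetic is expected-value math: avoided loss equals incident probability times incident cost times the risk reduction the RMF delivers. A sketch with entirely made-up inputs (every number is an organization-specific estimate you must defend):

```python
def rmf_business_case(upfront_cost: float, annual_run_cost: float,
                      incident_probability: float, incident_cost: float,
                      risk_reduction: float, years: int = 5) -> None:
    """Expected-value comparison of RMF cost against avoided incident loss."""
    annual_avoided_loss = incident_probability * incident_cost * risk_reduction
    annual_net = annual_avoided_loss - annual_run_cost
    print(f"Annual avoided loss: ${annual_avoided_loss:,.0f}")
    print(f"Annual net benefit:  ${annual_net:,.0f}")
    if annual_net > 0:
        print(f"Break-even:          {upfront_cost / annual_net:.1f} years")
    roi = (annual_net * years - upfront_cost) / upfront_cost
    print(f"{years}-year ROI:          {roi:.0%}")

# Entirely made-up illustrative inputs:
rmf_business_case(upfront_cost=300_000, annual_run_cost=150_000,
                  incident_probability=0.15, incident_cost=5_000_000,
                  risk_reduction=0.6)
# Avoided loss $450,000/yr, net $300,000/yr, break-even 1.0 years, ROI 400%
```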

Implementation Roadmap Builder

Select your current maturity level and target state to generate a week-by-week implementation plan with milestones, owners, and RMF subcategory targets. Export and use directly in project planning.

Current Maturity
Target Maturity
Organization Size

AI Vendor Procurement Questionnaire

When procuring third-party AI systems, ask the right questions. Generate a tailored due-diligence questionnaire mapped to specific RMF subcategories. Send to vendors before contract signature.

AI System Type
Risk Level
Deployment Context

AI Ethics Committee Charter Generator

Generate a draft charter for an AI ethics or risk committee, tailored to your organization size and sector. Each clause is mapped to the RMF GOVERN subcategories it satisfies.

Organization Size
Sector
Committee Scope

AI Risk Narrative Generator

Generate a formal NIST-style risk narrative for a specific AI system, ready for internal governance review or board presentation. Based on MAP and MEASURE subcategories.

System Name
AI System Type
Overall Risk Level
Deployment Stage
Identified Risk Areas (check all that apply)

Trustworthiness Radar Chart

Rate your AI system across all 7 trustworthiness dimensions. The radar chart compares your self-assessment against the NIST-recommended minimum baseline. Identify your lowest dimensions and prioritize remediation.

NIST AI RMF vs. ISO/IEC 42001 Crosswalk

ISO/IEC 42001 (published December 2023) is the first international AI management system standard. Click any row to see how the two frameworks address the same concern — and where they diverge.

Alignment: ⬤ Full    ⬤ Partial    ⬤ Weak / Gap
Click a row to see the detailed alignment note and practical guidance.

AI Risk Scorecard

Generate an executive-level A through F report card across all four RMF functions. Aggregates your 72-subcategory checklist results, identifies top gaps, and produces a board-ready summary paragraph.

Uses results from the 72-Subcategory Checklist section above.
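Turning checklist statuses into letter grades is a small aggregation step. A sketch assuming pass = 1, partial = 0.5, fail = 0, N/A excluded, and conventional 90/80/70/60 grade bands (all of which are our assumptions, not prescribed by the framework):

```python
# Assumed scoring and grade bands -- tune both to your governance needs.
STATUS_SCORE = {"pass": 1.0, "partial": 0.5, "fail": 0.0}  # "n/a" excluded
GRADE_BANDS = [(90, "A"), (80, "B"), (70, "C"), (60, "D"), (0, "F")]

def function_grade(statuses: list[str]) -> str:
    """Letter grade for one RMF function from its subcategory statuses."""
    scored = [STATUS_SCORE[s] for s in statuses if s != "n/a"]
    pct = 100 * sum(scored) / len(scored)
    return next(grade for floor, grade in GRADE_BANDS if pct >= floor)

print(function_grade(["pass", "pass", "partial", "fail", "n/a"]))  # D (62.5%)
```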