
NIST AI Risk Management Framework 1.0 Released


Summary

The National Institute of Standards and Technology (NIST) has released version 1.0 of its Artificial Intelligence Risk Management Framework (AI RMF). This voluntary guidance document aims to help organizations manage risks associated with AI technologies and promote trustworthiness in AI systems.

Published by NIST on nist.gov. Detected, standardized, and enriched by GovPing. Review our methodology and editorial standards.

About this source

The NIST AI RMF is a voluntary framework for managing risks from AI systems, developed by the US National Institute of Standards and Technology. It structures AI risk management around four functions: govern, map, measure, and manage. This feed tracks every public update: profile releases for specific domains (generative AI, critical infrastructure), playbook updates, concept notes, and the engagement calendar for working group meetings. Expect around seven major publications a year. The AI RMF has become the de facto US AI standard, and federal contracts and state laws increasingly reference it. Watch this feed if you advise on AI governance, run a model risk function, manage generative AI deployments, or write AI policy that cites a recognized framework.

What changed

NIST has published the Artificial Intelligence Risk Management Framework (AI RMF 1.0), a voluntary guidance document designed for organizations involved in the design, development, deployment, or use of AI systems. The framework provides a structured approach to identifying, assessing, and managing risks associated with AI technologies, with the goal of enhancing AI trustworthiness and promoting innovation while aligning with democratic values and protecting civil rights, civil liberties, and equity.

Organizations should review and consider adopting the AI RMF to proactively manage AI-related risks. The framework is intended to be adaptable and can be integrated into existing risk management processes. While voluntary, its release signifies a key step in the U.S. government's approach to AI governance, encouraging a culture of responsible AI development and deployment to mitigate potential harms and ensure societal benefits.

What to do next

  1. Review the NIST AI Risk Management Framework (AI RMF 1.0) for applicability to AI systems.
  2. Consider integrating AI risk management principles into existing organizational risk management processes.
  3. Evaluate AI systems for potential risks related to data, socio-technical factors, and impacts on civil rights, civil liberties, and equity.

Archived snapshot

Mar 25, 2026

GovPing captured this document from the original source. If the source has since changed or been removed, this is the text as it existed at that time.

NEWS

NIST Risk Management Framework Aims to Improve Trustworthiness of Artificial Intelligence

New guidance seeks to cultivate trust in AI technologies and promote AI innovation while mitigating risk.

January 26, 2023

WASHINGTON — The U.S. Department of Commerce’s National Institute of Standards and Technology (NIST) has released its Artificial Intelligence Risk Management Framework (AI RMF 1.0), a guidance document for voluntary use by organizations designing, developing, deploying or using AI systems to help manage the many risks of AI technologies.

The AI RMF follows a direction from Congress for NIST to develop the framework and was produced in close collaboration with the private and public sectors. It is intended to adapt to the AI landscape as technologies continue to develop, and to be used by organizations in varying degrees and capacities so that society can benefit from AI technologies while also being protected from their potential harms.

“This voluntary framework will help develop and deploy AI technologies in ways that enable the United States, other nations and organizations to enhance AI trustworthiness while managing risks based on our democratic values,” said Deputy Commerce Secretary Don Graves. “It should accelerate AI innovation and growth while advancing — rather than restricting or damaging — civil rights, civil liberties and equity for all.”

Compared with traditional software, AI poses a number of different risks. AI systems are trained on data that can change over time, sometimes significantly and unexpectedly, affecting the systems in ways that can be difficult to understand. These systems are also “socio-technical” in nature, meaning they are influenced by societal dynamics and human behavior. AI risks can emerge from the complex interplay of these technical and societal factors, affecting people’s lives in situations ranging from their experiences with online chatbots to the results of job and loan applications.

The framework equips organizations to think about AI and risk differently. It promotes a change in institutional culture, encouraging organizations to approach AI with a new perspective — including how to think about, communicate, measure and monitor AI risks and AI's potential positive and negative impacts.

“The new framework should accelerate AI innovation and growth while advancing — rather than restricting or damaging — civil rights, civil liberties and equity for all.” —Deputy Commerce Secretary Don Graves

The AI RMF provides a flexible, structured and measurable process that will enable organizations to address AI risks. Following this process for managing AI risks can maximize the benefits of AI technologies while reducing the likelihood of negative impacts to individuals, groups, communities, organizations and society.

The framework is part of NIST’s larger effort to cultivate trust in AI technologies — necessary if the technology is to be accepted widely by society, according to Under Secretary for Standards and Technology and NIST Director Laurie E. Locascio.

“The AI Risk Management Framework can help companies and other organizations in any sector and any size to jump-start or enhance their AI risk management approaches,” Locascio said. “It offers a new way to integrate responsible practices and actionable guidance to operationalize trustworthy and responsible AI. We expect the AI RMF to help drive development of best practices and standards.”

The AI RMF is divided into two parts. The first part discusses how organizations can frame the risks related to AI and outlines the characteristics of trustworthy AI systems. The second part, the core of the framework, describes four specific functions — govern, map, measure and manage — to help organizations address the risks of AI systems in practice. These functions can be applied in context-specific use cases and at any stages of the AI life cycle.
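The four core functions described above can be sketched as a small data model. This is purely an illustrative sketch: the function names come from the framework itself, but the class, field names, and lifecycle-stage labels below are assumptions for illustration, not anything NIST defines.

```python
# Illustrative sketch: tagging risk-management activities with one of the
# four AI RMF core functions (govern, map, measure, manage).
# The class and field names here are assumptions, not NIST's schema.
from dataclasses import dataclass

CORE_FUNCTIONS = ("govern", "map", "measure", "manage")

@dataclass
class RiskActivity:
    function: str          # one of the four AI RMF core functions
    description: str
    lifecycle_stage: str   # e.g. "design", "deployment" (assumed labels)

    def __post_init__(self) -> None:
        # Reject anything outside the framework's four functions.
        if self.function not in CORE_FUNCTIONS:
            raise ValueError(f"unknown AI RMF function: {self.function}")

# Example: recording a "map" activity for a deployed model.
activity = RiskActivity(
    function="map",
    description="Identify contexts where model outputs affect loan decisions",
    lifecycle_stage="deployment",
)
print(activity.function)  # map
```

Because the framework says the functions apply "in context-specific use cases and at any stages of the AI life cycle," a structure like this would let an organization inventory activities per function and per stage.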

Working closely with the private and public sectors, NIST has been developing the AI RMF for 18 months. The document reflects about 400 sets of formal comments NIST received from more than 240 different organizations on draft versions of the framework. NIST today released statements from some of the organizations that have already committed to use or promote the framework.

The agency also today released a companion voluntary AI RMF Playbook, which suggests ways to navigate and use the framework.

NIST plans to work with the AI community to update the framework periodically and welcomes suggestions for additions and improvements to the playbook at any time. Comments received by the end of February 2023 will be included in an updated version of the playbook to be released in spring 2023.

In addition, NIST plans to launch a Trustworthy and Responsible AI Resource Center to help organizations put the AI RMF 1.0 into practice. The agency encourages organizations to develop and share profiles of how they would put it to use in their specific contexts. Submissions may be sent to AIFramework@nist.gov.

NIST is committed to continuing its work with companies, civil society, government agencies, universities and others to develop additional guidance. The agency today issued a roadmap for that work.

The framework is part of NIST’s broad and growing portfolio of AI-related work that includes fundamental and applied research along with a focus on measurement and evaluation, technical standards, and contributions to AI policy.

Information technology, Artificial intelligence, Trustworthy and responsible AI, Standards and Frameworks


Learn More

  - Artificial Intelligence Risk Management Framework (AI RMF 1.0)
  - Watch the AI Risk Management Framework Launch Event
  - More About the AI Risk Management Framework

Related News

NIST Requests Information to Help Develop an AI Risk Management Framework


Released January 26, 2023, Updated February 3, 2025

Named provisions

Govern, Map, Measure, Manage


About this page

What is GovPing?

Every important government, regulator, and court update from around the world. One place. Real-time. Free.

What's from the agency?

Source document text, dates, docket IDs, and authority are extracted directly from NIST.

What's AI-generated?

The summary, classification, recommended actions, deadlines, and penalty information are AI-generated from the original text and may contain errors. Always verify against the source document.


Classification

Agency
NIST
Published
January 26th, 2023
Instrument
Guidance
Legal weight
Non-binding
Stage
Final
Change scope
Substantive

Who this affects

Applies to
Technology companies
Industry sector
5112 Software & Technology
Activity scope
AI Risk Management
Geographic scope
United States (US)

Taxonomy

Primary area
Artificial Intelligence
Operational domain
Compliance
Compliance frameworks
NIST CSF
Topics
Technology Risk Management
