
ITL AI Engagement and Updates: NIST AI Risk Management Framework

Summary

NIST's Information Technology Laboratory (ITL) AI Program provides an informational hub for engagement with its AI Risk Management Framework (AI RMF). The page lists upcoming events including an April 2026 ITL AI Webinar Series on Building Traceability into Agentic AI Ecosystems Through Measurement Probes, recent workshops such as The International AI Standards Landscape (March 2026), and past events dating back to 2020. The ITL AI Program organizes workshops bringing together government, industry, academia, and other stakeholders from the US and around the world to advance AI standards, guidelines, and related tools. Ways to engage include the NIST AI Consortium, Requests for Information, draft report reviews, and student programs.

Published by NIST on nist.gov. Detected, standardized, and enriched by GovPing. Review our methodology and editorial standards.

About this source

The NIST AI RMF is a voluntary framework for managing risks from AI systems, developed by the US National Institute of Standards and Technology. It structures AI risk management around four functions: govern, map, measure, and manage. This feed tracks every public update: profile releases for specific domains (generative AI, critical infrastructure), playbook updates, concept notes, and the engagement calendar for working group meetings, amounting to around seven major publications a year. The AI RMF has become the de facto US AI standard, and federal contracts and state laws increasingly reference it. Watch this feed if you advise on AI governance, run a model risk function, manage generative AI deployments, or write AI policy that cites a recognized framework.

What changed

NIST published an informational engagement page for its ITL AI Program, which drives the development of the AI Risk Management Framework (AI RMF). The page consolidates information about upcoming events (including an April 2026 webinar on agentic AI measurement probes), recent workshops (March 2026 international AI standards landscape), and past workshops dating to 2020. It also describes formal engagement mechanisms: the NIST AI Consortium for collaborative measurement science, Requests for Information to gather public input on AI issues, draft report review opportunities, and student programs.

Organizations developing or deploying AI systems that may be affected by future NIST AI RMF guidance should monitor this engagement hub. Participation in NIST workshops, RFIs, and consortium activities provides an opportunity to influence the development of AI standards and guidelines before they are finalized. Technology companies, academic researchers, and government agencies working on AI trustworthiness, bias management, or explainable AI are the primary audience for these engagement opportunities.

Archived snapshot

Mar 27, 2026

GovPing captured this document from the original source. If the source has since changed or been removed, this is the text as it existed at that time.




ITL AI Engagement


[Image] Credit: N. Hanacek/NIST

To foster collaboration, develop a shared understanding of what constitutes trustworthy AI, and strengthen the scientific underpinnings of how to assess and assure the trustworthiness of AI systems, the NIST Information Technology Laboratory (ITL) AI Program organizes workshops that bring together government, industry, academia, and other stakeholders from the US and around the world. These workshops focus on advancing the development of AI standards, guidelines, and related tools.

Upcoming Workshops & Events

  • ITL AI Webinar Series: Building Traceability into Agentic AI Ecosystems Through Measurement Probes. Learn More and Register.

Recent Workshops & Events

  • ITL AI Webinar Series: The International AI Standards Landscape and ITL’s Role, Priorities, and Progress (March 6, 2026) Watch Recording.

Past Workshops & Events

Ways to Engage

The ITL AI Program relies on and encourages robust interactions with industry, universities, nonprofits, and other government agencies in driving and carrying out its AI agenda. There are multiple ways to engage with NIST, including:

  • NIST AI Consortium: ITL has established the NIST AI Consortium to empower the collaborative establishment of a new measurement science that will enable the identification of proven, scalable, and interoperable techniques and metrics to promote the development and use of AI.
  • Requests for Information (RFIs): The ITL AI Program sometimes uses formal RFIs to inform the public about its AI activities and gain insights into specific AI issues. For example, an RFI was issued to help develop the AI Risk Management Framework.
  • Share your input on draft reports: The ITL AI Program counts on stakeholders to review drafts of reports on a variety of AI issues. Drafts typically are prepared based on inputs from private and public sector individuals and organizations and then posted for broader public review on NIST’s AI website and via email alerts. Public comments help to improve these documents.
  • Student Programs: NIST offers a range of opportunities for students to engage with NIST on AI-related work, including the Professional Research Experience Program (PREP), which provides valuable laboratory experience and financial assistance to undergraduate, graduate, and post-graduate students.

Sign up for AI email alerts here. If you have questions or ideas about how to engage with us on AI topics, or have ideas about NIST’s AI activities, send us an email: ai-inquiries@nist.gov.

Created June 16, 2020. Updated March 27, 2026.


About this page

What is GovPing?

Every important government, regulator, and court update from around the world. One place. Real-time. Free.

What's from the agency?

Source document text, dates, docket IDs, and authority are extracted directly from NIST.

What's AI-generated?

The summary, classification, recommended actions, deadlines, and penalty information are AI-generated from the original text and may contain errors. Always verify against the source document.


Classification

Agency: NIST
Instrument: Notice
Branch: Executive
Legal weight: Non-binding
Stage: Final
Change scope: Minor

Who this affects

Applies to: Technology companies, government agencies, educational institutions
Industry sector: 5112 Software & Technology
Activity scope: AI standards development, AI risk management, AI trustworthiness research
Geographic scope: United States (US)

Taxonomy

Primary area: Artificial Intelligence
Operational domain: Compliance
Compliance frameworks: NIST CSF
Topics: Data Privacy, Intellectual Property, Cybersecurity
