ITL AI Engagement and Updates: NIST AI Risk Management Framework
Summary
NIST's Information Technology Laboratory (ITL) AI Program provides an informational hub for engagement with its AI Risk Management Framework (AI RMF). The page lists upcoming events including an April 2026 ITL AI Webinar Series on Building Traceability into Agentic AI Ecosystems Through Measurement Probes, recent workshops such as The International AI Standards Landscape (March 2026), and past events dating back to 2020. The ITL AI Program organizes workshops bringing together government, industry, academia, and other stakeholders from the US and around the world to advance AI standards, guidelines, and related tools. Ways to engage include the NIST AI Consortium, Requests for Information, draft report reviews, and student programs.
About this source
The NIST AI RMF is a voluntary framework for managing risks from AI systems, developed by the US National Institute of Standards and Technology. It structures AI risk management around four functions: govern, map, measure, and manage. This feed tracks every public update: profile releases for specific domains (generative AI, critical infrastructure), playbook updates, concept notes, and the engagement calendar for working group meetings, amounting to roughly seven major publications a year. The AI RMF has become the de facto US AI standard, and federal contracts and state laws increasingly reference it. Watch this if you advise on AI governance, run a model risk function, manage generative AI deployments, or write AI policy that cites a recognized framework.
What changed
NIST published an informational engagement page for its ITL AI Program, which drives the development of the AI Risk Management Framework (AI RMF). The page consolidates information about upcoming events (including an April 2026 webinar on agentic AI measurement probes), recent workshops (March 2026 international AI standards landscape), and past workshops dating to 2020. It also describes formal engagement mechanisms: the NIST AI Consortium for collaborative measurement science, Requests for Information to gather public input on AI issues, draft report review opportunities, and student programs.
Organizations developing or deploying AI systems that may be affected by future NIST AI RMF guidance should monitor this engagement hub. Participation in NIST workshops, RFIs, and consortium activities provides an opportunity to influence the development of AI standards and guidelines before they are finalized. Technology companies, academic researchers, and government agencies working on AI trustworthiness, bias management, or explainable AI are the primary audience for these engagement opportunities.
Archived snapshot
Mar 27, 2026. GovPing captured this document from the original source. If the source has since changed or been removed, this is the text as it existed at that time.
ITL AI Engagement
Credit: N. Hanacek/NIST
To foster collaboration, develop a shared understanding of what constitutes trustworthy AI, and bolster the scientific underpinnings of how to assess and assure the trustworthiness of AI systems, the NIST Information Technology Laboratory (ITL) AI Program organizes workshops that bring together government, industry, academia, and other stakeholders from the US and around the world. The workshops focus on advancing the development of AI standards, guidelines, and related tools.
Upcoming Workshops & Events
- ITL AI Webinar Series: Building Traceability into Agentic AI Ecosystems Through Measurement Probes. Learn More and Register.
Recent Workshops & Events
- ITL AI Webinar Series: The International AI Standards Landscape and ITL’s Role, Priorities, and Progress (March 6, 2026) Watch Recording.
Past Workshops & Events
- ARIA Workshop was held on November 12, 2024
- Unleashing AI Innovation, Enabling Trust was held on September 24-25, 2024
- Secure Software Development Framework for Generative AI and for Dual Use Foundation Models Virtual Workshop was held on January 17, 2024
- Workshop on Collaboration to Enable Safe and Trustworthy AI was held November 17, 2023
- Launching Publication of the AI Risk Management Framework (AI RMF) 1.0 was held January 26, 2023
- Building the NIST AI Risk Management Framework: Workshop #3 was held October 18-19, 2022
- Artificial Intelligence and the Economy Conference was held April 27, 2022
- Two-part Workshop on AI Risk Management Framework – and on Bias in AI was held March 29-31, 2022
- Kicking off NIST AI Risk Management Framework: Workshop #1 was held October 19-21, 2021
- A workshop on AI Measurement and Evaluation was held June 15-17, 2021
- National Academies of Sciences, Engineering, and Medicine (NASEM) workshop on Assessing and Improving AI Trustworthiness: Current Contexts, Potential Paths was held on March 3-4, 2021
- A workshop on Explainable AI was held January 26-28, 2021. Workshop summary
- A workshop on Bias in AI was held on August 18, 2020. A draft report which includes information about discussions during the workshop has been published. A recording of this event can be found on the event page. The final report Towards a Standard for Identifying and Managing Bias in Artificial Intelligence (SP 1270) was published in March 2022
- A kickoff AI Workshop, Exploring AI Trustworthiness, took place on August 6, 2020. A recording of the workshop can be found on the event page
Ways to Engage
The ITL AI Program relies on and encourages robust interactions with industry, universities, nonprofits, and other government agencies in driving and carrying out its AI agenda. There are multiple ways to engage with NIST, including:
- NIST AI Consortium: ITL has established the NIST AI Consortium to empower the collaborative establishment of a new measurement science that will enable the identification of proven, scalable, and interoperable techniques and metrics to promote the development and use of AI.
- Requests for Information (RFIs): The ITL AI Program sometimes uses formal RFIs to inform the public about its AI activities and gain insights into specific AI issues. For example, an RFI was issued to help develop the AI Risk Management Framework.
- Share your input on draft reports: The ITL AI Program counts on stakeholders to review drafts of reports on a variety of AI issues. Drafts typically are prepared based on inputs from private and public sector individuals and organizations and then posted for broader public review on NIST’s AI website and via email alerts. Public comments help to improve these documents.
- Student Programs: NIST offers a range of opportunities for students to engage with NIST on AI-related work. These include the Professional Research Experience Program (PREP), which provides valuable laboratory experience and financial assistance to undergraduate, graduate, and post-graduate students. Sign up for AI email alerts on the NIST AI website. If you have questions or ideas about how to engage with us on AI topics, or ideas about NIST's AI activities, send us an email: ai-inquiries [at] nist.gov.
Created June 16, 2020. Updated March 27, 2026.
About this page
Source document text, dates, docket IDs, and authority are extracted directly from NIST.
The summary, classification, recommended actions, deadlines, and penalty information are AI-generated from the original text and may contain errors. Always verify against the source document.