NIST AI Risk Management Framework Engagement and Updates
Summary
NIST's Information Technology Laboratory (ITL) AI Program is organizing workshops and webinars to foster collaboration and advance the development of AI standards and guidelines. The page lists upcoming, recent, and past events related to AI trustworthiness and risk management, including updates to the AI Risk Management Framework (AI RMF 1.0).
What changed
NIST's ITL AI Program is actively engaging stakeholders through workshops and webinars to promote a shared understanding of trustworthy AI and to bolster the scientific underpinnings for assessing AI systems. The program focuses on advancing AI standards, guidelines, and related tools, with recent and upcoming events covering topics such as traceability in agentic AI ecosystems and the international AI standards landscape. The AI Risk Management Framework (AI RMF) 1.0 was launched in January 2023, and ongoing engagement aims to support its practical application and evolution.
While this page primarily serves as an informational hub for NIST's AI engagement activities, it highlights NIST's commitment to developing non-binding guidance and frameworks for AI risk management. Organizations involved in AI development or deployment should monitor these events and resources to stay informed about best practices, emerging standards, and NIST's ongoing contributions to trustworthy AI. No immediate compliance actions are mandated by this notice, but awareness of these initiatives is crucial for entities seeking to align with evolving AI governance and risk management principles.
Source document (simplified)
ITL AI Engagement
To foster collaboration, develop a shared understanding of what constitutes trustworthy AI, and bolster the scientific underpinnings of how to assess and assure the trustworthiness of AI systems, the NIST Information Technology Laboratory (ITL) AI Program organizes workshops that bring together government, industry, academia, and other stakeholders from the US and around the world. The workshops focus on advancing the development of AI standards, guidelines, and related tools.
Upcoming Workshops & Events
- ITL AI Webinar Series: Building Traceability into Agentic AI Ecosystems Through Measurement Probes. Learn More and Register.
Recent Workshops & Events
- ITL AI Webinar Series: The International AI Standards Landscape and ITL’s Role, Priorities, and Progress (March 6, 2026). Watch recording.
Past Workshops & Events
- ARIA Workshop was held on November 12, 2024
- Unleashing AI Innovation, Enabling Trust was held on September 24-25, 2024
- Secure Software Development Framework for Generative AI and for Dual Use Foundation Models Virtual Workshop was held on January 17, 2024
- Workshop on Collaboration to Enable Safe and Trustworthy AI was held November 17, 2023
- Launching Publication of the AI Risk Management Framework (AI RMF) 1.0 was held January 26, 2023
- Building the NIST AI Risk Management Framework: Workshop #3 was held October 18-19, 2022
- Artificial Intelligence and the Economy Conference was held April 27, 2022
- Two-part Workshop on AI Risk Management Framework – and on Bias in AI was held March 29-31, 2022
- Kicking off NIST AI Risk Management Framework: Workshop #1 was held October 19-21, 2021
- A workshop on AI Measurement and Evaluation was held June 15-17, 2021
- National Academy of Science, Engineering and Medicine (NASEM) workshop on Assessing and Improving AI Trustworthiness: Current Contexts, Potential Paths was held on March 3-4, 2021
- A workshop on Explainable AI was held January 26-28, 2021; a workshop summary is available
- A workshop on Bias in AI was held on August 18, 2020. A draft report covering the workshop discussions was published, and a recording is available on the event page. The final report, Towards a Standard for Identifying and Managing Bias in Artificial Intelligence (SP 1270), was published in March 2022
- A kickoff AI Workshop, Exploring AI Trustworthiness, took place on August 6, 2020. A recording of the workshop can be found on the event page
Ways to Engage
The ITL AI Program relies on and encourages robust interactions with industry, universities, nonprofits, and other government agencies in driving and carrying out its AI agenda. There are multiple ways to engage with NIST, including:
- NIST AI Consortium: ITL has established the NIST AI Consortium to empower the collaborative establishment of a new measurement science that will enable the identification of proven, scalable, and interoperable techniques and metrics to promote the development and use of AI.
- Requests for Information (RFIs): The ITL AI Program sometimes uses formal RFIs to inform the public about its AI activities and gain insights into specific AI issues. For example, an RFI was issued to help develop the AI Risk Management Framework.
- Share your input on draft reports: The ITL AI Program counts on stakeholders to review drafts of reports on a variety of AI issues. Drafts typically are prepared based on inputs from private and public sector individuals and organizations and then posted for broader public review on NIST’s AI website and via email alerts. Public comments help to improve these documents.
- Student Programs: NIST offers a range of opportunities for students to engage with NIST on AI-related work, including the Professional Research Experience Program (PREP), which provides valuable laboratory experience and financial assistance to undergraduate, graduate, and post-graduate students.
Sign up for AI email alerts on NIST’s AI website. If you have questions or ideas about how to engage with NIST on AI topics, or about NIST’s AI activities, send an email to ai-inquiries [at] nist.gov.
Created June 16, 2020; updated March 27, 2026.