
NIST CAISI Seeks Input on Securing AI Agent Systems

NIST News
Published January 12th, 2026
Detected March 28th, 2026

Summary

NIST's Center for AI Standards and Innovation (CAISI) has issued a Request for Information (RFI) to gather insights on securing AI agent systems. The RFI seeks input on unique security threats, methods for improvement, and measurement approaches for these autonomous systems.

What changed

NIST's CAISI has published a Request for Information (RFI) concerning the security of AI agent systems, which are capable of planning and taking autonomous actions. The RFI aims to collect input from industry, academia, and the security community on unique security threats, methods for enhancing security during development and deployment, existing cybersecurity approaches, risk measurement, and deployment environment interventions. This initiative is driven by the recognition that AI agent systems, while promising, present distinct security challenges beyond standard software vulnerabilities, including risks from adversarial data, insecure models, and misaligned objectives.

Organizations involved in the development, deployment, or security of AI agent systems are encouraged to respond to the RFI. The input received will inform NIST's future development of voluntary guidelines and best practices for AI agent security and contribute to ongoing research. The comment period closes on March 9, 2026, at 11:59 PM Eastern Time, so stakeholders should engage proactively. Respondents are asked to provide concrete examples, best practices, case studies, and actionable recommendations.

What to do next

  1. Review the NIST RFI regarding AI agent system security.
  2. Consider submitting input on unique security threats, mitigation methods, and risk measurement for AI agent systems.
  3. Provide concrete examples, best practices, case studies, and actionable recommendations based on experience.
  4. Submit comments online at www.regulations.gov under docket no. NIST-2025-0035 before the March 9, 2026 deadline.

Source document (simplified)

NEWS

CAISI Issues Request for Information About Securing AI Agent Systems

January 12, 2026


The Center for AI Standards and Innovation (CAISI) at the U.S. Department of Commerce’s National Institute of Standards and Technology (NIST) has published a Request for Information (RFI) seeking insights from industry, academia, and the security community regarding the secure development and deployment of AI agent systems.

AI agent systems are capable of planning and taking autonomous actions that impact real-world systems or environments. While these systems promise significant benefits for productivity and innovation, they present unique security challenges.

AI agent systems face a range of security threats and risks. Some risks overlap with other software systems, such as exploitable authentication or memory management vulnerabilities. This RFI, however, focuses on distinct risks that arise when combining AI model outputs with the functionality of software systems. This includes risks from models interacting with adversarial data (such as in indirect prompt injection), risks from the use of insecure models (such as models that have been subject to data poisoning), and risks that models may take actions that harm security even in the absence of adversarial inputs (such as models that exhibit specification gaming or otherwise pursue misaligned objectives). These security challenges not only hinder adoption today but may also pose risks for public safety and national security as AI agent systems become more widely deployed.

The RFI poses questions on topics including:

  • Unique security threats affecting AI agent systems, and how these threats may change over time.
  • Methods for improving the security of AI agent systems in development and deployment.
  • Promise of and possible gaps in existing cybersecurity approaches when applied to AI agent systems.
  • Methods for measuring the security of AI agent systems and approaches to anticipating risks during development.
  • Interventions in deployment environments to address security risks affecting AI agent systems, including methods to constrain and monitor the extent of agent access in the deployment environment.

Input from AI agent deployers, developers, and computer security researchers, among others, will inform future work on voluntary guidelines and best practices related to AI agent security. It will also contribute to CAISI’s ongoing research and evaluations of agent security. Respondents are encouraged to provide concrete examples, best practices, case studies, and actionable recommendations based on their experience with AI agent systems. The full RFI can be found here.

The comment period closes on March 9, 2026, at 11:59 PM Eastern Time. Comments can be submitted online at www.regulations.gov, under docket no. NIST-2025-0035.



Source

Analysis generated by AI. Source diff and links are from the original.

Classification

Agency
NIST
Published
January 12th, 2026
Instrument
Consultation
Legal weight
Non-binding
Stage
Consultation
Change scope
Substantive
Document ID
https://www.federalregister.gov/public-inspection/2026-00206/request-for-information-security-considerations-for-artificial-intelligence-agents
Docket
NIST-2025-0035 (Federal Register document no. 2026-00206)

Who this affects

Applies to
Technology companies
Industry sector
5112 Software & Technology
Activity scope
AI Agent Development AI Agent Deployment Cybersecurity
Geographic scope
United States (US)

Taxonomy

Primary area
Artificial Intelligence
Operational domain
IT Security
Compliance frameworks
NIST CSF
Topics
Cybersecurity Technology
