US standards body seeks input on securing AI agent systems

The US National Institute of Standards and Technology has opened a public request for information on how to securely develop and deploy AI agent systems. The initiative reflects growing concern about the security and safety risks posed by increasingly autonomous AI tools.

US standards body seeks input on securing AI agent systems

The Center for AI Standards and Innovation (CAISI), based at the US Department of Commerce’s National Institute of Standards and Technology (NIST), has issued a Request for Information (RFI) to gather views on the security of AI agent systems. The call, published on 12 January 2026, invites contributions from industry, academia, and the computer security community.

AI agent systems are a class of AI systems designed not only to generate outputs, but also to plan and take autonomous actions that can affect real-world digital or physical environments. These systems are increasingly used in areas such as software automation, decision support, and complex workflow management. While they offer potential gains in efficiency and innovation, CAISI notes that they also introduce new and distinct security risks.

According to the RFI, some risks facing AI agent systems resemble those found in traditional software, such as vulnerabilities in authentication mechanisms or memory management. However, CAISI’s focus is on risks that emerge when AI models are tightly integrated with software systems that can act on the world. These include situations where models interact with adversarial data, such as indirect prompt injection, or where models themselves have been compromised, for example, through data poisoning during training.

The RFI also highlights concerns that AI agents may cause harm even without external attacks. Examples include systems that pursue poorly specified objectives, engage in so-called specification gaming, or take actions that conflict with human intent or security requirements. CAISI warns that as these systems become more widely deployed, such failures could have implications not only for organisations but also for public safety and national security.

To address these issues, CAISI is seeking input on several areas. These include identifying security threats unique to AI agent systems, understanding how those threats may evolve over time, and assessing how well existing cybersecurity practices apply to autonomous AI. The RFI also asks for ideas on how to measure the security of AI agents, how to anticipate risks during development, and how deployment environments can be designed to constrain and monitor agent behaviour.

Responses to the RFI will inform CAISI’s future work on voluntary guidelines and best practices for AI agent security. They will also feed into ongoing research and evaluations carried out by the centre. Contributors are encouraged to provide practical examples, case studies, and actionable recommendations based on real-world experience.

The consultation is open until 9 March 2026, with submissions accepted through the US government’s regulations portal.

Go to Top