The era of AI is shifting from models that simply "chat" to agents that "act." As we move toward systems capable of planning, executing tasks, and interacting with the real world autonomously, a critical question emerges: How do we keep these agents secure? To answer this, the Center for AI Standards and Innovation (CAISI) at NIST in Gaithersburg has issued a Request for Information (RFI). This is a call to action for the tech community to help shape the security standards for the next generation of AI.
Unlike traditional AI, agentic systems don't just provide information; they take actions. They can navigate software environments, manage files, or even interact with physical infrastructure. While this autonomy promises a massive leap in productivity, it also introduces a new "attack surface" that goes beyond traditional software vulnerabilities.
The RFI highlights that while agents share some common risks with standard software (like memory leaks or authentication bugs), they also face unique AI-driven threats:
Indirect Prompt Injection: Where an agent processes data from the web or an email that contains hidden instructions, tricking the agent into performing unauthorized actions.
Data Poisoning: Using insecure or manipulated models that have been "trained" to behave maliciously under specific conditions.
Alignment Risks: "Specification gaming," where a model achieves its goal in a way that is technically correct, but is at the same time harmful or dangerous to the computer network or software system it is working within.
NIST is looking for data and insights across several topics:
Threat Landscape: How do agent-specific threats evolve over time?
Development Best Practices: How can we build security into the agent's "brain" from day one?
Cybersecurity Gaps: Where do current security protocols fall short when applied to autonomous agents?
Measurement & Monitoring: How do we quantify the "safety" of an agent before it's deployed?
Guardrails: What interventions can limit an agent’s access to sensitive environments?
The responses NIST receives from industry leaders, researchers, and developers will directly inform voluntary guidelines and best practices used by organizations worldwide. As these systems become integrated into national security and public safety infrastructure, establishing a baseline for "what good looks like" is essential. "The security challenges not only hinder adoption today but may also pose risks for public safety and national security as AI agent systems become more widely deployed," NIST/CAISI warned in a press release announcing the RFI this week.
If you are a developer, security researcher, or deployer of AI systems, NIST wants your case studies, actionable recommendations, and technical insights. The submission deadline is March 9, 2026, at 11:59 PM ET. To submit any materials in response to this RFI, go to www.regulations.gov and search for docket no. NIST-2025-0035.

Free advertisement for the autocrat in charge.
ReplyDeleteThere is an autocrat in charge at NIST?
Delete