The Cybersecurity Coalition and the Better Identity Coalition (BIC) submitted comments in response to the National Institute of Standards and Technology (NIST) Center for AI Standards and Innovation (CAISI) request for information regarding security considerations for artificial intelligence agents.

Agentic AI marks a significant leap forward in how AI is used – integrating generative AI systems into personal and enterprise environments to autonomously handle complex tasks. This is especially useful in cybersecurity contexts, given the growing threats organizations face on a daily basis and the speed and scale at which agents operate.

The comments aim to provide NIST with information on agentic AI security use cases from the perspective of companies that are implementing and securing agentic AI.

Below are some of the key themes from each set of comments:

Cybersecurity Coalition:

In its submission, the Coalition highlights that agentic AI is an evolution of AI, not a new paradigm. NIST already provides a solid baseline of standards for protecting software and AI, and new work on agentic AI should focus on what makes it unique. Rather than developing entirely new standards, NIST's work should align with and build upon other established and ongoing security efforts. The Coalition's comments focus on several key themes:

  • Where Agentic Systems Break Traditional Assumptions: Agentic systems operate across diverse IT and OT environments, integrating tools, APIs, vendors, and data sources. Human oversight of autonomous agents that operate at scale and speed will become even more crucial to agentic security moving forward. In addition, the explainability of models and of the actions taken by systems will be increasingly necessary. “Black box” models and outputs should not be acceptable when agents are making impactful decisions that affect the physical world.
  • Context, Segmentation, and Environmental Risk: One of the defining features of the agentic AI threat landscape is the context in which agents are deployed. Unlike traditional software, agentic systems may plan, execute, and chain actions across multiple systems, tools, and trust domains rather than operating within a well-scoped environment. 

As a result, their risk profile is highly dependent on their environment and may differ from that of more traditional software or AI systems. Agentic systems cannot be appropriately managed without understanding their permissions, connections, and potential movement pathways. Guidance should therefore emphasize maintaining meaningful human visibility and oversight of agent activity. This includes logging, monitoring, and the ability to review and intervene in agent-initiated actions where appropriate (a brief illustrative sketch of such an oversight gate follows this list). Agentic systems are not isolated applications but actors embedded within a broader enterprise ecosystem, and risk management approaches must account for these attributes.

  • The Threat Landscape: The threat landscape for agentic AI systems includes the established risks associated with AI systems generally, including direct and indirect prompt injection, goal manipulation, training data poisoning, token manipulation, model extraction, excessive privilege, and related attack vectors. However, agentic AI applications are still evolving, and their risks will evolve with them. Policymaking should remain adaptable and responsive to evolving evidence, supported by continued threat research and iterative updates to risk management practices as new information becomes available.
  • Evaluation and Assessment of Agentic AI Security: At a baseline, organizations deploying agentic systems should conduct internal assessments addressing threats, system vulnerabilities, and contextual risk, including how autonomy, tool use, and operational environment shape potential impact. Adversarial testing should account for agent-specific attack vectors, including goal manipulation, multi-step exploitation of autonomous decision-making, and attacks that leverage tool access or cross-system chaining. Industry experience demonstrates that enterprise AI systems, like other software, can exhibit critical vulnerabilities when subjected to adversarial testing, with failures often surfacing within minutes of engagement. 
  • Continuous Evolution and Iteration: Both deployment applications and the underlying models behind AI agents are advancing at a pace that governance efforts will struggle to match. In this environment, standards and guidance must be developed early, iterated frequently, and structured to adapt alongside technological change. Static or infrequently updated frameworks will struggle to keep pace with increasingly autonomous and interconnected systems. NIST plays a critical role as a global leader in AI security standards. By providing principled, technology-neutral, and risk-based guidance for agentic AI systems, NIST can help ensure that agentic AI innovation continues in line with security, accountability, and public trust.
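To make the visibility and oversight points above more concrete, the following is a minimal, illustrative sketch; it is not drawn from the Coalition's comments or from any NIST guidance. It shows a hypothetical policy gate that records every action an agent proposes and holds higher-risk actions for human review. All names (AgentAction, OversightGate, the allow list) are assumptions made for illustration; a real deployment would integrate with enterprise identity, logging, and approval systems.

```python
# Illustrative only: a hypothetical oversight gate for agent-initiated actions.
# The allow-list policy and all names are assumptions, not coalition or NIST guidance.
import json
import time
from dataclasses import dataclass, field, asdict


@dataclass
class AgentAction:
    agent_id: str       # which agent proposed the action
    tool: str           # tool or API the agent wants to invoke
    parameters: dict    # arguments the agent supplied
    timestamp: float = field(default_factory=time.time)


class OversightGate:
    """Logs every proposed action and holds risky ones for human review."""

    # Hypothetical allow list: low-risk tools the agent may call autonomously.
    ALLOWED_TOOLS = {"search_docs", "read_ticket"}

    def __init__(self):
        self.audit_log = []        # durable, reviewable storage in a real system
        self.pending_review = []   # queue surfaced to a human operator

    def submit(self, action: AgentAction) -> str:
        record = asdict(action)
        if action.tool in self.ALLOWED_TOOLS:
            record["decision"] = "auto-approved"
        else:
            record["decision"] = "held-for-human-review"
            self.pending_review.append(action)
        self.audit_log.append(record)
        return record["decision"]


# Example: a read is auto-approved, a higher-impact action is held for a human.
gate = OversightGate()
print(gate.submit(AgentAction("agent-7", "read_ticket", {"id": 42})))
print(gate.submit(AgentAction("agent-7", "issue_refund", {"amount": 500})))
print(json.dumps(gate.audit_log, indent=2))
```

In this sketch the allow list stands in for whatever risk policy an organization adopts; the important properties are that every agent-initiated action is logged and that a human can review and intervene before high-impact actions execute.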

Better Identity Coalition:

The Better Identity Coalition's comments highlight the importance of starting any discussion of AI agent security with identity. Developing standards-based ways to manage the intersection between human identities and software identities will be critical to creating a foundation for safe and secure agentic commerce.

The comments outline a number of critical challenges to be addressed:

  • Differentiating Humans, Authorized Agents, and Malicious Bots: The growth of agentic commerce requires reliable mechanisms to distinguish among human users, agents legitimately authorized to act on a human’s behalf, and malicious or unauthorized bots. Existing bot-detection systems are designed primarily to block automation, not to validate trusted delegation. As a result, organizations will need standards-based approaches that cryptographically and operationally bind an agent to the human who has granted it authority. Given NIST’s leadership in digital identity, authentication, and biometrics, it is well positioned to help establish guidance that enables verifiable, auditable human-agent linkage while preserving security and scalability.
  • Secure Human-to-Agent Delegation and Scoped Authorization: Delegation of authority from humans to agents must be precise, limited, and enforceable. Individuals should be able to authorize agents to perform specific tasks without granting broad or unrestricted access. This requires robust human-to-machine authorization models, including mechanisms that allow agents to authenticate on a person's behalf without exposing that individual's primary credentials. Standards and best practices should emphasize least privilege, constrained delegation, and cryptographically verifiable proof of authorization to reduce the risk of misuse or credential compromise (a brief illustrative sketch of such a scoped grant follows this list).
  • Accountability, Auditability, and Liability in Agentic Transactions: When agentic systems fail — whether due to compromised identity, misconfiguration, or actions exceeding delegated authority — organizations must be able to reconstruct events and assign responsibility. This requires durable logging, transparent attribution of actions, and clear differentiation between human intent and agent execution. Governance frameworks should ensure that identity binding, delegation parameters, and transaction context are auditable and reviewable. Without strong accountability mechanisms, trust in agentic commerce will erode, particularly in high-value or high-impact use cases.
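The following is a minimal, illustrative sketch of the scoped-delegation idea described above; it is not drawn from the Coalition's comments and is not a proposed standard. It uses a simple HMAC signature so a relying party can verify that a specific human authorized a specific agent for a narrow, time-boxed set of actions. All names (issue_grant, verify_grant, the scope strings, the shared secret) are assumptions for illustration; real systems would build on established approaches such as OAuth-style token exchange or verifiable credentials, with proper key management.

```python
# Illustrative only: a hypothetical scoped delegation grant binding an agent to
# the human who authorized it. A toy scheme for explanation, not a real protocol.
import hashlib
import hmac
import json
import time

SHARED_SECRET = b"demo-secret"  # placeholder; a real system uses managed keys


def issue_grant(human_id: str, agent_id: str, scopes: list, ttl_seconds: int) -> dict:
    """Bind an agent to the delegating human, with narrow scopes and an expiry."""
    grant = {
        "human": human_id,
        "agent": agent_id,
        "scopes": scopes,                      # least privilege: only listed actions
        "expires": time.time() + ttl_seconds,  # constrained delegation: time-boxed
    }
    payload = json.dumps(grant, sort_keys=True).encode()
    grant["signature"] = hmac.new(SHARED_SECRET, payload, hashlib.sha256).hexdigest()
    return grant


def verify_grant(grant: dict, requested_scope: str) -> bool:
    """Check the signature, the expiry, and that the requested action is in scope."""
    unsigned = {k: v for k, v in grant.items() if k != "signature"}
    payload = json.dumps(unsigned, sort_keys=True).encode()
    expected = hmac.new(SHARED_SECRET, payload, hashlib.sha256).hexdigest()
    return (
        hmac.compare_digest(expected, grant["signature"])
        and time.time() < grant["expires"]
        and requested_scope in grant["scopes"]
    )


# Example: the agent may check order status, but not modify orders.
grant = issue_grant("alice", "shopping-agent-1", ["orders:read"], ttl_seconds=900)
print(verify_grant(grant, "orders:read"))    # True
print(verify_grant(grant, "orders:write"))   # False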

Both coalitions are grateful for the opportunity to provide comments to NIST and look forward to working together on future projects related to agentic AI security standards.

Andy Kotz
