Two Center initiatives - the Cybersecurity Coalition (the Coalition) and the Hacking Policy Council (HPC) - submitted comments to the National Institute of Standards and Technology (NIST) in response to the initial public draft of the Cybersecurity Artificial Intelligence Community Profile (Cyber AI Profile). Both the Coalition and HPC support the creation of a Cyber AI Profile to help organizations identify and manage cybersecurity risks associated with the use and implementation of artificial intelligence technologies.

The intersection of AI and cybersecurity is a timely and significant topic for guidance from NIST that integrates and builds on the widely accepted, and widely adopted, Cybersecurity Framework (CSF). The Cyber AI Profile represents an opportunity to help organizations secure their AI systems as they work to deploy more, and more varied, AI systems. 

Below are the key themes from each set of comments.

Cybersecurity Coalition 

In its submission, the Coalition states that it supports the overall direction of the Profile and believes it reflects many of the principles the Coalition advanced in earlier comments. While the Profile and the principles guiding its development generally align with the Coalition’s goal of improving the cybersecurity ecosystem, the Coalition offers recommendations to strengthen the Profile and better ensure that it fits within the regulatory environment companies face. The Coalition’s comments centered on several key themes:

  • Lifecycle Alignment: AI cybersecurity risk management should extend beyond initial development and deployment. AI systems evolve through updates, retraining, and changes in operational context, all of which can introduce new risks. The Coalition encourages NIST to highlight the importance of ongoing monitoring, reassessment, and integration with existing vulnerability management and incident response processes. 
  • Framework Alignment: The Coalition also encouraged continued alignment between the Cyber AI Profile and existing NIST frameworks, including the AI Risk Management Framework and the Secure Software Development Framework, to reduce implementation friction and support broader adoption. The Profile could also be more intentionally aligned with other NIST and internationally recognized standards. While other NIST frameworks such as the AI RMF, SP 800-53, and the Privacy Framework are referenced, the Cyber AI Profile should more explicitly explain how these frameworks work together.
  • Governance: Governance could be better specified throughout the draft Profile. While individual controls are mapped to the CSF 2.0 Govern function, governance is a cross-cutting capability that should be better explored within the Profile narrative. Effective governance is particularly critical for AI systems given the pace of technological change, the nondeterministic nature of some AI systems, and their increasing integration. Governance can enable organizations to adapt their risk management over time as technology and threats advance. The Profile should place greater emphasis on high-level governance objectives and strategies, in addition to specific controls, and more closely align them with the CSF 2.0 Govern function, the AI RMF, and other recognized standards.
  • Autonomy and Agents: AI systems have increasing autonomy, with autonomous AI agents integrating with digital and physical systems. While NIST has other agentic AI workstreams in process, the Profile should more clearly define and explore security and risk management for these systems, especially with regard to each CSF 2.0 function. The Profile should work hand-in-hand with other NIST efforts, such as the ongoing RFI Regarding Security Considerations for Artificial Intelligence Agents and other National Cybersecurity Center of Excellence (NCCoE) projects. Organizations implementing agentic AI need to reduce risk without becoming overburdened, even for complex systems.
  • Identity and Authentication: AI agents and other systems perform a broad range of tasks, including high-risk and real-time activities that require robust identity, authentication, and access control. The rise of autonomous agents and agent-to-agent interactions will place new demands on identity management that are not addressed in the Profile, particularly in situations without a “human-in-the-loop.” Aligning AI identity management with zero-trust principles and traceability can help. Additionally, the challenge of distinguishing between human and AI identities should be covered within the Profile.

Hacking Policy Council

HPC is focused on improving the legal, policy, and business environment for vulnerability management and disclosure, good-faith security research, bug bounty programs, and independent security testing. Many HPC members are directly involved in the deployment, testing, and evaluation of AI systems, including AI red teaming and other AI security activities. HPC emphasized the importance of ensuring that the Cyber AI Profile meaningfully incorporates established cybersecurity practices while recognizing the unique risks introduced by AI technologies. HPC’s comments focused on several key themes:

  • Clarifying Scope and Terminology: NIST should clearly distinguish between different categories of AI-related risk. In particular, the comments stress the importance of differentiating between cybersecurity risks affecting AI systems and broader trustworthiness concerns like bias or hallucinations. While both categories warrant careful management, HPC noted that they are identified, validated, mitigated, and disclosed in different ways. Clear terminology will help organizations integrate the Cyber AI Profile into existing cybersecurity programs and align it with other NIST frameworks and international standards.
  • Vulnerability Management for AI Systems: A central theme of HPC’s comments is the critical role of vulnerability management as AI becomes more deeply embedded in software ecosystems. HPC highlighted the value of vulnerability disclosure policies (VDPs) in identifying and mitigating risks before they are exploited at scale. As AI systems introduce new classes of security and non-security issues, HPC encouraged NIST to recognize that organizations may need tailored disclosure and handling processes for different types of AI-related risks, while remaining aligned with coordinated vulnerability disclosure best practices.
  • AI Red Teaming: HPC also underscored the importance of testing and evaluation, including AI red teaming, as a core component of cybersecurity risk management. AI red teaming can help identify vulnerabilities, misuse pathways, and other behaviors with potential cybersecurity implications. HPC encouraged NIST to continue recognizing the value of red teaming and to emphasize the need for appropriate protections for researchers conducting authorized, good-faith testing.
  • Bug Bounties and Incentivized Research: HPC’s submission supports the use of bug bounty programs as a tool to encourage independent research into AI system vulnerabilities and flaws. Expanding bounty programs beyond traditional software vulnerabilities to include AI-specific risks can help organizations identify and address issues earlier in the lifecycle. 

Frances Shroeder & Andy Kotz
