As artificial intelligence (AI) becomes increasingly embedded in critical systems, governments are accelerating efforts to integrate cybersecurity into the design, management, and deployment of AI technologies.

A recent multi-stakeholder workshop, convened by the Center for Cybersecurity Policy & Law under the Chatham House Rule, brought together stakeholders from government, academia, and industry to discuss the UK’s new AI Cyber Security Code of Practice and offered insight into the UK’s evolving approach to this challenge. The discussion highlighted a trajectory toward clearer security expectations for AI and a shared interest in alignment across the AI cybersecurity landscape.

The UK’s broader 2025 cyber policy landscape aims to secure new technologies from development through deployment. The UK government’s cybersecurity strategy for 2025 and beyond emphasizes secure-by-design principles across the technology lifecycle, including for AI systems.

The Code addresses cybersecurity risks across the full AI lifecycle, from initial consideration through decommissioning. It applies to all AI systems and emphasizes responsibility across the supply chain, with particular focus on developers and deployers, while also accounting for other roles in the ecosystem. The Code has been updated based on feedback, including new provisions on decommissioning AI systems and more contextualized guidance on AI security.

Global Harmonization

One topic of conversation was the importance of harmonizing the Code with international standards, including the NIST AI Risk Management Framework and emerging work at ETSI. The goal is to avoid duplicative efforts while fostering an interoperable ecosystem. The government plans to update the Code and its accompanying Implementation Guide to mirror future ETSI standards, ensuring relevance beyond UK borders.

Participants voiced concerns over consultation fatigue and the overwhelming volume of concurrent global standards initiatives. The consensus was that more lead time, along with clarity about how feedback will influence implementation, would help streamline industry engagement.

Implementation

While the draft Code received broad support in public consultations, the implementation gap remains a concern, particularly for organizations already deploying AI. During the feedback period for the Code, many such organizations reported minimal awareness of AI-specific security risks and limited technical capacity to assess and mitigate them.

Looking Ahead

The workshop made clear that maintaining relevance in this rapidly evolving space requires continuous iteration; risk assessments from even a year ago may already be outdated. Sustained collaboration with industry, regular updates to the Code, and transparency in the standards development process will be key to long-term success.

The UK’s AI Cyber Security Code of Practice represents a meaningful step toward defining practical, scalable, and internationally harmonized expectations for AI systems. Adoption hinges on ensuring that organizations understand the risks and have the tools and guidance needed to manage them.

Grace O'Neill
