As artificial intelligence (AI) becomes increasingly embedded in critical systems, governments are accelerating efforts to integrate cybersecurity into the design, management, and deployment of AI technologies.
A recent multi-stakeholder workshop, convened by the Center for Cybersecurity Policy & Law under the Chatham House Rule, brought together stakeholders from government, academia, and industry to discuss the UK's new AI Cyber Security Code of Practice, offering insight into the UK's evolving approach to this challenge. The discussion highlighted a clear trajectory toward more concrete security expectations for AI and a shared interest in alignment across the AI cybersecurity landscape.
The Code sits within the UK's broader 2025 cyber policy agenda, which aims to secure new technologies from development through deployment. The government's cybersecurity strategy for 2025 and beyond emphasizes secure-by-design principles across the technology lifecycle, including for AI systems.
The Code addresses cybersecurity risks across the full AI lifecycle, from initial design to decommissioning. It applies to all AI systems and assigns responsibility across the supply chain, with particular emphasis on developers and deployers, while also recognizing the roles other actors play in the ecosystem. The Code has been updated in response to consultation feedback, including new provisions on decommissioning AI systems and more contextualized guidance on AI security.
Global Harmonization
One recurring topic was the importance of harmonizing the Code with international standards, including the NIST AI Risk Management Framework and emerging work at ETSI, to avoid duplicative efforts and foster an interoperable ecosystem. The government plans to update the Code and its accompanying Implementation Guide to mirror future ETSI standards, ensuring relevance beyond UK borders.
Participants voiced concerns over consultation fatigue and the overwhelming volume of concurrent global standards initiatives. The consensus was that more lead time and greater clarity about how feedback will shape implementation would help streamline industry engagement.
Implementation
While the draft Code received broad support in public consultations, an implementation gap remains a concern, particularly for organizations already deploying AI. During the feedback period, many such organizations reported minimal awareness of AI-specific security risks and limited technical capacity to assess and mitigate them.
Looking Ahead
The workshop made clear that maintaining relevance in this rapidly evolving space requires continuous iteration; risk assessments from even a year ago may already be outdated. Sustained collaboration with industry, regular updates to the Code, and transparency in the standards development process will be key to long-term success.
The UK’s AI Cyber Security Code of Practice represents a meaningful step toward defining practical, scalable, and internationally harmonized expectations for AI systems. Adoption hinges on ensuring that organizations understand the risks and have the tools and guidance needed to manage them.