Executive Summary
Governments and standards bodies worldwide are developing frameworks to help ensure that AI systems are safe and secure. These efforts have produced multiple frameworks with varying levels of specificity, but with significant commonalities in how they approach AI risks and provide risk mitigation guidance.
A meaningful gap exists in the literature comparing these commonalities, particularly for the technical controls outlined in operational frameworks. This Crosswalk Analysis seeks to understand these commonalities by comparing the frameworks across the four core functions of the National Institute of Standards and Technology (NIST) AI Risk Management Framework: govern, map, measure, and manage. The Center for Cybersecurity Policy and Law compared several of these frameworks, at distinct levels of depth, to better understand how they relate to one another.
The micro-level operational frameworks — ISO/IEC 42001, Singapore’s AI Verify, and NIST AI RMF — differ from the macro-level governance frameworks — Bletchley Declaration, White House executive orders and administration AI governance actions, and Secure by Design principles — in their focus, scope, and audience. These frameworks complement each other, addressing the diverse needs of organizations, with guidance ranging from ethical governance to risk management, system testing, and formal certification. Together, they contribute to a robust ecosystem for AI governance.
In this paper, we provide recommendations to encourage stakeholders to continue developing these and other frameworks to drive global alignment of AI safety and security efforts. These recommendations are:
- Build on Established Principles to reinforce the goals and values across frameworks.
- Address Emerging Gaps to tackle novel risks in both frontier and mainstream AI.
- Encourage Multi-Stakeholder Collaboration with diverse stakeholder input and international alignment.
- Address the Lifecycle of AI Systems to manage risks as systems evolve over time.
- Anticipate Technological Evolution to remain relevant as innovations emerge.
- Provide Flexibility with scalable and tiered guidance.
- Promote Usability by avoiding overly technical language and offering actionable recommendations.
