Today, the White House and seven companies developing sophisticated artificial intelligence (AI) technologies - including generative AI, the technology behind some of the most impressive AI today - announced a series of voluntary commitments to build safety, security, and trust into the development of new AI models.
AI has captured the imagination of, well, everyone: newly available capabilities have vaulted AI to superstar status, with news stories, legislative proposals, and government strategies released on a daily basis. Cybersecurity is critical for artificial intelligence, and artificial intelligence is critical for cybersecurity. As our whitepaper discussed earlier this year, AI is a critical partner for cybersecurity professionals working to secure infrastructure, data, and people around the world. Regulators need to ensure that their proposals protect AI, so that AI can protect us. The Center applauds the approach in this announcement: focusing on safety, security, and trustworthiness is key to ensuring that sophisticated AI remains a core tool for cyber defenders and becomes a trusted part of our daily lives.
Unsurprisingly, it didn’t take long for both new and well-established cybersecurity considerations for AI to enter the conversation, with concerns that models could be hacked, poisoned, or otherwise corrupted, or that AI components might serve as entry points into other systems. Indeed, there is a history of malicious actors attacking the AI that defenders use to protect digital and physical systems. There are also instances of more complex and capable AI presenting a larger attack surface - and becoming a threat itself.
The announcement centers on two kinds of AI governance work:
- Ensuring that the AI products these companies develop are safe and secure, and earning the public’s trust. Some of these commitments build on previous commitments around testing and red teaming for AI.
- Sharing information, lessons, and best practices among companies and governments - including using AI for vulnerability management and reporting vulnerabilities in AI models and systems. Additional commitments around transparency and AI labeling can help earn public trust.
Taken together, this package of announcements, while voluntary, represents a potentially powerful body of work from these seven companies. It follows a summer of work on AI by the White House and others - with meetings between the President, the Vice President, and experts accompanying the National Standards Strategy for Critical and Emerging Technology to bolster investment, cross-sector participation, and workforce training for several technologies, including AI and machine learning.
At the same time, the administration announced new efforts to invest in responsible AI and assess existing generative AI models for vulnerabilities, with companies committing to open their models to red-teaming at DEF CON (I’ll be there!). The event will give security researchers access to AI models to identify vulnerabilities, allowing developers to take steps to improve those models. Today’s announcement also signals that the administration is working on an Executive Order, pursuing legislation, and that OMB’s policy guidance on AI will be coming soon as well.
