The expansive artificial intelligence (AI) Executive Order (EO) signed by President Biden – Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence – brings together many actions that the Administration will take over the next year to guide the U.S. government’s use and regulation of the technology. The EO also includes efforts to develop best practices and standards for AI safety and security, and plans to take these practices and standards to international partners through diplomacy. This post examines only the cybersecurity aspects of the EO, but there are wide-ranging additional actions, from immigration and international diplomacy to the potential disparate impact of AI in housing and education.
An EO serves many purposes: it can force many separate government agencies to focus on specific priorities and issues, find consensus and common equities across agencies, develop shared understanding, and identify where new approaches are needed.
It can also serve to bring together the actions of all these agencies. This order does all that, while also exploring many topics related to the use of AI, including safety, responsibility, labor, equity, consumer protection, privacy, workforce, and furthering the U.S.’s global leadership. Cybersecurity and risk management play a core role throughout these issues and the actions in the order, shaping the approach that the Administration will take to encourage, protect, and manage risk.
Of course, this isn’t the Administration’s first foray into AI security, but it is the broadest. The EO focuses on processes for risk management throughout, and some of its best-developed elements are the actions around AI cybersecurity and safety, and the mitigation of security threats.
Standards for AI Development and Use
The EO directs the National Institute of Standards and Technology (NIST) to further develop standards for AI intended to guide government use of the technology and to shape industry practices and global frameworks, incorporating and iterating on work from the Artificial Intelligence Risk Management Framework (AI RMF). It calls for material specific to generative AI and “dual-use foundation models” in standards, including the Secure Software Development Framework (SSDF).
NIST is also tasked with developing guidance and benchmarks for AI red-teaming and for auditing AI capabilities. While these standards will be voluntary for most organizations, they will be required in some contexts, particularly for dual-use foundation models that are trained on large amounts of data and suitable for a broad set of tasks, some of which may pose a threat to national security. This focus on risk management standards permeates the entire EO, with the NIST AI RMF featuring throughout as a basis for future standards for government use, critical infrastructure, and particular regulated sectors.
Tracking AI Development
The EO will require AI companies to report how they train their models, and the results of their red-teaming and testing, before the release of advanced AI systems. To do this, the Department of Commerce will define the technical conditions for AI models and computing clusters that are subject to reporting requirements under the Defense Production Act, as well as new recordkeeping obligations under EO 13984. Put simply, these requirements seek to ensure that the government understands who is training advanced AI, what protections are in place for the training, and whether that AI is adequately reliable and safe.
These reports will give visibility into the results of safety evaluations and testing, the measures taken to meet safety and security criteria, and the measures taken to protect the training process, models, model weights, and other aspects of the AI itself. The Order also directs the Commerce Department to propose new recordkeeping obligations on foreign Infrastructure-as-a-Service (IaaS) transactions, particularly those that could create AI models that “have potential capabilities that could be used in malicious cyber-enabled activity,” including a Know Your Customer requirement for those transactions.
Creating Unintended Threats
There have been many concerns that advanced AI may help malicious actors, particularly in developing offensive cyber capabilities. The EO recognizes that the government holds significant information, and that data can pose different threats in the aggregate than it does in isolation. Using government data to train AI models could create additional risk; therefore, the Chief Data Officer Council will develop guidelines for security reviews that identify the potential for released Federal data to create new threats, including offensive cyber threats. These guidelines will inform agencies’ reviews of their data assets and the steps they take to address the risks of releasing that data.
The Order also calls for consideration of whether AI may be misused to enable the creation of chemical, biological, radiological, and nuclear (CBRN) threats, and of how to mitigate or counter those threats. It focuses in particular on the risk that pathogen and “omics” data sets could be used to increase biosecurity risks, and on ways to mitigate that risk.
Critical Infrastructure
Our country’s Critical Infrastructure (CI) – the systems, networks, and public works that a government considers essential to its functioning and the safety of its citizens, such as energy, agriculture, health, and communications – is subject to heightened security requirements. The protections laid out in the EO are no exception: it directs Sector Risk Management Agencies (SRMAs) to assess potential risks related to the use of AI, including how the technology may make critical infrastructure systems more vulnerable to critical failures, physical attacks, and cyber attacks. The EO then calls for SRMAs to incorporate and mandate appropriate elements of the NIST AI RMF and other security guidance in CI management.
Procurement and FedRAMP
One of the most notable elements of the EO is not a call for action, but a call against a particular kind of action: the EO cautions against outright bans and blocks on the use of generative AI. Instead, it points agencies to risk management practices, use guidelines, employee training, and secure tools. Additionally, these guidelines, best practices, and evaluations will be required for government procurement. To enable this, the EO also calls for GSA to issue a framework for prioritizing critical and emerging technology offerings in FedRAMP, the certification and compliance program for cloud products and services used by the Federal government, starting with generative AI offerings. Both government agencies and Federal contractors will need to abide by minimum risk management practices when using safety- or rights-impacting AI, including impact assessments, evaluation of the quality and appropriateness of training data, ongoing monitoring, and human oversight.
Is that all?
The EO calls for many additional reports and publications exploring aspects of AI use on which the Administration would like more clarity before considering additional action. Some of the more interesting reports will focus on the risks of widely available model weights for dual-use foundation models, and on standards for authenticating and labeling synthetic content. The EO also directs the creation of testbeds, evaluation tools, and other resources. We can expect a wealth of material on AI standards, capabilities, and risks over the next year.
Lastly, while the EO touches on the defensive use of generative AI, it does not cover national security uses. Instead, it directs the creation of a separate memo on AI governance, assurance, and risk management in national security, military, and intelligence applications, along with an exploration of potential uses of AI by adversaries that could threaten national security.
Conclusion
The Administration is touting this as the largest government action on AI ever taken – and they’re not wrong, whether measured by the number of actions or by the potential cybersecurity impact. The EO marks a critical step in the U.S. approach to AI, particularly in terms of risk management and cybersecurity. It also underscores the importance of AI in every sector, and highlights the Administration’s tricky job of balancing the risks and opportunities of the technology, both as it is used in government and as it is developed in the private sector.