It’s fun to use the latest artificial intelligence (AI) chatbot to create a lullaby for your dog or to ask a voice assistant to tell a joke. And there are other useful applications too, like the AI systems that make my house more efficient and pleasant to live in.
While these use cases are entertaining or helpful, there are more serious applications for AI as well, and we need to make sure those applications are protected. To help with this effort, the Center for Cybersecurity Policy and Law is releasing “Cybersecurity and AI in Policymaking: Protecting the use of artificial intelligence in cybersecurity.”
The use cases for AI in cybersecurity are numerous. AI is well suited to security work, sifting through huge amounts of data to find the malicious needle in the haystack and lending a metaphorical hand to security processes where complexity is high and a speedy response is critical. AI can often find patterns in network traffic much more quickly than traditional analytics or human analysis, identifying threats and malicious activity based on many different interactions across a global network or a large set of infrastructure. Indeed, AI is already used across sectors to secure, protect, and harden digital and physical systems against malicious actors.
But governance of any new technology can be challenging. Policymakers must approach AI regulation the same way they approach cybersecurity: thoughtfully and deliberately, assessing and mitigating risks while protecting and enabling the development of new, beneficial applications of AI for security.
Like any rapid technological innovation, AI presents new and unique challenges for policymakers and regulators. In global conversations on how best to guide and regulate technology that uses AI, we must keep in mind the important role AI plays in protecting our digital and physical infrastructure and operations, and preserve our ability to protect ourselves with AI.
To start with, it’s important to remember what developers need to create AI for cybersecurity, including large volumes of high-quality data, data science teams to train and oversee systems, and the ability to customize systems for deployment. The paper discusses what AI is, how it’s used in cybersecurity, and what teams need to develop and deploy effective AI systems to advance cybersecurity.
The paper discusses potential regulation of AI and how policymakers should approach this topic. It is important to ensure that rules and regulations enable the positive contributions that a technology can make while curtailing the behaviors and outcomes that society seeks to minimize, especially for applications as important as cybersecurity.
The paper recommends regulations to maximize AI’s security role, including:
- Basing regulations on the potential for risk, and scoping rules to outcomes rather than tools used, while considering security exceptions to broader rules where appropriate.
- Clarity in definitions and scope, so that it is easy to understand when a rule does and does not apply.
- Data collection and analysis guardrails, including around privacy and data protection, data quality, and the need for comprehensive and unbiased data.
- Protection of the robustness, accuracy, and security of AI systems, especially for high-risk applications.
- Clear guidelines around scoring and discrimination, rather than outright bans.
- Human oversight requirements that reflect the risk posed by a given system.
- Documentation and recordkeeping requirements that help people understand AI systems without undermining security goals.
- Oversight of national security and law enforcement uses that have the potential to significantly impact people’s lives.
The full paper can be found here.