It’s fun to use the latest artificial intelligence (AI) chatbot to create a lullaby for your dog or to ask a voice assistant to tell a joke. And there are other useful applications too, like the AI systems that make my house more efficient and pleasant to live in.

While these use cases are entertaining or helpful, there are more serious applications for AI as well, and we need to make sure those applications are protected. To help with this effort, the Center for Cybersecurity Policy and Law is releasing “Cybersecurity and AI in Policymaking: Protecting the use of artificial intelligence in cybersecurity.”

The use cases for AI in cybersecurity are numerous. AI is well suited to assisting in cybersecurity, sifting through huge volumes of data to find the malicious needle in the haystack. It is uniquely suited to lend a metaphorical hand to security processes where complexity is high and a speedy response is critical. AI can often find patterns in network traffic much more quickly than traditional analytics or human analysis, identifying threats and malicious activity based on interactions across a global network or a large set of infrastructure. And indeed, AI is already used across sectors to secure, protect, and harden digital and physical systems against malicious actors.
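To make that idea concrete, here is a minimal sketch (not from the paper) of the kind of pattern-finding described above: an unsupervised model trained on ordinary network flows flags an unusual one. The feature set, values, and model choice are illustrative assumptions, not a production design.

```python
# Illustrative sketch only: unsupervised anomaly detection on network flows.
# The flow features and parameters below are hypothetical examples.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Synthetic "normal" flows: [bytes_sent, packet_count, duration_seconds]
normal_flows = rng.normal(loc=[5000, 40, 2.0], scale=[500, 5, 0.5], size=(1000, 3))

# A suspect flow: a short, high-volume burst resembling data exfiltration
suspect_flow = np.array([[90000, 900, 0.1]])

# Train on normal traffic; the model learns what "typical" looks like
model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal_flows)

# predict() returns -1 for anomalies and 1 for inliers
print(model.predict(suspect_flow))  # expected output: [-1]
```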

But governance of any new technology can be challenging. Policymakers must approach AI regulation the same way they approach cybersecurity: thoughtfully and deliberately, assessing and mitigating risks while protecting and enabling the development of new, beneficial applications of AI for security.

Like any rapid technological innovation, AI presents new and unique challenges for policymakers and regulators. In global conversations about how best to guide and regulate AI, we must keep in mind the important role AI plays in protecting our digital and physical infrastructure and operations, so that regulation does not undermine our ability to defend ourselves with AI.

To start with, it’s important to remember what developers need to create AI for cybersecurity, including large volumes of high-quality data, data science teams to train and oversee systems, and the ability to customize systems for deployment. The paper discusses what AI is, how it’s used in cybersecurity, and what teams need to develop and deploy effective AI systems to advance cybersecurity.

The paper discusses potential regulation of AI and how policymakers should approach this topic. It is important to ensure that rules and regulations enable the positive contributions that a technology can make while curtailing the behaviors and outcomes that society seeks to minimize, especially for applications as important as cybersecurity.

The paper recommends approaches to regulation that maximize AI’s role in security, including:

  • Basing regulations on the potential for risk, and scoping rules to outcomes rather than tools used, while considering security exceptions to broader rules where appropriate.
  • Clarity in definitions and scope, so it is easy to understand when a rule does and does not apply.
  • Data collection and analysis guardrails, including around privacy and data protection, data quality, and the need for comprehensive and unbiased data.
  • Protection of the robustness, accuracy, and security of AI systems, especially for high-risk applications.
  • Clear guidelines around scoring and discrimination, rather than outright bans.
  • Human oversight requirements that reflect the risk posed by a given system.
  • Documentation and recordkeeping requirements that help people understand AI systems without undermining security goals.
  • Oversight of national security and law enforcement uses that have the potential to significantly impact people’s lives.

The full paper can be found here.

Heather West
