Introduction

Artificial intelligence (AI), automated decision making (ADM), and machine learning (ML) have hit the big time: academics, journalists, policymakers, and pretty much everyone else are talking about them, generally referring to them collectively as AI. From voice assistants in living rooms to chatbots that write songs, from intelligent mapping software to intelligent security that protects our digital platforms and infrastructure, AI is pervasive in our lives and - as a result - in our policy work. It is already shaping our day-to-day lives and is increasingly used to secure both digital services and physical infrastructure. AI is uniquely well suited to cybersecurity because it looks for patterns and aberrations in large amounts of data in order to identify, predict, or mitigate outcomes - the same work cybersecurity often requires, particularly at the ever-growing scale of the systems in need of defense.

Cyberattacks continue to increase in volume and sophistication, with the potential to cause enormous digital, financial, or physical damage. AI is already helping to address the shortage of qualified cybersecurity professionals by automating threat detection and response - work that is difficult to do at scale without automation. Policymakers should approach AI regulation the way they approach cybersecurity itself: thoughtfully and deliberately, assessing and mitigating risks while enabling the development of new, beneficial applications of AI for security. Research has shown that a large majority of executives believe AI is necessary for effective response to cyberattacks, and that organizations using AI respond faster to incidents and breaches. While there is speculation about the role AI may play in malicious cyberactivity, this paper addresses the regulation of legitimate actors.

AI governance is notoriously tricky. AI systems draw on huge amounts of data and computing power to detect threats and potential risks in real time, learning as they work. They are built on behavioral models that can detect and even predict attacks as they develop. Pattern recognition and real-time mapping of fraud and cybercrime allow defenders to bolster protections where they are needed and to prevent privacy breaches, identity theft, business disruption, and financial loss. Detecting and mitigating cyberattacks against the operations of critical infrastructure keeps public necessities like water and electricity available. Yet despite its sophistication, and despite appearing to work like magic (or like a person's brain), AI is simply a set of computing techniques used in a vast number of ways to accomplish myriad goals.
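To make the "patterns and aberrations in large amounts of data" idea above concrete, here is a minimal, purely illustrative sketch of one of the simplest anomaly-detection techniques defenders use: flagging time windows whose event volume deviates sharply from the norm via a robust (median-based) z-score. The function name, data, and threshold are hypothetical examples, not drawn from any specific product or system discussed in this paper.

```python
import statistics

def flag_anomalies(counts, threshold=3.5):
    """Return indices whose value deviates sharply from the median.

    Uses a robust z-score (median absolute deviation) so a single large
    outlier cannot mask itself by inflating the mean and standard deviation.
    """
    median = statistics.median(counts)
    mad = statistics.median(abs(c - median) for c in counts)
    if mad == 0:  # all values (nearly) identical; nothing to flag
        return []
    return [i for i, c in enumerate(counts)
            if 0.6745 * abs(c - median) / mad > threshold]

# Hypothetical hourly login attempts; the spike at index 5 could indicate
# a brute-force or credential-stuffing attempt.
hourly_logins = [102, 98, 110, 95, 105, 4200, 101, 99]
print(flag_anomalies(hourly_logins))  # → [5]
```

Real deployments layer far more sophisticated models on top of this idea - learned baselines, behavioral profiles, correlation across data sources - but the underlying task is the same: separate routine activity from the aberrations worth a defender's attention.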

Like any rapid technological innovation, AI presents new and unique challenges for policymakers and regulators. It is increasingly incorporated across the digital and physical operations of industrial and consumer-facing sectors. In global conversations about how best to guide and regulate technology that uses AI, we must keep in mind the important role AI plays in defending our digital and physical infrastructure and operations - and ensure that regulation preserves our ability to protect ourselves with it.

Heather West
