Introduction
Artificial intelligence (AI), automated decision making (ADM), and machine learning (ML) have hit the big time: academics, journalists, policymakers, and pretty much everyone else are talking about these technologies, usually referring to them collectively as AI. From voice assistants in living rooms to chatbots that write songs, from intelligent mapping software to intelligent security that protects our digital platforms and infrastructure, AI is pervasive in our daily lives and - as a result - in our policy work. It is increasingly used to secure both digital services and physical infrastructure. AI is uniquely well suited to cybersecurity because it looks for patterns and aberrations in large amounts of data in order to identify, predict, or mitigate outcomes - exactly the work cybersecurity requires, particularly as the systems in need of defense continue to grow in scale.
Cyberattacks continue to increase in volume and sophistication, with the potential to cause enormous digital, financial, or physical damage. AI is already helping to address the shortage of qualified cybersecurity workers by automating threat detection and response, work that is difficult to perform at scale without automation. Policymakers should treat AI regulation the way they treat cybersecurity: thoughtfully and deliberately, assessing and mitigating risks while enabling the development of new, beneficial applications of AI for security. Research has shown that a large majority of executives believe AI is necessary for effective response to cyberattacks, and that organizations using AI respond faster to incidents and breaches. While there is speculation about the role AI may play in malicious cyber activity, this paper addresses the regulation of legitimate actors.
AI governance is notoriously tricky. AI systems draw on huge amounts of data and computing power to detect threats and potential risks in real time, learning as they work. They build behavioral models that can detect and even predict attacks as they develop. Pattern recognition and real-time mapping of fraud and cybercrime allow defenders to bolster protections where they are needed and to prevent privacy breaches, identity theft, business disruption, and financial loss. Increasingly sophisticated AI-driven detection and mitigation of cyberattacks against critical infrastructure help ensure that public necessities like water and electricity remain available. But despite this sophistication and the appearance of working like magic (or like a human brain), AI is simply a set of computing techniques used in a vast number of ways to accomplish myriad goals.
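To make the pattern-recognition idea above concrete, the following is a minimal sketch, not taken from this paper, of the kind of anomaly detection commonly applied to security telemetry. It uses the scikit-learn library's IsolationForest; the feature names and event values are hypothetical illustrations, not a description of any particular product.

```python
# Minimal illustration (not from this paper) of pattern-based anomaly
# detection over security telemetry, using scikit-learn's IsolationForest.
# Feature names and values below are hypothetical.
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical features per login event:
# [hour_of_day, failed_attempts, bytes_transferred]
rng = np.random.RandomState(0)
normal_events = rng.normal(
    loc=[12.0, 1.0, 500.0], scale=[4.0, 1.0, 150.0], size=(1000, 3)
)
suspicious_events = np.array([
    [3.0, 25.0, 9000.0],   # off-hours burst of failed logins, large transfer
    [2.0, 40.0, 12000.0],
])

# Fit a model of "normal" behavior, then flag events that deviate from it.
detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(normal_events)

labels = detector.predict(suspicious_events)  # -1 = anomaly, 1 = normal
print(labels)  # expected: [-1 -1]
```

In practice, defenders feed models like this with far richer telemetry and retrain them continuously, which is what makes the scale and speed gains described above possible.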
Like any rapid technological innovation, AI presents new and unique challenges for policymakers and regulators. It is increasingly incorporated across the digital and physical operations of industrial and consumer-facing sectors. In global conversations about how best to guide and regulate technology that uses AI, we must keep in mind the important role AI plays in protecting our digital and physical infrastructure and operations, so that regulation preserves our ability to defend ourselves with AI.