Artificial intelligence (AI), automated decision making (ADM), and machine learning (ML) have hit the big time: academics, journalists, policymakers, and pretty much everyone else are talking about these technologies, generally referring to them collectively as AI. From voice assistants in living rooms to chatbots that write songs, from intelligent mapping software to intelligent security that protects our digital platforms and infrastructure, AI is pervasive in our lives and, as a result, in our policy work. AI already shapes our day-to-day lives and is increasingly used to secure both digital services and physical infrastructure. AI is uniquely well suited to cybersecurity because it looks for patterns and aberrations in large amounts of data in order to identify, predict, or mitigate outcomes - the same work cybersecurity often requires, particularly at the ever-growing scale of the systems in need of defense.
Cyberattacks continue to increase in volume and sophistication, with the potential to cause enormous digital, financial, or physical damage. AI is already helping to address the shortage of qualified cybersecurity professionals by automating threat detection and response - work that is difficult to perform at scale without automation. Research has shown that a large majority of executives believe AI is necessary for effective response to cyberattacks, and that organizations using AI respond faster to incidents and breaches. Policymakers should therefore approach AI regulation the way they approach cybersecurity: thoughtfully and deliberately, assessing and mitigating risks while enabling the development of new, beneficial security applications of AI. While there is speculation about the role AI may play in malicious cyber activity, this paper addresses the regulation of legitimate actors.
AI governance is notoriously tricky. AI systems use huge amounts of data and computing power to detect threats and potential risks in real time, learning as they work. Behavioral models can detect and even predict attacks as they develop, while pattern recognition and real-time mapping of fraud and cybercrime allow defenses to be bolstered where they are needed, preventing privacy breaches, identity theft, business disruption, and financial loss. Detecting and mitigating cyberattacks against the operations of critical infrastructure keeps public necessities like water and electricity available. Yet despite this sophistication, and despite appearing to work like magic (or like a human brain), AI is simply a set of computing techniques used in a vast number of ways to accomplish myriad goals.
Like any rapid technological innovation, AI presents new and unique challenges for policymakers and regulators, and it is increasingly incorporated across the digital and physical operations of industrial and consumer-facing sectors. In global conversations on how best to guide and regulate technology that uses AI, we must keep in mind the important role AI plays in securing our digital and physical infrastructure and operations - and avoid rules that undermine our ability to defend ourselves with AI.