The rise of artificial intelligence is reshaping every aspect of the cybersecurity landscape, for attackers and defenders alike. While AI holds tremendous promise for automating much of the manual work of technical threat hunting, it also empowers threat actors to scale operations, automate attacks, and evade detection at a pace that traditional defenses struggle to match. The result is an AI arms race, with companies, incident responders, and the cyber insurance market caught in the middle.
To address this dual-use dilemma, the Center for Cybersecurity Policy and Law, Infoblox, and Verizon convened a virtual roundtable in late September as part of the Center’s ongoing series on Innovation in Cyber Insurance and Incident Response. The discussion brought together leaders from the cyber insurance, reinsurance, and digital forensics and incident response (DFIR) communities to examine how their industries can adapt to this fast-evolving threat environment.
This was the second in a continuing series of policy and industry roundtables designed to strengthen coordination between cybersecurity practitioners and risk professionals. The session was held under the Chatham House Rule, encouraging open, candid dialogue among participants.
Understanding the Threat: How AI is Changing the Game
The event began with a series of short threat briefings from Infoblox and Verizon highlighting how adversaries are using AI to expand the reach and speed of their operations.
Infoblox, a provider of secure DNS services, described how analysis of internet infrastructure traffic reveals that nearly a quarter of the 100 million observed queries show signs of malicious activity, and that the overwhelming majority of these attacks target specific companies rather than random victims. These trends demonstrate how attackers are leveraging automation and AI-driven pattern recognition to zero in on enterprise networks. To keep pace with the expansion of AI-created infrastructure, DNS intelligence has proven useful in predicting whether newly created, previously unseen domains are malicious, helping block adversaries’ attempts to phish users or communicate with malware already resident on a network.
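To illustrate the kind of signal DNS intelligence can extract from a newly observed, never-before-seen domain, the sketch below scores a domain name on a few lexical features (character entropy, label length, digit density, phishing bait words). The features and thresholds are hypothetical, chosen purely for illustration; they are not Infoblox’s actual model, which draws on far richer infrastructure data.

```python
import math
from collections import Counter

def shannon_entropy(label: str) -> float:
    """Character-level entropy; algorithmically generated labels tend to score high."""
    counts = Counter(label)
    total = len(label)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def score_domain(domain: str) -> float:
    """Crude, illustrative risk score in [0, 1] from a few lexical signals."""
    label = domain.split(".")[0].lower()
    score = 0.0
    if shannon_entropy(label) > 3.5:            # high randomness in characters
        score += 0.4
    if len(label) > 20:                         # unusually long label
        score += 0.2
    if sum(ch.isdigit() for ch in label) > 4:   # many digits
        score += 0.2
    if any(bait in label for bait in ("login", "verify", "secure")):
        score += 0.2                            # common phishing bait words
    return min(score, 1.0)

print(score_domain("x7f9a2kq0d8m3zh1bxy42.example"))  # high-entropy label scores high
print(score_domain("infoblox.com"))                   # short, low-entropy label scores low
```

In production, a score like this would feed a policy engine (for example, a DNS response policy) that blocks or sinkholes resolution of high-risk names before a phishing click or malware beacon completes.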
Our Verizon briefer, a senior leader on the company’s incident response team, focused on how AI is being integrated into ransomware and phishing campaigns, as well as the growing concern over “shadow AI.” The FBI has warned that AI will make these threats more sophisticated and more personalized. Attacks that once required advanced coding skills can now be launched by individuals using generative tools available online.
The result is a widening “AI adoption gap,” where attackers are innovating faster than many organizations are adapting their defenses. At the same time, the recent DBIR cited the potential for leakage of sensitive corporate data to the GenAI platforms themselves: 15% of employees surveyed were routinely accessing GenAI systems on their corporate devices, with 72% “using non-corporate emails as the identifiers of their accounts, ... most likely suggesting use outside of corporate policy.”
Industry Perspectives: Four Questions at the Center of the Discussion
The main group consisted of more than two dozen members of the cyber insurance and incident response community. Following the threat updates, participants turned to a structured discussion exploring how the cyber insurance and incident response sectors should evolve in response to these changes. Four key questions framed the debate.
1. What AI-Enabled Threats Are Emerging in Claims and Incidents?
Participants observed that AI is beginning to surface indirectly in incident reports and insurance claims, particularly through compromised advertising technologies, automated phishing campaigns, and data misuse by embedded tracking tools.
From an insurance standpoint, most current cyber policies do not explicitly exclude AI-enabled incidents. AI is viewed not as a distinct class of threat, but as a technology that can amplify existing risks. The conversation emphasized that this distinction matters less for claims adjudication and more for understanding how AI-driven tools affect the frequency and scale of loss events.
The consensus: AI is not a new risk, but a powerful accelerator. As open-source and unregulated AI systems proliferate, their use in malicious campaigns is expected to increase in both frequency and sophistication.
2. How Do AI-Scaled Attacks Change Breach Modeling and Portfolio Liability?
Insurance and reinsurance professionals agreed that current models are not yet calibrated to account for AI’s impact on loss patterns. Traditional cyber risk models still treat incidents as isolated or statistically independent events. AI, however, changes the calculus, enabling adversaries to scale similar tactics across multiple victims simultaneously. We have long seen third-party SaaS providers cause serious outages or data loss for their customers; GenAI tools, primarily delivered through a SaaS model, add a new dimension to this risk.
Some participants raised the concern that this scaling effect could increase the systemic nature of cyber losses. Others noted that, so far, these risks are manifesting more as attritional losses – the accumulation of smaller, frequent claims – rather than catastrophic ones.
Beyond modeling, the discussion touched on the importance of monitoring data sovereignty and cross-border data flows. The ability of AI models to analyze or export sensitive data across jurisdictions introduces new exposures for multinational clients and raises complex questions around regulatory compliance.
3. What Controls Improve Underwritability for AI Risks?
The group emphasized that underwriting standards must evolve in parallel with technology adoption. A recurring theme was the need for organizations to demonstrate that they are “AI ready” — not simply in how they use AI internally, but in how they protect themselves from its misuse.
Key controls discussed included:
- Monitoring and controlling data leaving the network, to prevent unauthorized training or inference activity through controls like DNS security and network monitoring.
- Establishing policies for acceptable AI use within the enterprise, especially regarding detection and blocking of “shadow AI,” the unsanctioned employee use of generative tools.
- Integrating AI threat modeling into standard risk assessments, ensuring visibility into how models are developed, deployed, and maintained.
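The first two controls above can be illustrated with a minimal sketch that scans DNS query logs for resolutions of known GenAI SaaS domains from hosts that lack an approved exception. The domain names, log format, and sanctioning policy here are all hypothetical, standing in for whatever blocklist and telemetry an organization actually maintains.

```python
# Illustrative sketch: flagging "shadow AI" use from DNS query logs.
# GENAI_DOMAINS is a hypothetical stand-in for a curated list of GenAI SaaS domains.
GENAI_DOMAINS = {"chat.example-ai.com", "api.example-llm.io"}

def flag_shadow_ai(dns_log: list[dict], sanctioned_hosts: set[str]) -> list[dict]:
    """Return query records for GenAI domains made by hosts without an approved exception."""
    return [
        rec for rec in dns_log
        if rec["qname"].lower() in GENAI_DOMAINS
        and rec["src_host"] not in sanctioned_hosts
    ]

# Hypothetical log records: one sanctioned data-science host, one unsanctioned laptop.
log = [
    {"src_host": "hr-laptop-12", "qname": "chat.example-ai.com"},
    {"src_host": "data-sci-01", "qname": "api.example-llm.io"},
    {"src_host": "hr-laptop-12", "qname": "intranet.corp.local"},
]
alerts = flag_shadow_ai(log, sanctioned_hosts={"data-sci-01"})
for rec in alerts:
    print(rec["src_host"], "->", rec["qname"])
```

In practice the same matching logic would run at the resolver itself (for example via DNS response policies), blocking the lookup outright rather than merely alerting after the fact.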
Participants agreed that insurers are increasingly likely to reward organizations that can clearly articulate their AI governance frameworks, data handling practices, and third-party dependencies. Over time, such measures may directly influence insurability and premium levels.
4. How Should Incident Response and Attribution Evolve in AI-Driven Breaches?
The final discussion focused on the operational challenges of incident response in an AI-augmented environment. Forensic investigations must now consider whether an attacker used AI to create, disguise, or accelerate the breach and whether internal AI systems may have contributed to it through error or misuse.
Participants emphasized that AI-related breaches should still follow established response playbooks: contain, preserve evidence, and assess data exposure. However, attribution becomes more complicated when AI systems generate synthetic content, act as intermediaries, or obscure human involvement.
Legal and insurance considerations are also evolving. Questions remain about where responsibility lies when AI systems themselves are implicated: whether the fault lies with the enterprise using the AI, the vendor providing it, or the cloud infrastructure hosting it. The group noted that liability trends appear to be following the same path as early cloud computing: distributed responsibility, with limited precedent for direct accountability.
Key Takeaways: Aligning Policy, Technology, and Risk
Across all segments of the discussion, several common themes emerged:
- Blurring Boundaries of Risk: AI cuts across traditional silos, blending cyber, software liability, and data protection concerns.
- Modeling Gaps: The industry’s current actuarial models do not fully capture AI’s scaling effects or the potential for correlated losses.
- Governance as Differentiator: Insurers are beginning to see AI governance as a measurable indicator of cyber maturity.
- Shared Responsibility: Both AI vendors and enterprise users must clarify their respective roles in protecting data and managing model risk.
- Next Frontier for Policy Language: As AI becomes embedded in every layer of the digital ecosystem, future insurance products will likely include explicit provisions for AI-related incidents.
Looking Ahead
The roundtable underscored a shared recognition that the cyber insurance and incident response communities are entering a pivotal phase. As adversaries harness AI to scale attacks, defenders must respond with equal speed and coordination, not only in technical defenses but in how risk is modeled, transferred, and mitigated. New technologies that can scale with the attacks should be further explored and transitioned to “best practice,” just as controls like MFA and immutable backups have become in response to ransomware.
Future sessions in this series will continue to explore these intersections, including emerging approaches to AI liability, reinsurance modeling, and collaborative frameworks for intelligence sharing. The ability to fight adversarial AI will depend as much on cross-sector partnerships as on technological innovation.
In the end, the group agreed on one essential point: AI is not just another tool in the threat actor’s arsenal; it is a force multiplier. Understanding and mitigating that reality will define the next chapter of cyber resilience.