The reporting around Anthropic’s Glasswing program and Mythos’ ability to find vulnerabilities and exploits has been interesting. On the one hand, some of the mainstream reporting on the potential for a “vulnpocalypse” has been better than I would have expected.

While these stories are not perfect, they do a good job of describing the high-level issues, and the response to them, in a way that non-technical leadership can understand. On the other hand, the coverage has understandably oversimplified what the response needs to look like. The problem goes beyond getting patches in the right place and testing them quickly. We need to get more information out there about all of the areas where immediate resources are needed.

In discussing these issues with a range of companies and hearing how they are responding, I’ve sorted them into four buckets, with a definition of each, the risks it poses, and possible solutions for dealing with the increased number of vulnerabilities and exploits in that area. None of these will be easy to address, but I’m listing them in order from least to most concern:

  1. Known critical/high risk vulnerabilities that have previously been patched or otherwise mitigated:   

These are the main kinds of cybersecurity risks that we hear about every day. Based on what I’ve heard from those using Mythos, these known exploits are still the most likely to be found. This category also includes new exploits for known vulnerabilities that can be mitigated through existing means.

What problem does this cause if AI starts finding 1000x more of them?

As all defenders know, we are not very good at patching or implementing mitigations today, and there are many reasons why. If you patch poorly, you can break critical functions, making the cure worse than the disease. The scale at which new findings will arrive for the foreseeable future makes this problem much harder and more visible in every way.

How do we solve this?

  • We just have to get better at patching. Developing a workable patching plan for a large, diverse enterprise is hard, but it is essential.
  • We need more information shared about what is being found. 
  • We need to act faster on that information.
  • We need to test patches quickly.
  • We need to distribute patches quickly.  
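
The prioritization behind those steps can be made concrete. The sketch below is purely illustrative (the field names and triage rule are my own assumptions, not tied to any specific tool): when findings arrive at 1000x the old rate, an explicit ordering rule keeps the patch queue workable.

```python
# Illustrative triage sketch: rank vulnerability findings so that actively
# exploited, high-severity, fixable issues rise to the top of the patch queue.
# The field names and rule here are hypothetical, for illustration only.

def triage(findings):
    """Order findings: exploited-in-the-wild first, then by severity,
    then preferring issues that already have a patch available."""
    return sorted(
        findings,
        key=lambda f: (
            not f["known_exploited"],   # False sorts first: exploited issues lead
            -f["cvss"],                 # then higher severity first
            not f["patch_available"],   # then issues we can fix today
        ),
    )

queue = triage([
    {"id": "CVE-A", "cvss": 9.8, "known_exploited": False, "patch_available": True},
    {"id": "CVE-B", "cvss": 7.5, "known_exploited": True,  "patch_available": True},
    {"id": "CVE-C", "cvss": 9.1, "known_exploited": True,  "patch_available": False},
])
print([f["id"] for f in queue])  # ['CVE-C', 'CVE-B', 'CVE-A']
```

Note that the exploited 7.5 jumps ahead of the unexploited 9.8; a raw severity score alone would get that ordering wrong.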
  2. Vulnerabilities or exploits in end-of-life/end-of-warranty products:

Nation states have increasingly been targeting hardware and software that is past its prime. So far past its prime, in fact, that the companies that built these products have warned customers to stop using them and have since ended support and patching altogether. Even so, organizations often do not move off these systems, for reasons including:

  1. They do not know that the out-of-date product is running on their network.
  2. Switching away from the out-of-date product can break important functions; they need to verify that the new product will work and train staff on it, all of which takes time and resources that are often not a priority.
  3. The cost of the new product is high and they do not have the budget to switch.

What problem does this cause if AI starts finding 1000x more of them?

Today, these exploits are somewhat rare and mostly executed by nation states, since the attacker is likely building the exploits to get into very specific types of systems. Concern has been rising as this has been a major means of attack in cases like Salt Typhoon. If an attacker can have an AI agent build the exploit, these attacks will no longer be rare, because the time and effort barrier is removed. In other words, AI would make these attacks available to all types of attackers for a range of purposes.

How do we solve this?

Solutions to this problem have been noted for several years. The Center for Cybersecurity Policy and Law developed the Network Resilience Coalition to provide a comprehensive set of solutions for this problem in telecommunications networks. All of these solutions are still valid for telecommunications and beyond. But the simple answer here is:

  • Know what is on your network 
  • Prioritize moving away from end of life products.  

It may be expensive, but it is less expensive than dealing with a hack, which is now much more likely.
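
“Know what is on your network” can be operationalized simply. The sketch below is illustrative only (the inventory, product names, and end-of-support dates are invented for the example): cross-reference an asset inventory against vendor end-of-support dates, and treat unrecognized products as a finding too.

```python
# Illustrative sketch: flag end-of-life or unrecognized products by checking
# a (hypothetical) asset inventory against vendor end-of-support dates.
from datetime import date

END_OF_SUPPORT = {                 # example vendor-published dates, invented
    "router-os-6": date(2023, 12, 31),
    "switch-fw-2": date(2026, 6, 30),
}

inventory = [
    {"host": "edge-rtr-01", "product": "router-os-6"},
    {"host": "core-sw-03", "product": "switch-fw-2"},
    {"host": "lab-box-09", "product": "unknown-agent"},  # unmanaged: also a risk
]

def flag_eol(inventory, today):
    """Return hosts running unsupported or unrecognized products."""
    flagged = []
    for asset in inventory:
        eos = END_OF_SUPPORT.get(asset["product"])
        if eos is None or eos < today:
            flagged.append(asset["host"])
    return flagged

print(flag_eol(inventory, date(2026, 1, 1)))  # ['edge-rtr-01', 'lab-box-09']
```

Real inventories are harder than this, of course; the point is that the check is mechanical once the inventory exists, which is why knowing what is on the network comes first.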

  3. Known vulnerabilities that have previously been considered low risk but are being used in a new way that might make them critical/high risk:

This is the most interesting of the increased types of vulnerabilities we have been hearing about. AI has sometimes been able to chain together what were previously thought of as lower-risk exploits to create something we would now consider high risk or critical. Chaining is not new with AI, but this demonstrates a new level of sophistication and raises different issues.

What problem does this cause if AI starts finding 1000x more of them?

When the chaining really is done entirely through lower-risk exploits, each link may be given low patching priority, or never mitigated at all if the mitigation might remove or interfere with functionality.
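
To make the chaining problem concrete, here is a conceptual sketch with invented findings: model each low-risk finding as a step from one level of access to another, and ask what an attacker can reach by composing them. Per-finding severity scores never see the composed outcome.

```python
# Conceptual sketch (hypothetical data): each low-risk finding moves an
# attacker from one access level ("pre") to another ("post"). Individually
# minor, together they can reach a critical outcome.

low_risk_findings = [
    {"id": "info-leak",    "pre": "unauthenticated",     "post": "knows-internal-hosts"},
    {"id": "ssrf",         "pre": "knows-internal-hosts", "post": "internal-network"},
    {"id": "default-cred", "pre": "internal-network",     "post": "admin"},
]

def reachable(findings, start):
    """All access levels reachable by chaining findings from a start level."""
    reached, changed = {start}, True
    while changed:
        changed = False
        for f in findings:
            if f["pre"] in reached and f["post"] not in reached:
                reached.add(f["post"])
                changed = True
    return reached

print("admin" in reachable(low_risk_findings, "unauthenticated"))  # True
```

Each finding alone might score low, but the chain takes an unauthenticated attacker to admin, which is exactly the critical-via-low-risk pattern described above.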

How do we solve this?

There are some existing cybersecurity tools aimed specifically at stopping this type of attack, and we need to evaluate how they handle these new versions of chaining. Cybersecurity researchers can also track these attacks, determine which chains are used most commonly or which mitigations are easiest to deploy, then treat those mitigations as critical and get the word out. In any case, this is not as easy or fast as simply applying known patches or upgrading outdated products.

  4. Previously unknown vulnerabilities or exploits that have not been patched:

From what we are hearing, Mythos, like most AI at this early stage, lacks a certain creativity. It does not yet find many truly novel vulnerabilities (aka “zero-days”). However, it is finding new exploits for existing vulnerabilities that need to be mitigated, and new vulnerabilities that look like older ones but are just different enough to need new mitigations. This will always be the most problematic category because of the time needed to create and test patches in some types of products.

What problem does this cause if AI starts finding 1000x more of them?

This is where we could be in big trouble. If patch creation and patch testing remain at their current pace, there will be a long gap between when an organization learns it is vulnerable and when it can protect itself.

How do we solve this?

We simply need to get much faster at creating patches. We need to use AI to model and create the patches, and virtual environments to test them, increasing the speed at which they get out the door. On top of that, we will once again need to solve the reasons people don’t patch even when they have the patch in hand (see #1).

Conclusion

As some commentators have noted, the cybersecurity community is handling these unprecedented issues very professionally. We knew this day was coming, and we have the ability to respond; now we need to do it. There will be no single, simple answer, but we need to make sure there is a dedicated response to updating equipment and software, speeding up patching, breaking new exploit chains, and keeping known patches and mitigations up to date.

Notably, all of these solutions will require greater resources. While AI can help increase the rate of patching, updating, and identifying out-of-date products, the rate at which new issues are identified will, in the short term, still require more knowledgeable professionals to oversee the process.

We can all hope that the end result of this process is less patching and fewer vulnerabilities, because products put into the ecosystem will begin getting tested by ever-improving AI before they come to market.

Ari Schwartz
