As artificial intelligence (AI) continues to advance, it is important to understand the privacy and security risks associated with these data-driven technologies. A few weeks ago, I joined the “ADGC on Privacy & Cybersecurity” podcast with host Jody Westby to discuss the basics of generative AI and its potential future impacts.

We wanted to educate listeners about what AI technologies can do, and the risks and benefits associated with using these technologies - especially as they capture everyone’s interest and attention. While chatbots and generative AI are not traditional cybersecurity policy fodder, it’s clear that both have implications for security and that their uses are developing at a breakneck pace.

For me, the public use of “generative AI” is one of the most interesting developments in the technology space. The press has focused a lot of attention on chatbots such as “ChatGPT,” but it is just one of many AI applications that produce “new” outputs: these systems are trained on large amounts of data and then generate new content from it. Instead of pattern matching and producing stock sentences, chatbots now create entirely new sentences and recontextualize existing ideas into new formats. As these systems progress, the security implications will grow.

Generative AI is now capable of creating new, high-quality, human-like outputs based on models that have been trained on vast quantities of data. These advances in machine learning have improved both the accuracy and the capabilities of the bots. And those capabilities are vast, from answering simple questions to creating fanciful verse -- my favorite is a lullaby I asked ChatGPT to write for my dog.
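To make that concrete, here is a minimal sketch of what asking a generative model for that kind of output looks like in code. It assumes the OpenAI Python SDK (v1+) and an API key in the environment; the model name and prompt are purely illustrative.

```python
# A minimal sketch of querying a generative model, assuming the
# OpenAI Python SDK (v1+) and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY automatically

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[
        {"role": "user", "content": "Write a short lullaby for my dog."}
    ],
)

# The model returns newly generated text rather than a canned reply.
print(response.choices[0].message.content)
```

The point is not the particular API: the same few lines, with a different prompt, produce verse, summaries, or email copy -- which is exactly why these tools cut both ways.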

But there are risks: chatbots are generating new things in creative ways, but at their core they are only connecting old ideas and have no ability to discern what is true. Chatbots rely on their training data to determine the answers they provide, which makes that data extremely important.

If a chatbot has been built with a biased set of training data, the answers it provides are likely to be biased as well - as with any AI system. And an AI system will do what it has been trained to do, whether that's connecting you with resources or roping you into a long, misleading conversation. It's worth noting that chatbots have fabricated content outright, producing citations for academic sources that don't exist but certainly sound credible - because they've been trained to do exactly that: sound credible.

Generative AI chatbots can also produce responses that sound increasingly human-like, which has the potential to change the way cyber threats are developed and executed. These models can now be used to automate the creation of phishing emails, social engineering attacks, and other types of malicious content.

Potentially worse, ChatGPT has also been used to write malicious exploit code. While the code it creates is not particularly sophisticated, generative models will continue to evolve and may eventually produce effective exploits that evade security tools - and not all chatbots will have the careful guardrails that OpenAI built into ChatGPT. Paired with existing malware, these models let attackers churn out endless code variations to stay one step ahead of malware detection engines.

Phishing and business email compromise are attacks that attempt to get a victim to disclose sensitive financial or personal information, and they rely on personalized messages to succeed. Now that ChatGPT can create convincing personal emails, attackers can generate them at scale with countless variations. The increased speed and volume of chatbot-produced attacks is likely to yield higher success rates than we have seen before, and much of the legacy security technology in place is not equipped to identify and protect against these advances.

As these technologies make it harder to sort malicious content from legitimate information, the security risks they pose may rise dramatically, and the security industry will need to respond. While hackers and others will use these tools to attack, the cybersecurity community must also learn to use them to our advantage and counter the threats posed by advancing artificial intelligence. If you want to learn more about artificial intelligence in general, we encourage you to listen to the entirety of the podcast here.
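As one small illustration of that defensive angle, here is a minimal sketch of using the same kind of model to screen a suspicious email. It again assumes the OpenAI Python SDK; the model name, prompt, and helper function are illustrative, not a production-grade detector.

```python
# A minimal, illustrative sketch of using a generative model defensively,
# assuming the OpenAI Python SDK (v1+) and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

def screen_email(email_text: str) -> str:
    """Ask the model whether an email looks like a phishing attempt."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[
            {
                "role": "system",
                "content": (
                    "You are a security assistant. Label the email 'likely phishing' "
                    "or 'likely legitimate' and briefly explain the signals you see."
                ),
            },
            {"role": "user", "content": email_text},
        ],
    )
    return response.choices[0].message.content

print(screen_email("Your account is locked. Click here to verify your password."))
```

A real deployment would need far more than this, but it shows the direction: the same fluency that makes AI-written phishing convincing can be turned toward spotting it.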

Heather West
