Make no mistake about it, artificial intelligence (AI) is here and it is already having an impact on network and enterprise security as companies build their cyberdefense strategy.

At a session at the Nvidia GPU Technology Conference (GTC), experts from Nvidia, Deloitte, and the Defense Advanced Research Projects Agency (DARPA) outlined the landscape and challenges that AI poses for creating a cyberdefense strategy in an AI-driven world. The panel wasn't all doom and gloom in its outlook, providing a series of examples and suggestions for ways that both public and private organizations can mitigate potential AI cybersecurity risks.

Looking at the risks and how AI is changing the threat landscape, the role of generative AI technologies, such as ChatGPT, cannot be overstated. Kathleen Fisher, director of the Information Innovation Office at DARPA, commented that with generative AI, adversaries can now rapidly generate fake websites and create customized phishing messages that can be highly effective.

"In recent times, you could have a fair guess that something might be spam, for example, if it was grammatically incorrect," Fisher said. "ChatGPT writes really well grammatically and it's better than many people."

The code-generation capabilities of AI are also a potential risk, as Fisher noted these systems are increasingly capable of writing the kinds of scripts commonly used in ransomware. She noted that the barrier for somebody who doesn't have much training to generate phishing emails and write ransomware has dropped dramatically.

Fisher isn't too worried about AI-generated spam and malware and doesn't expect that the severity of those specific attacks will be all that high. Longer term, however, she is more concerned about the potential risk of what she referred to as “cyber agents.” Those are autonomous agents (or bots) that have full large language model (LLM) capabilities and are able to execute more complex and nefarious attacks.

Adversarial AI and model poisoning

Kim Crider, managing director of AI innovation for national security and defense at Deloitte, commented that among her concerns is the risk of AI models being poisoned in some way. AI poisoning is the injection of malicious data into a model that leads the AI to make the wrong decision.

"There can be some sort of minor malicious input that is injected into the model and the model starts to act in a new way based on very small changes to the input that starts to fool the model," Crider said.

Crider noted that the U.S. military establishment wants its warfighters to have more AI capabilities across land, sea, air and space as a way to stay competitive against adversaries. That said, she cautioned that if the data feeding those models is compromised, then there is a real challenge.

How cyberdefenses can adapt to the new AI landscape

While AI technologies are relatively new, the way to defend against potential risks and limit the attack surface is not. In fact, both Crider and Fisher emphasized the importance of common cybersecurity practices that are necessary to help reduce all manner of risks.

The first step, according to Crider, is for organizations to understand and be aware of potential attack vectors, whatever they might be.

"You always have to be thinking that you're going to be breached or that you already have been   breached," Crider said. "You have to put yourself in the position of the attacker so that you can continue to shore up your defenses and understand where you could be at risk."

Beyond awareness of attack vectors, organizations should also take a system-level approach to building in resiliency, particularly with regard to AI. For example, if a particular data set or AI model is compromised, that shouldn't mean the failure of the entire system. Rather, Crider suggested that organizations use multiple models and types of data that all work together in a deployed environment. The goal is to have multiple models and sources of data that protect each other and create some degree of resilience, so that if one model is fooled, other models can kick in.
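A hedged sketch of that multiple-model idea, assuming a simple majority vote across diverse classifiers (the models, dataset and voting rule here are illustrative stand-ins, not Deloitte's architecture):

```python
# Three independently trained classifiers vote on each input, so a single
# fooled or poisoned model cannot dictate the outcome on its own.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=2000, n_features=20, random_state=1)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=1)

# Diverse model types trained on the same task; in practice each might also
# draw on different data sources to reduce shared points of failure.
models = [
    LogisticRegression(max_iter=1000).fit(X_train, y_train),
    RandomForestClassifier(n_estimators=100, random_state=1).fit(X_train, y_train),
    GaussianNB().fit(X_train, y_train),
]

def resilient_predict(models, X):
    """Majority vote across models; no single compromised model decides alone."""
    votes = np.stack([m.predict(X) for m in models])
    return (votes.mean(axis=0) >= 0.5).astype(int)

print("ensemble accuracy:", accuracy_score(y_test, resilient_predict(models, X_test)))
```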

In Crider's view, employing some of the fundamentals of cybersecurity, which include taking a zero-trust approach, is critical. With zero trust, there is no implied trust in any user or entity, and robust authentication schemes help verify every access and action. She also noted that it's important to be able to isolate potential vulnerabilities, which is also not a new concept in cybersecurity.
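As a rough illustration of that verify-everything posture, the sketch below re-authenticates and re-authorizes every single action rather than trusting a session or network location. The token scheme, user names and permission table are made-up stand-ins, not a reference to any particular zero-trust product.

```python
# No caller is implicitly trusted: every action re-verifies identity and
# authorization before it runs. All names here are hypothetical.
import hmac
import hashlib

SECRET_KEY = b"rotate-me-regularly"   # in practice, pulled from a secrets manager
ALLOWED_ACTIONS = {"alice": {"read_logs"}, "bob": {"read_logs", "update_model"}}

def verify_token(user: str, token: str) -> bool:
    """Check a per-request HMAC token instead of trusting a prior login."""
    expected = hmac.new(SECRET_KEY, user.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, token)

def perform_action(user: str, token: str, action: str) -> str:
    # Every access is authenticated and authorized; nothing is trusted by default.
    if not verify_token(user, token):
        return "denied: authentication failed"
    if action not in ALLOWED_ACTIONS.get(user, set()):
        return "denied: not authorized for this action"
    return f"ok: {user} performed {action}"

# Example: a valid token for alice, then an attempt at an action she lacks.
alice_token = hmac.new(SECRET_KEY, b"alice", hashlib.sha256).hexdigest()
print(perform_action("alice", alice_token, "read_logs"))      # ok
print(perform_action("alice", alice_token, "update_model"))   # denied
```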

"AI adds a new complexity to the problem set but it doesn't negate that the fundamentals of cybersecurity and cyber defense have to be employed," Crider said.