What Is Adversarial AI?
Traditionally, adversarial AI has referred to efforts to undermine AI-driven activities. But a new definition is emerging.
“What we’re hearing more about now is adversaries using AI for malicious activity,” says Adam Meyers, CrowdStrike’s senior vice president of counteradversary operations. “We’re already seeing indications of this. We’ve seen threat actors such as Scattered Spider use AI — specifically, large language model technology — to automate.”
Experts say it makes sense that bad actors would look in this direction.
“On the plaintext side, this is something that can help hackers craft their strategy,” says Aaron Rose, security architect manager at Check Point Software Technologies. “What is my strategy? Am I going to infect them with an initial kind of loader, like a remote access Trojan? Then from there, am I going to deploy ransomware?” Beyond just strategy, AI can supercharge a cyberattack.
“Artificial intelligence allows adversaries to basically recode malware very quickly,” Rose says. “They’re able to automate this malware creation.”
How Ransomware Groups Use Adversarial AI
Ransomware has emerged as an area in which bad actors are putting AI to work.
In ransomware attacks, phishing and social engineering “allow you to first get into an organization,” Rose says. They’re what make ransomware possible.
“You need a foothold, you need somebody to download something or click something,” he says. “AI systems are very good at writing convincing emails that you can use for phishing attacks or for helping you with social engineering.”