

Jun 10 2025
Artificial Intelligence

What Is Adversarial AI, and How Can State and Local Agencies Defend Against It?

Hacking groups such as FunkSec are using AI to develop Ransomware as a Service and deploy more convincing phishing schemes.

Cybercriminal groups are leveraging artificial intelligence to craft more evasive and effective attacks, and the rise of this adversarial AI has transformed the threat landscape for state and local agencies.

The infamous FunkSec ransomware group, for example, specializes in AI-assisted malware development. Other bad actors wield AI to perpetrate advanced phishing schemes, making it harder for traditional email security gateways to weed out malicious attachments and duplicitous social engineering content.

For the public sector, which has been relentlessly targeted by ransomware, the first step toward staying ahead of this growing cybersecurity risk is understanding how adversarial AI is evolving ransomware tactics.


What Is Adversarial AI?

Traditionally, adversarial AI has referred to efforts to undermine AI-driven activities. But a new definition is emerging.

“What we’re hearing more about now is adversaries using AI for malicious activity,” says Adam Meyers, CrowdStrike’s senior vice president of counteradversary operations. “We’re already seeing indications of this. We’ve seen threat actors such as Scattered Spider use AI — specifically, large language model technology — to automate.”

Experts say it makes sense that bad actors would look in this direction.

“On the plaintext side, this is something that can help hackers craft,” says Aaron Rose, security architect manager at Check Point Software Technologies. “What is my strategy? Am I going to infect them with an initial kind of loader, like a remote access Trojan? Then from there, am I going to deploy ransomware?” Beyond just strategy, AI can supercharge a cyberattack.

“Artificial intelligence allows adversaries to basically recode malware very quickly,” Rose says. “They’re able to automate this malware creation.”

How Ransomware Groups Use Adversarial AI

Ransomware has emerged as an area in which bad actors are putting AI to work.

In ransomware attacks, phishing and social engineering “allow you to first get into an organization,” Rose says. They’re what make ransomware possible.

“You need a foothold, you need somebody to download something or click something,” he says. “AI systems are very good at writing convincing emails that you can use for phishing attacks or for helping you with social engineering.”


Additionally, AI can make ransomware demands more effective.

“Ransomware actors steal data and then try to extort the victim by threatening to release it,” Meyers says. “They could use AI to more quickly find the things that are most sensitive for the victim — that would garner the highest payment.”

Sophisticated organizations such as FunkSec are already tapping the power of AI in the ransomware arena, Meyers says: “They claim to use AI to accelerate their operations, specifically to build the malware.”

The Rise of FunkSec and AI-Driven Threats

FunkSec rose to notoriety very quickly.

“In their first month, they were able to claim that they hit almost 100 victims,” Rose says. “What’s really interesting about them is that they are a RaaS, Ransomware as a Service; they’re not actually doing the majority of the attacks themselves. They offer whatever ransomware tools they have at essentially a subscription price, to anyone that wants to attack somebody.”

For state and local IT leaders, this should ring alarm bells.

“The most important thing is that this type of technology helps make these threat actors go faster,” Meyers says.

CrowdStrike tracks breakout time, which is how quickly an adversary can go from getting initial access to moving laterally within a breached system. “In 2023, that was 62 minutes. In 2024, that time dropped to 48 minutes, in part because of AI. The adversaries have gotten faster. In fact, they got 14 minutes faster last year,” Meyers says.
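Breakout time, as described above, is simply the gap between an intruder's first access and their first lateral move. A minimal sketch of how a security team might compute it from an intrusion timeline (the event labels and timestamps here are illustrative, not from any real product's log format):

```python
from datetime import datetime

# Hypothetical intrusion-timeline events; field names and values are illustrative.
events = [
    {"time": "2024-03-01T10:02:00", "action": "initial_access"},
    {"time": "2024-03-01T10:50:00", "action": "lateral_movement"},
]

def breakout_minutes(events):
    """Minutes elapsed from initial access to first lateral movement."""
    ts = {e["action"]: datetime.fromisoformat(e["time"]) for e in events}
    delta = ts["lateral_movement"] - ts["initial_access"]
    return delta.total_seconds() / 60

print(breakout_minutes(events))  # 48.0, matching the 2024 average cited above
```

Tracking this number over time is what lets defenders judge whether their detection and response are keeping pace with adversaries who, per Meyers, shaved 14 minutes off it in a single year.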

His takeaway: “State and local government entities also need to get faster to keep pace with the adversary.”


How State and Local Agencies Can Defend Against Adversarial AI

Rose advises starting with the fundamentals. “You need to understand what you have to protect. What are your external attack surfaces?” he says.

From there, state and local IT leaders and security teams should look to leverage AI-powered defensive solutions to combat adversarial AI threats.

“You need AI-type systems at the network level, doing real-time analysis of everything that’s coming into the organization,” Rose says. “You need it on your endpoint, you need it in your email security solutions, your mobile and so on.”

DIVE DEEPER: AI-driven ransomware can be thwarted with zero-trust networking.

It will be important for government IT organizations to find the right partner as they look to counter AI-powered ransomware.

“For state and local governments that are maybe not as well resourced, that don’t necessarily have the deep bench of talent that a lot of major enterprises have, you need a good partner,” Meyers says.

“You need somebody with the managed threat hunting capability that can find and track these human adversaries who are operating with the benefit of AI,” he says. “That partner can help state and local governments stay in front of those threats.”
