
Jul 08 2025
Security

AI-Enhanced Attacks Require Increased Vigilance from Government Security Officers

Attackers are becoming more successful with the help of artificial intelligence, and agencies need both established and new methods of defense.

An unfortunate side effect of the rapid evolution of artificial intelligence technologies is that criminals are already using them to craft and customize their cyberattacks — with potentially devastating results.

CrowdStrike’s 2025 Global Threat Report emphasized the growing prevalence and effectiveness of phishing, deepfake and other social engineering attacks generated by AI. For example, AI-generated phishing emails achieved a staggering 54% success rate, while human-generated phishing emails succeeded only 12% of the time. This disparity will only widen in the coming months and years.


Criminals are also expected to use AI technologies to improve the success of their cyberattacks in other ways. According to a recent press release, Gartner has predicted that the average time criminals need to take over a user account will drop by 50% over the next two years because of the automation efficiencies that AI technologies can provide. The 2024 Department of Homeland Security report “Mitigating Artificial Intelligence (AI) Risk” describes several additional ways that attacks can be powered by AI, from disrupting supply chains and reverse engineering intellectual property to automating drone-based physical attacks on infrastructure.

Let’s look at how AI can shape cyberattacks — and how government agencies can better detect and mitigate them.

Adopt Defensive Best Practices

Many of the practices that agencies should already be following to defend against conventional cyberattacks also help stop AI-powered attacks.

Here are several examples of these practices:

  • Improve vulnerability management practices, including patching, updating and upgrading software and managing software configurations to substantially reduce the number of vulnerabilities in the enterprise and shrink the window of opportunity for exploiting each vulnerability.
  • Implement, maintain and closely monitor zero-trust architectures to minimize access to sensitive data and other valuable resources that attackers might be targeting. Log all such access attempts and respond quickly when suspicious activity is detected.
  • Transition the agency’s employees and other users to phishing-resistant authentication methods, such as multifactor authentication using passkeys. Phishing-resistant authentication can be incredibly effective against a wide variety of social engineering attacks. Monitor user accounts, especially accounts with access to sensitive information or belonging to individuals in sensitive roles, for unusual patterns of authentication attempts and usage (a simple monitoring sketch follows this list).
  • Use security tools to monitor and analyze emails and other forms of communication for suspected phishing attacks. A wide variety of tools deployed at the enterprise, network and host level can aid in phishing detection. Ensure that all such security tools are kept up to date with the latest threat intelligence to help in identifying emerging threats.
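To make the account-monitoring item above concrete, here is a minimal sketch that assumes a hypothetical stream of authentication events with user, hour, success and country fields. The field names and thresholds are illustrative assumptions, not the schema or policy of any particular identity platform.

```python
from collections import defaultdict

# Illustrative threshold; a real value would come from agency policy and baselining.
MAX_FAILED_PER_HOUR = 10

def flag_suspicious_accounts(auth_events, known_countries):
    """Flag accounts showing unusual authentication patterns.

    auth_events: iterable of dicts such as
        {"user": "jdoe", "hour": "2025-07-08T14", "success": False, "country": "US"}
    known_countries: dict mapping each user to the set of countries previously seen
    """
    failures = defaultdict(int)  # (user, hour) -> count of failed attempts
    flags = []

    for event in auth_events:
        user = event["user"]

        if not event["success"]:
            # Rule 1: a burst of failed sign-ins within a single hour
            key = (user, event["hour"])
            failures[key] += 1
            if failures[key] == MAX_FAILED_PER_HOUR:
                flags.append((user, f"{MAX_FAILED_PER_HOUR}+ failed sign-ins during {event['hour']}"))
        elif event["country"] not in known_countries.get(user, set()):
            # Rule 2: a successful sign-in from a country never seen for this user
            flags.append((user, f"successful sign-in from new country: {event['country']}"))

    return flags
```

In practice, signals like these would feed the agency’s security information and event management alerting and zero-trust policy decisions rather than run as a standalone script.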

54%

The success rate of artificial intelligence-generated phishing emails in eliciting clicks on embedded links

Source: CrowdStrike, 2025 Global Threat Report, February 2025

Conduct Analysis and Verification of Communications

Ideally, both people and technologies will be prepared to determine whether communications are valid. For an agency’s workforce, expand and improve training on recognizing social engineering attacks in general and AI-generated attacks in particular. Make sure the training covers all the forms of communication employees might encounter in their jobs, including email, texts, voicemails and phone calls, videos and social media.

An agency’s processes must reinforce the need to validate communications. Establish and communicate clear procedures for verifying unexpected requests, such as an email from management directing an immediate funds transfer to a third party. Closely monitor financial accounts that might be targeted by attackers.

Finally, consider acquiring and using advanced technologies to detect AI-generated media and verify human identities. Michael S. Barr, a member of the Board of Governors of the Federal Reserve System, recently spoke about the need for this. Among his recommendations was that “identity verification processes should evolve in kind to include … facial recognition, voice analysis and behavioral biometrics to detect potential deepfakes.”
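As a rough, hedged illustration of the behavioral biometrics idea Barr mentions, the toy sketch below compares a session’s keystroke timing against a stored per-user baseline and flags large deviations. Production systems use far richer features and trained models; the feature, threshold and sample numbers here are assumptions made purely for illustration.

```python
from statistics import mean, stdev

def keystroke_anomaly_score(session_intervals_ms, baseline_intervals_ms):
    """Return how many standard deviations the session's average keystroke
    interval is from the user's historical baseline."""
    base_mean = mean(baseline_intervals_ms)
    base_std = stdev(baseline_intervals_ms) or 1.0  # avoid division by zero
    return abs(mean(session_intervals_ms) - base_mean) / base_std

# Baseline gathered during the user's normal work vs. a suspiciously uniform session
baseline = [210, 195, 230, 205, 220, 240, 198, 215]   # milliseconds between keystrokes
session = [120, 118, 121, 119, 120, 122]              # fast, machine-like cadence

if keystroke_anomaly_score(session, baseline) > 3.0:  # illustrative threshold
    print("Typing pattern deviates sharply from this user's baseline; step up identity verification.")
```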


Use AI To Fight AI

Sometimes, you need to fight fire with fire. Agencies may greatly benefit from leveraging AI’s capabilities to help them detect and respond to AI-powered attacks. The CDW Artificial Intelligence Report discusses this topic at length. In this report, Roger Campbell of CDW states, “The threats are increasing at such a rate that AI is pretty much not an option — it’s a requirement if you’re going to have a secure system. If you don’t have it, then you’re going to be woefully unprepared, because these attacks come very frequently.”

Many of the security technologies that agencies already use are adding AI functionality to improve their efficiency and accuracy. Evaluating and enabling these AI features could help agencies detect AI-powered attacks more quickly, act automatically to stop them from succeeding and limit the impact of attacks that do get through. Agencies can also encourage their cybersecurity product vendors and service suppliers to add beneficial AI functionality to their offerings to help counteract what attackers are doing.

Barr also spoke at the Federal Reserve about the value of using AI technologies to detect attackers’ use of AI. He highlighted, for example, the value of using AI to improve the monitoring, analysis and detection of unusual patterns of activity. While he spoke about banking activity, the same idea is just as valid for scrutinizing other types of activity. Keeping up with the pace of AI-powered attacks will almost certainly require defenders to take advantage of AI-powered detection methods.
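As one hedged sketch of what that could look like, the example below trains an unsupervised anomaly detector on simple per-account activity features and flags outliers for review. The features, the threshold behavior and the use of scikit-learn’s IsolationForest are assumptions for illustration only, not a description of any product or agency deployment mentioned in this article.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Illustrative per-account daily features: [logins, MB downloaded, distinct resources accessed]
baseline_activity = np.array([
    [12, 150, 8],
    [10, 120, 7],
    [14, 180, 9],
    [11, 140, 8],
    [13, 160, 10],
])

# Fit on historical "normal" activity; contamination is the assumed share of outliers.
detector = IsolationForest(contamination=0.05, random_state=0).fit(baseline_activity)

# Score today's activity; predict() returns -1 for accounts the model considers anomalous.
todays_activity = np.array([
    [12, 155, 8],      # looks routine
    [95, 4200, 60],    # bulk access pattern worth investigating
])
labels = detector.predict(todays_activity)

for row, label in zip(todays_activity, labels):
    if label == -1:
        print(f"Unusual activity pattern flagged for review: {row.tolist()}")
```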

