Apr 22 2025
Security

Can AI Replace Human Intelligence Amid Federal Cybersecurity Budget Cuts?

Security operations centers can best harness artificial intelligence under human eyes.

State and local government cyber leaders were shocked to learn that the Multi-State Information Sharing and Analysis Center’s budget was being cut in half and that its longer-term future was in doubt.

MS-ISAC is a nonprofit institution and a division of the Center for Internet Security, funded through low-cost cyber services as well as support from the U.S. Department of Homeland Security. Its nearly 18,000 members rely on it as a critical lifeline, particularly the thousands of local governments that lack the resources to defend their own systems. As federal agencies continue to plan for significant cuts in both staff and programs, many key technical experts — with years of training, certifications and experience behind them — have been caught up in the purge. Some who defend the cuts as necessary believe artificial intelligence can more than fill any gap in cyber talent, and some go further, arguing that it may even do the job better.

AI-infused systems have certainly come a long way and are playing an increasingly important role in cyberdefense at all levels of government. But are they ready to replace humans? Experts warn that there are serious downsides to overreliance on AI in government cybersecurity, especially when it replaces seasoned experts.

Here’s a breakdown of both the potential risks and a few nuanced advantages, which together explain why a balanced approach is the better path forward.

1. Loss of Human Judgment and Contextual Awareness

AI systems excel at pattern recognition but lack the broader contextual awareness and judgment that experienced human analysts bring. Government cybersecurity often involves complex localized nuances, insider threats and strategic considerations that AI cannot fully comprehend.

2. Vulnerability to Adversarial Attacks

AI models can be targeted. Adversaries may exploit model blind spots through adversarial inputs (subtly altered data that deceives AI systems) or poisoning attacks (manipulated training data). Human oversight is crucial for detecting and responding to these strategies.
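
To make the idea of an adversarial input concrete, here is a minimal sketch in Python. The linear “detector,” its weights and the perturbation size are all hypothetical and invented purely for illustration; real evasion attacks target far more complex models, but the mechanics are similar: a small, targeted nudge to every input feature flips the model’s decision.

```python
# Minimal, hypothetical illustration of an evasion-style adversarial input.
# Nothing here reflects any real detector; it is a toy linear model.
import numpy as np

rng = np.random.default_rng(0)

# Toy linear "detector": flag as malicious when w.x + b > 0
w = rng.normal(size=20)
b = -0.5

def classify(x):
    score = float(w @ x + b)
    return ("malicious" if score > 0 else "benign"), round(score, 2)

# Start from a sample the detector correctly flags as malicious
x = 0.3 * w / np.linalg.norm(w)
print("original :", classify(x))

# Evasion step: nudge every feature slightly in the direction that lowers the
# score (for a linear model, against the sign of each weight)
epsilon = 0.15
x_adv = x - epsilon * np.sign(w)
print("perturbed:", classify(x_adv))  # the label flips to benign
print("largest single-feature change:", round(float(np.max(np.abs(x_adv - x))), 2))
```

Poisoning attacks work on the other end of the pipeline: instead of perturbing inputs at detection time, the adversary plants manipulated records in the training data so the model learns the wrong boundary in the first place.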

DIVE DEEPER: AI-driven ransomware causes problems for the government.

3. False Positives or Negatives

AI might flood systems with false alerts or miss sophisticated attacks that don't match known patterns. Experienced cybersecurity professionals can triage and contextualize threats, whereas AI might flag anomalies without understanding their significance.
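
The trade-off is easy to see with made-up numbers. In this hypothetical sketch, a simple threshold-based anomaly detector either buries analysts in false alerts or quietly misses real attacks, depending on where the line is drawn; a human still has to decide which failure mode is more tolerable and triage what remains.

```python
# Hypothetical risk scores only; the distributions and thresholds are invented
# to illustrate the false-positive / false-negative trade-off.
import numpy as np

rng = np.random.default_rng(1)

benign  = rng.normal(loc=2.0, scale=1.0, size=10_000)  # routine activity
attacks = rng.normal(loc=4.5, scale=1.0, size=20)       # real incidents

for threshold in (3.0, 4.0, 5.0):
    false_alerts   = int(np.sum(benign > threshold))    # benign events flagged
    missed_attacks = int(np.sum(attacks <= threshold))  # real attacks ignored
    print(f"threshold {threshold}: {false_alerts:5d} false alerts, "
          f"{missed_attacks:2d} of {len(attacks)} attacks missed")
```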

WATCH: AI transforms cybersecurity, for better and worse.

4. Ethical and Legal Responsibilities

When AI makes a wrong call — such as misidentifying a harmless user behavior as malicious — who is accountable? Human experts are essential for interpreting AI outputs and making decisions that align with legal, ethical and policy frameworks, particularly in government settings.

“Blaming AI if something goes wrong will not be an acceptable excuse.”

Dr. Alan R. Shark, Executive Director, Public Technology Institute (PTI)

5. Skill Atrophy and Talent Drain

If governments reduce their reliance on human cybersecurity teams in favor of AI tools, they risk losing institutional knowledge. Once that knowledge is gone, rebuilding a cadre of trained professionals is costly and time-consuming.

6. Black-Box Risk

Many advanced AI models (especially deep learning) operate as black boxes, offering little transparency into how decisions are made. In critical incidents or audits, being unable to explain why a decision was made can be a severe liability.

A balanced approach, however, also has to weigh AI’s potential benefits for cybersecurity, including:

  • Speed and scale: AI can process vast data streams quickly, detecting anomalies that would overwhelm human analysts.
  • Routine automation: Tasks such as log analysis, malware classification or patch prioritization can be automated, freeing humans to focus on strategy and response (a brief sketch follows this list).
  • Threat prediction: Machine learning models can identify emerging threat patterns and help predict future attacks.
  • Augmented decision-making: AI can serve as a force multiplier, enhancing human capabilities rather than substituting for them.
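
As one illustration of the routine automation bullet above, the sketch below uses made-up log lines and rules: the tooling disposes of the obvious cases automatically, but anything ambiguous is routed to a human analyst rather than decided by the machine. The patterns and log format are hypothetical; real security operations pipelines use far richer detection logic, but the human-in-the-loop pattern is the point.

```python
# Hypothetical log lines and triage rules, purely illustrative.
import re

AUTO_IGNORE   = re.compile(r"heartbeat|health-check", re.IGNORECASE)
AUTO_ESCALATE = re.compile(r"failed login.*admin|powershell -enc", re.IGNORECASE)

def triage(log_line: str) -> str:
    if AUTO_IGNORE.search(log_line):
        return "auto-ignore"      # routine noise, dropped automatically
    if AUTO_ESCALATE.search(log_line):
        return "auto-escalate"    # known-bad pattern, page the on-call analyst
    return "human-review"         # ambiguous: a person makes the call

logs = [
    "10:01 heartbeat from edge-router-3 OK",
    "10:02 failed login for admin from 203.0.113.7",
    "10:03 unusual outbound transfer, 2.1 GB to unknown host",
]
for line in logs:
    print(f"{triage(line):14s} | {line}")
```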

The immediate future will likely steer us toward augmented AI, whereby AI assists but does not replace skilled human talent; some refer to this as collaborative intelligence. Government cybersecurity is a complex, high-stakes domain. The future requires building and maintaining systems where human oversight, ethical responsibility and strategic thinking remain central to any cybersecurity position.

SUBSCRIBE: Sign up for the StateTech newsletter for weekly updates.

State and Local Agencies Must Find Security Balance

While a balanced approach may be preferred, most local and territorial governments have fewer than five staff members. Many operate with just one or two, who are expected to do anything and everything. This is where MS-ISAC has played such a significant supporting role in cybersecurity over the past 20 years. There is growing recognition that these local entities require considerable human assistance and guidance. No one is ready to turn over their digital infrastructure to AI.

Part of the problem, aside from its expense, is that AI is only as good as the data that feeds it. AI struggles to navigate the nuances and inherent inefficiencies of current systems. Local governmental institutions are closest to their constituents, who expect a human interface by default.

Most disturbing in the current budget-cutting process is the lack of understanding of how vulnerable local systems are to cybersecurity threats from both domestic and global actors. Given the increasing threat landscape, reducing or eliminating our first line of defense against cyberthreats is alarming; it’s akin to reducing police presence during a crime spree. This is not the time to reduce cybersecurity funding, expertise and talent. As promising as AI is, it is not a substitute for human intelligence, especially given the growing climate of cyber risk. Blaming AI if something goes wrong will not be an acceptable excuse.

UP NEXT: Prepare your IT infrastructure for AI.
