Tech Trends: What States Should Fund With 2026 Cybersecurity Grants
If Congress restores federal cybersecurity grant funding in 2026, state leaders will face a familiar pressure: Show quick progress against a fast-moving threat landscape. But security executives and researchers warn that the next wave of cyber risk won’t be solved by buying more point tools alone.
Instead, states should use any new State and Local Cybersecurity Grant Program dollars to modernize security operations with automation, build durable governance for artificial intelligence (AI) systems, and invest in continuous training and incident response readiness.
Aaron McCray, field CISO in CDW’s Global Security Strategy Office, says organizations are already being pushed toward “operational transformation” — moving from staffing-heavy security operations to autonomous defense using AI.
“It’s going to necessitate a shift away from relying on increased head count and moving towards scaling through AI automation,” McCray says, describing a trend toward platforms that automate routine security work and “move organizations toward adopting an AI SOC model.”
WATCH: Aaron McCray discusses how AI tools can help keep pace with AI threats.
That direction is echoed by Mike Morris, associate dean and senior director for cybersecurity programs at Western Governors University’s School of Technology, who says modern social engineering blends “traditional psychological manipulation with advanced AI techniques.” He argues that attackers increasingly target both humans and AI systems — and that resilience requires a unified approach to securing users and the AI tools they rely on.
Put Automation at the Center of Security Operations
For many states, the most impactful use of grant funding may be accelerating the shift to automated security operations — not as a futuristic concept, but as a practical response to “machine speed” threats and workforce constraints.
McCray says organizations should prepare to use AI to automate a large share of routine security tasks and described “SOC level, tier one” functions increasingly handled by agentic AI, moving teams away from traditional orchestration tools toward “AI-powered platforms” that can drive security operations.
For states, the policy implication is straightforward: Treat automation as a primary spending priority. That means funding not just new tools but the engineering work to integrate them into security workflows that improve detection and response, reduce alert fatigue, and preserve continuity of operations. McCray framed that outcome in terms of operational “resiliency, visibility and viability.”
READ MORE: Here are seven tips for buying managed detection and response.
Fund AI Governance Before Funding More AI
As states adopt AI for customer service, productivity and cybersecurity, grant dollars should also underwrite governance structures that keep autonomous systems from becoming new attack surfaces.
McCray says the strategic risk he worries about is “ungoverned deployment of autonomous AI systems,” and he argues that organizations should have a dedicated governance framework in place — citing the National Institute of Standards and Technology’s AI Risk Management Framework — before deploying autonomous or agentic AI. He also points to controls such as identity-first methods and attribute-based access control, plus monitoring system behavior and conducting continuous risk assessment.
Morris similarly emphasizes that AI systems are vulnerable to manipulation via jailbreaking and prompt injection attacks, including indirect prompt injection embedded in content that AI agents process. His point for public-sector leaders: AI safety isn’t just policy language — it requires robust engineering controls such as strict permissions, content filters and layered safeguards for sensitive actions.
Taken together, the message for 2026 grant planning is that states should budget for governance and control planes alongside any AI deployments. That includes funding the people, processes and technical guardrails to define approved use cases, enforce access controls, validate outputs and monitor for abuse.
Modernize Incident Response for AI-Driven Breaches
Automation and AI will reshape incident response as well. McCray says incident response plans “must be updated to address the speed and the unique vectors of AI driven breaches,” including the ability to trace and analyze why an autonomous system or agent made a particular decision.
Without that kind of monitoring and data collection, he warned, organizations won’t be able to correct course when something goes wrong.
States can translate that guidance into grant-funded priorities such as updating incident response playbooks to address AI-enabled scenarios; expanding logging and telemetry needed for model or agent forensics; and rehearsing response procedures that account for rapid, automated attacker behavior.
DIVE DEEPER: Adopting platforms improves speed and governance.
Invest in Human Operators: Training, Simulation and Verification
Even as states automate more security work, both McCray and Morris underscore the importance of the human element.
McCray says organizations need training so employees can recognize AI-generated threats, “challenge assumptions” and build literacy across technical and nontechnical roles. He also describes running simulations — including AI-driven attack scenarios — to build familiarity and identify gaps, akin to red-team and blue-team exercises.
Morris warns that social engineering is becoming more convincing and more scalable, amplified by AI-generated messages and deepfake identities. His conclusion aligns with McCray’s: Protecting people depends on culture and verification habits, while protecting AI systems depends on engineered controls — and both need sustained attention.
For a 2026 grant round, states can treat training as infrastructure: a continuous program of exercises, simulations and verification routines that keeps pace with evolving AI-enabled deception, rather than a one-time compliance check.
McCray’s view of 2026 cybersecurity trends centers on readiness for “AI-driven cybersecurity transformation,” with tactical priorities that include autonomous defense, robust governance and preparing the human element. Morris’s perspective reinforces why: Attackers will exploit both human trust and machine heuristics.
For state CIOs and CISOs, that combination offers a forward-looking blueprint for any renewed federal grant funding: Prioritize automation in security operations, fund governance and controls that make AI safer to deploy, modernize incident response for AI-era breaches, and invest in continuous training that helps public employees spot what AI makes harder to see.
