
Sep 04 2025
Artificial Intelligence

AI in Government: Bracing for Transformational Impact

Today’s AI is just the on-ramp. Tomorrow’s AI will come much faster, and the public sector must prepare for this transformation now.

Some experts say that what governments do in the next two years with artificial intelligence could define the next half-century of public service. 

That may or may not be an overstatement, but it’s fair to say that state and local leaders face a stark choice: Either take a disciplined, strategic approach to AI adoption now or risk burning through scarce resources on short-sighted experiments that erode public trust.

AI is not just about creating new tools. It touches everything from fiscal stewardship and trust in government to preparing for seismic shifts in the nature of work and technology. It would be a costly mistake to overspend on a single, siloed use case, and an even costlier one to underinvest in AI.

If that sounds like a balancing act, it is. But it’s also an opportunity, and one that state and local governments can benefit profoundly from if they prioritize organizational change management around AI.

Fiduciary Responsibility Is Key in the Age of AI

It’s hard to separate AI from financial strategy.

Whether it’s a city council or a state agency, the fiduciary duty is clear: Be a good steward of taxpayer dollars. That means avoiding flashy one-off projects that fail to deliver. 

Instead, state and local leaders must build a roadmap that aligns AI initiatives with measurable returns. Many jurisdictions have already begun doing this, especially at the state level. 

Leaders must also think sequentially, deploying AI in ways that generate value at each step while minimizing risk. That requires acknowledging limits, asking how the public might react and setting realistic guardrails before rushing into automation.

This does not mean doing nothing and taking zero risks. Calculated risks will need to be taken, and that’s where organizational change management, cross-functional guiding coalitions and AI sandboxes will come into play.

Trust Is Central to AI and Government’s Use of It

If there are two things people don’t trust right now, they’re government and AI. Public trust in government is nearing a historic low. Meanwhile, the Edelman Trust Barometer shows that about one-third of the public is excited about AI, one-third is deeply skeptical, and the rest are somewhere in between. If government leaders implement AI poorly, half of residents could reject it outright.

Building trust isn’t just about policies. It’s also about listening to concerns, explaining why and how AI is being used and involving diverse perspectives in decision-making. 

Enter cross-functional guiding coalitions, also known as centers of excellence. Too often, AI efforts are siloed within IT or security teams, leading to what we call “cognitive blindness” about impacts in other areas. This concept is demonstrated in tests where people are asked to focus on something happening in a video, but in doing so, fail to see a gorilla walk through the frame. In the case of AI, the gorilla could be a breach of data privacy. 

To succeed, an AI coalition must have representation from operations, legal, public communications and beyond to ensure decisions are made transparently and with full context.

Chief AI Officers Must Take the Lead on Change Management 

The role of the chief AI officer has become more common in state and local government. There’s risk in treating the CAIO post as purely technical. Yes, AI demands data architecture, cybersecurity and vendor evaluation skills. But the biggest challenge isn’t the algorithms. It’s the people.

The ideal CAIO is part strategist, part communicator and part change manager with the skills to navigate the social, cultural and workforce disruptions AI brings. This means preparing employees for job transitions, guiding ethical adoption and ensuring AI supports human decision-making. 

Organizational change management is a must, and the CAIO needs to spearhead this, helping agencies to rethink workflows, not just automate existing ones. After all, you don't want to automate your mistakes; you want to tap AI’s transformational potential in a safe and productive way. 

Mastering Operational AI Transformation

It’s easier said than done, but what we’re talking about here is developing frameworks, testing things in sandboxes and moving fast without breaking everything. The reality is that state and local governments are being asked to make decisions based on incomplete knowledge.

To make this less daunting, CDW’s Mastering Operational AI Transformation (MOAT) program lays out a disciplined approach that includes:

  • Guardrails for security, bias detection and human oversight
  • Verification of outputs before they’re trusted in decision-making
  • Cross-functional governance to avoid blind spots
  • AI sandboxes (or rapid AI development environments) where teams can experiment safely before deploying at scale

Keep in mind that AI sandboxes are more than mere testing grounds. They’re training environments where staff can learn how to get the best results from AI, understand its limits and make mistakes without public consequences (because mistakes will inevitably be made).
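
To make the verification and human-oversight ideas above a bit more concrete, here is a minimal, hypothetical sketch of the kind of guardrail check a team might prototype inside an AI sandbox. The specific patterns, confidence threshold and function names are illustrative assumptions for this post, not part of CDW’s MOAT program or any particular product.

# Hypothetical sketch: a pre-release check for AI-drafted text in a sandbox.
# The PII pattern, confidence threshold and function names are illustrative
# assumptions, not part of CDW's MOAT program or any specific tool.

import re
from dataclasses import dataclass, field

@dataclass
class ReviewResult:
    approved: bool                 # True only if every guardrail passed
    reasons: list[str] = field(default_factory=list)

# A stand-in for real PII and policy scanners: flag anything that looks like
# a U.S. Social Security number.
SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def verify_output(draft: str, confidence: float, threshold: float = 0.85) -> ReviewResult:
    """Route an AI-generated draft to human review unless it clears every check."""
    reasons = []
    if SSN_PATTERN.search(draft):
        reasons.append("possible Social Security number in the draft")
    if confidence < threshold:
        reasons.append(f"model confidence {confidence:.2f} is below the {threshold} threshold")
    # Approve only when no guardrail fired; otherwise a person decides.
    return ReviewResult(approved=not reasons, reasons=reasons)

if __name__ == "__main__":
    result = verify_output("Your benefits request 123-45-6789 was received.", confidence=0.91)
    if not result.approved:
        print("Hold for human review:", "; ".join(result.reasons))

The point is the shape of the workflow, not the specific checks: nothing an AI system drafts reaches a resident until it clears the guardrails or a person signs off, and every held item doubles as a learning opportunity for staff.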

RELATED: Take these steps to ready your IT infrastructure for AI.

Prepare Today for AI Success Tomorrow

From a legislative point of view, states are taking wildly different approaches to preparing for AI, and that isn’t necessarily a bad thing. The goalposts keep moving, and they’ll probably shift again every three to six months.

What we can say with near certainty, though, is that governments can’t afford to wait for perfect clarity. They must get started, experiment responsibly and commit to ongoing refinement as technologies and public attitudes shift; waiting for the dust to settle is the surest path to failure.

This article is part of StateTech’s CITizen blog series.
