Close

New Workspace Modernization Research from CDW

See how IT leaders are tackling workspace modernization opportunities and challenges.

Jan 27 2026
Artificial Intelligence

AI Is Already in the Workplace. Is Government Ready for It?

Artificial intelligence-driven copilots, automation and analytics tools are reshaping workflows, but adoption gaps, trust concerns and skills shortages remain.

Across industries — from education to healthcare and state and local government — we’re seeing the same shift unfold: Organizations are no longer asking if they need to adopt artificial intelligence but how. That shift matters, because it signals a move from experimentation to operational reality.

But AI is not a plug-and-play upgrade. Every transformative technology brings friction. Cultural resistance, trust gaps and uncertainty about what’s appropriate use are as real as any technical limitation. In government, where the margin for error is smaller and public trust is essential, those challenges are amplified.

What we’re learning in our work with agencies is that AI success is not about the tools themselves. It’s about preparing the workforce — structurally, culturally and ethically — to use them well.

Click the banner below to gain more insights into overcoming AI challenges.

 

Why Workforce Readiness Comes Before Automation

Too many AI implementations fail not because the technology doesn’t work but because people aren’t ready for it. We still see organizations treat AI like a novelty — something to try once and move on. That’s not how meaningful adoption happens.

AI should be a collaborator, not an autopilot.

The most successful implementations we see are highly interactive. Workers are expected to interrogate outputs, refine prompts, validate assumptions and edit results. That’s a different approach than traditional software training. It requires a mindset shift — from passive consumption to active engagement.

Government leaders should also recognize that attitudes toward AI vary widely inside their own workforces. Some employees see it as a productivity breakthrough. Others see it as a threat — to their job security, their professional identity or even their sense of what work looks like. These emotional and behavioral factors matter just as much as technical ones.

Public-sector agencies must thread a narrow needle: capturing the operational gains of AI while avoiding perceptions of creepiness, overreach or dehumanization. How constituents feel about AI matters, and those feelings are shaped first by how government employees themselves experience these tools.

If agencies want human-centered AI externally, they need human-centered AI internally first.

READ MORE: State AI roadmaps must build competent workforces.

AI as a Teammate, Not a Shortcut

One of the most common misconceptions about AI is that it replaces thinking. In reality, it accelerates thinking — but only when used responsibly.

We’re already seeing practical, everyday wins: automated meeting notes, action item tracking, document summarization and workflow orchestration inside collaboration suites such as Microsoft 365 and Google Workspace. These use cases remove friction and free up time.

But the next evolution is more profound.

Soon, workers will manage teams of AI agents the way managers oversee human teams. You’ll set intent, review drafts, send work back for iteration and choose between competing outputs. That requires judgment, not blind trust.

AI can surface risks you haven’t considered. It can suggest options you didn’t know existed. It can challenge your assumptions. But it can’t be allowed to own decisions. Accountability must remain human.

There’s also a deeper workforce issue here: AI might become the best notetaker in the room, but how do we turn it into the best collaborator in the room? How do we use it not just to track what happened but to ask what should have happened — what was missing, what risks were overlooked, what perspectives weren’t considered?

Those are cultural questions, not technical ones.

AI Governance Isn’t Just About Data, It’s About Reasoning

When government agencies think about AI governance, they often focus on the obvious: security, privacy and data leakage. Those are critical. But they’re not enough.

AI doesn’t just store data. It reasons over it.

That reasoning — the logic paths, inferences and conclusions that models draw — becomes a form of data itself. Agencies need to understand how AI systems arrive at conclusions, not just what conclusions they deliver. That means auditability, traceability and transparency must extend into the reasoning layer.

Without that, you can’t explain outcomes. You can’t correct bias. You can’t improve accuracy. And you can’t defend decisions to the public.

We also see organizations underestimate how messy their knowledge environments really are. SharePoint repositories become dumping grounds. Old policies coexist with outdated procedures. That confusion becomes amplified when AI systems are trained on it.

But here’s the key: Don’t let imperfect data stall your AI journey. Start with targeted use cases. Clean what you need. Improve iteratively. Governance should evolve alongside deployment, not precede it by years.

One emerging risk agencies should watch closely is prompt injection — where hidden instructions inside documents manipulate AI behavior. Job applicants, for example, have attempted to manipulate AI recruiting tools to get them over the first review and into an interview. So, this sort of thing is already happening. And it reinforces why AI oversight must extend beyond traditional data management.

DIVE DEEPER: Governments need a strong data governance strategy.

Strategy, Tactics and Talent Must Move Together

Successful AI adoption requires three parallel tracks.

  1. Strategy: Agencies need a clear vision for what AI is meant to achieve, aligned to public service goals. This includes policies, ethical boundaries and clear success metrics.
  2. Tactics: Agencies must rapidly test, validate and discard use cases. The AI market is noisy. Discernment matters. Leaders need a repeatable way to decide what to pilot, what to scale and what to ignore.
  3. Talent: AI changes what “digital literacy” means. Workers need shared vocabularies, safe spaces to experiment and forums to exchange techniques. Culture doesn’t emerge by accident — it has to be cultivated.

This is why we emphasize frameworks over predictions. No one can reliably forecast where AI will be in a year. What agencies can do is build adaptive processes that allow them to absorb change continuously.

AI velocity is only increasing. The question is whether government workplaces will be ready to move with it — or constantly struggle to catch up.

Andriy Onufriyenko/Getty Images