
Feb 26 2025
Management

Preparing Your IT Infrastructure for AI

Asset management, observability, training and careful artificial intelligence use case selection should be top priorities for governments, experts say.

As more state and local agencies adopt artificial intelligence technology, they face fundamental challenges about what use cases are most valuable — and, crucially, what type of IT infrastructure best supports those use cases.

“A tremendous amount can be done with AI to improve citizen services and empower employees, and you shouldn’t be sitting on the sidelines,” says Mike Hurt, vice president of state and local government and education for ServiceNow. “But I also think a lot of vendors are confusing state and local decision-makers on what they can do with AI and how they should do it.”

While there is no one-size-fits-all roadmap for preparing for AI adoption, experts have identified several key mile markers that can help state and local agencies prepare their infrastructure to best capitalize on AI in the years ahead.


Core Pillars of AI Readiness

There are three primary considerations when evaluating your IT environment for AI readiness, says Public Technology Institute Executive Director Alan Shark.

1. AI for the Individual

“How do we use AI to improve an employee’s productivity and creativity, their ability to better communicate, write better reports, make better presentations and the like, both internally and to the public?” Shark asks. “The problem is that there is no one product that does it all.”

Software developers are flooding the market with AI-enabled products that all solve specific problems, which has left many agencies wondering where their money is best spent. There are a few potential ways to deal with this, Shark says.

“I recommend that local governments and state governments set up AI productivity centers,” he says. “These are dedicated workstations, physical or remote, that let employees access AI tools without needing individual licenses.”

An experimental environment could let employees, or select members of a center of excellence, work with the technology in a secure way without committing to large-scale licensing.

RELATED: Everything state and local agencies need to know about AI PCs.

This setup could work with agency crowdsourcing for use cases. At NASCIO 2024, Virginia state CIO Robert Osmond told StateTech that the commonwealth has created an AI registry to help employees at the state level identify potential use cases.

“We’ve approved over 20 different use cases within Virginia,” Osmond said. “Many of them range in things that are very productivity-oriented.” 
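For agencies experimenting with a similar approach, the registry idea maps naturally onto a simple record structure. The sketch below is a hypothetical Python illustration; the field names, review states and sample entry are assumptions, not Virginia's actual schema.

```python
# Hypothetical sketch of an AI use-case registry record, loosely modeled on the
# registry approach described above. Field names and review states are assumptions.
from dataclasses import dataclass, field
from enum import Enum


class ReviewStatus(Enum):
    PROPOSED = "proposed"
    UNDER_REVIEW = "under_review"
    APPROVED = "approved"
    REJECTED = "rejected"


@dataclass
class AIUseCase:
    title: str
    agency: str
    description: str
    data_sensitivity: str          # e.g., "public", "internal", "restricted"
    status: ReviewStatus = ReviewStatus.PROPOSED
    tags: list[str] = field(default_factory=list)


registry: list[AIUseCase] = [
    AIUseCase(
        title="Draft plain-language summaries of policy documents",
        agency="Department of Example",   # placeholder agency name
        description="Use a vetted LLM to summarize long documents for staff review.",
        data_sensitivity="internal",
        status=ReviewStatus.APPROVED,
        tags=["productivity", "documents"],
    )
]

# List the use cases that have cleared review.
approved = [u.title for u in registry if u.status is ReviewStatus.APPROVED]
print(approved)
```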

Workstation configurations are another key consideration when using AI at the individual level, Shark says, especially as AI PCs become more popular.

“This may be like the old days when you were issued certain configurations,” Shark says. “You had maybe three desktop or laptop configurations, maybe four. One would be the light user, one the medium user, one the heavy user and one the custom user for the most specialized cases, like GIS.”
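That tiering lends itself to a straightforward provisioning table. The following Python sketch is illustrative only; the tier names echo Shark's example, while the hardware specs are placeholder assumptions, not recommendations.

```python
# Illustrative tiered workstation profiles. Tier names follow the quote above;
# the specs are placeholder assumptions.
WORKSTATION_PROFILES = {
    "light":  {"ram_gb": 16,  "dedicated_gpu": False, "typical_use": "office productivity"},
    "medium": {"ram_gb": 32,  "dedicated_gpu": False, "typical_use": "local AI assistants, analytics"},
    "heavy":  {"ram_gb": 64,  "dedicated_gpu": True,  "typical_use": "media, data-heavy workloads"},
    "custom": {"ram_gb": 128, "dedicated_gpu": True,  "typical_use": "GIS and other specialized cases"},
}

# Looking up a profile when provisioning a new device (illustrative):
profile = WORKSTATION_PROFILES["medium"]
print(f"Provision {profile['ram_gb']} GB RAM, dedicated GPU: {profile['dedicated_gpu']}")
```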

2. AI at the Enterprise Level

AI chatbots that can interface with the public in dozens of languages represent an example of AI at the enterprise level. Data policies are crucial to deploying these larger-scale AI implementations securely and responsibly.

“If you have very sound data policies that take into consideration privacy and security and access, you wouldn’t even perhaps need an AI policy because your existing data policy would govern it,” Shark says. “What governments need to do, and they’re doing it at the federal and state level, is have somebody in charge of data, like a chief data officer or equivalent position.”

Before introducing AI that interfaces with larger data sets, agencies must assess their existing data and classify it in accordance with clear data policies. They must also evaluate how they will collect and classify future data to ensure ongoing, methodical data governance.
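In practice, that kind of governance can be enforced programmatically before any record reaches a model. The sketch below is a minimal, hypothetical Python illustration; the classification labels and the rule about which labels may feed an AI pipeline are assumptions standing in for an agency's own policy.

```python
# Minimal sketch of classification-aware filtering before data reaches an AI
# pipeline. The label taxonomy and the "allowed for AI" rule are assumptions.
from typing import Iterable

# Labels ordered from least to most sensitive (assumed taxonomy).
CLASSIFICATIONS = ["public", "internal", "confidential", "restricted"]

# Assumed policy: only public and internal records may feed a shared model.
AI_ALLOWED = {"public", "internal"}


def filter_for_ai(records: Iterable[dict]) -> list[dict]:
    """Keep only records whose classification the (assumed) policy permits."""
    allowed = []
    for record in records:
        # Default to the most sensitive label when a record is unclassified.
        label = record.get("classification", "restricted")
        if label in AI_ALLOWED:
            allowed.append(record)
    return allowed


sample = [
    {"id": 1, "classification": "public", "text": "Park hours update"},
    {"id": 2, "classification": "restricted", "text": "Case file excerpt"},
]
print(filter_for_ai(sample))  # only the public record passes
```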


“While we look back at what we already have, we have to also set up parameters for better data collection moving forward,” Shark says. “We’ve got to figure out, can we better classify it?”

“Your output is only as good as your data,” Hurt says. “And I’m seeing customers having to spend less time readying their data to be able to take advantage of AI, because ServiceNow and other companies have simplified data ingestion to be able to train models for enterprise use cases.”

Hurt underscores the importance of asset management and observability as agencies attempt to get a clear picture of their tech stacks to understand limitations.

“Once they’ve got all of their assets identified, their hardware and their software, they ultimately have a really good view of their entire enterprise,” he says. “From there, it’s easier to find and act on what has the most value and takes the least amount of time to solve.”
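A simple way to picture that visibility work is reconciling what discovery tools see on the network against what the asset register says is managed, so gaps surface before AI workloads are layered on top. The Python sketch below is illustrative only; the data sources and host names are hypothetical.

```python
# Hypothetical reconciliation of discovered assets against the asset register.
discovered_hosts = {"web-01", "web-02", "gis-05", "kiosk-17"}   # e.g., from a network scan
registered_hosts = {"web-01", "web-02", "gis-05"}               # e.g., from the asset register

unmanaged = discovered_hosts - registered_hosts   # seen on the network, not in the register
stale = registered_hosts - discovered_hosts       # in the register, not seen on the network

print("Unmanaged assets:", sorted(unmanaged))  # ['kiosk-17']
print("Stale records:", sorted(stale))         # []
```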

READ MORE: Data governance strategies are key to AI success.

3. Open vs. Closed AI Systems in Hybrid Settings

ChatGPT, Perplexity, Gemini and Copilot are all examples of what Shark calls “open systems” that operate through a public domain.

“This is where you want to be incredibly careful to make sure employees know that there’s no personally identifiable information or anything harmful or outwardly discriminatory or biased,” Shark says.

Cloud-based large language models (LLMs) can be powerful tools, but they risk compromising sensitive data in the form of inputs and providing false or misleading information in outputs.
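One common safeguard is screening prompts before they ever leave the agency. The sketch below is a minimal, hypothetical Python example; the regex patterns catch only obvious cases and are no substitute for a real data loss prevention tool or an agency's own policy.

```python
# Minimal pre-submission check for prompts bound for a public, cloud-hosted model.
# The patterns are illustrative (U.S.-style SSNs, emails, phone numbers) only.
import re

PII_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "phone": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}


def screen_prompt(prompt: str) -> list[str]:
    """Return the names of PII patterns found in a prompt (empty list if none)."""
    return [name for name, pattern in PII_PATTERNS.items() if pattern.search(prompt)]


hits = screen_prompt("Summarize the complaint from jane.doe@example.gov, SSN 123-45-6789.")
if hits:
    print("Blocked: possible PII detected:", hits)   # ['ssn', 'email']
else:
    print("OK to send to the public model.")
```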

By contrast, closed AI systems are available only through specific domains, to specific users. The federal government’s NIPRGPT is an example.

“Many of the chatbots operating within certain domains for 311 systems, maybe even public safety and social monitoring, stay within confined domains as assigned,” Shark says.

Anything that involves private information or sensitive documentation is better served by a closed system accessible only to authorized users, where data inputs never become public.


Another way of framing this conversation is around public cloud versus on-premises or private cloud. Deciding what to host where is a crucial aspect of leveraging AI at the state and local level. According to Shark, “on-premises is coming back,” fueled by broadband constraints and the need for greater speed at the network’s edge, as well as a desire to keep some AI use cases more private and secure.

“People are starting to have second thoughts and saying, cloud is great for storage, but some things are better on-premises,” Shark says.

This might include closed AI systems such as custom LLMs, but it can also include small language models and productivity use cases, which, in the near future, may increasingly be offloaded onto AI PCs.
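Keeping such workloads local can be as simple as pointing an application at a model served on agency hardware instead of a public cloud API. The sketch below assumes a locally hosted small language model behind an OpenAI-compatible endpoint; the URL, port and model name are placeholders, not a specific product's defaults.

```python
# Rough sketch of an on-premises AI call: send a prompt to a small language model
# hosted on local hardware. Assumes a local runtime exposing an OpenAI-compatible
# chat endpoint; URL, port and model name are placeholders.
import requests

LOCAL_ENDPOINT = "http://localhost:8000/v1/chat/completions"  # assumed local server

payload = {
    "model": "local-small-model",  # placeholder model name
    "messages": [
        {"role": "user", "content": "Draft a two-sentence status update for a road-repair request."}
    ],
}

response = requests.post(LOCAL_ENDPOINT, json=payload, timeout=30)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```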

Training Is Crucial Every Step of the Way

“AI is a profound change, and we need profound training and education at every level, including for elected leaders,” Shark says. “The training part is to help people understand how it can best be used.”

From a user interface perspective, some vendors, such as ServiceNow, say they can consolidate AI functionality into a single pane of glass to minimize the amount of user training required.

“You can use your own language models with ServiceNow, you can use ours or you can use other language models in an interface that is already very familiar to so many organizations,” Hurt says. “To have it all under one umbrella like this really helps from a user training perspective.”

Like Hurt, Shark believes that the upfront work — data management, asset management, use-case identification and training — is well worth the potential outcomes, and that sitting on the sidelines is dangerous.

“AI can be very powerful with the right data sets in that it can identify within milliseconds patterns, trends and predictions faster than a human ever could,” he says. “For the first time, these systems might be able to find the needle in the haystack.”

UP NEXT: State CIOs identify the top AI challenges.
