
Mar 27 2025
Artificial Intelligence

Build AI Readiness Around Use Cases and Data, Experts Say

For artificial intelligence, decisions about training, personnel, budgets and IT infrastructure must all be shaped by use cases and data.

What can AI help me do with my data, and is my data ready for it? Those are the central questions for government agencies as they prepare for AI, experts say. Everything else — training, budget, governance, whether AI workloads belong on-premises or in the cloud, and security — stems from there.

“Start with the use cases and look at the data elements that are needed,” says Jim Weaver, former CIO of North Carolina and national strategy advisor at Pure Storage.

“Data is the focus of the universe,” says Romelia Flores, distinguished engineer and master inventor for client engineering with IBM. “I have a couple of states that will tell me, ‘I’m not going to do anything with generative AI until I get a handle on my data and my data sources.’”

Data policy, classification, storage, quality, security, access and integration are central to using AI meaningfully and responsibly. But Flores says state and local agencies should evaluate low-risk use cases in tandem with data readiness.

“You don’t want that obsession with data to stop you from taking the leap of faith at some point,” she says.

The key, she and Weaver say, is to pilot the right types of use cases.


Build Around Low-Risk, High-Impact Use Cases First

Document management and intelligent document processing are examples of AI use cases with low risk and high impact.

Advanced use cases such as generative and agentic AI are riskier due to data privacy and security concerns and biases. Data management policies — and structuring training, governance and security around those policies — can mitigate those risks.

With strong data policies in place, “you wouldn’t even perhaps need an AI policy because your existing data policy would govern it,” says Alan Shark, executive director of the Public Technology Institute.

In addition to mature data policy, experts say agencies need to prioritize the following areas as they identify use cases and build AI readiness around them.

Back-Office Opportunities

The riskiest use cases are citizen-facing, Weaver says. By starting with back-office functions, agencies can make an initial foray into AI without gambling public trust on the technology. Back-office pilots also create a relatively safe space to develop the internal organization needed to get AI off the ground and to evaluate the agency’s readiness for riskier use cases.

For instance, the first AI use case in North Carolina was for statewide IT procurement.

“We examined where we had long procurement time frames, and we found they often stemmed simply from someone forgetting to do something,” Weaver says. “With AI, after 24 hours, we were able to automatically move it on, which brought those procurement time frames way down.”
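The mechanism Weaver describes can be pictured as a simple stale-step check that nudges idle procurement items forward. This is a hypothetical sketch for illustration only, not North Carolina’s actual system; the function and field names are invented:

```python
from datetime import datetime, timedelta

# Assumed threshold from the anecdote: advance any step idle for 24 hours.
STALL_THRESHOLD = timedelta(hours=24)

def advance_stalled_steps(steps, now=None):
    """Auto-advance procurement steps whose last activity exceeds the threshold.

    `steps` is a list of dicts with 'name', 'last_activity' (datetime) and
    'done' (bool) -- a deliberately simplified model of a workflow queue.
    Returns the names of the steps that were moved along automatically.
    """
    now = now or datetime.now()
    advanced = []
    for step in steps:
        if not step["done"] and now - step["last_activity"] > STALL_THRESHOLD:
            step["done"] = True  # move the item along so it stops blocking the queue
            advanced.append(step["name"])
    return advanced
```

In practice, a returned list like this would feed a human review log rather than silently approving anything, which keeps the automation low-risk in the spirit of the back-office use cases described above.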


Vendor Risk and Liability

Flores says that being aware of vendor-related protections, especially where generative AI is concerned, is another important part of risk evaluation.

“A lot of people use tools that have models built into them,” she says. “Are these vendors going to support you in your use case? And are they going to stand behind you in a court of law?”

AI vendors that offer indemnification to help protect against accidental copyright infringement include Google, Amazon and IBM.

Training, Culture and Personnel

Policies aren’t enough; stakeholders must be trained to adhere to them, Flores says. This isn’t as simple as holding a few training sessions. It’s also a cultural shift that needs to be reflected in agency staffing decisions.

For instance, at the state level, chief data officer positions and AI task forces are becoming common. This structural approach to AI knowledge and governance is key to letting use cases flow from the bottom up, while guardrails are built from the top down, so that business and policy can meet somewhere in the middle.

RELATED: State AI roadmaps must mitigate risks and build competent workforces.

IT Is the Linchpin for AI Readiness

If business and policy meet somewhere in the middle, they converge around IT.

IT decisions such as whether a workload should live on-premises or in the cloud are shaped by budget, AI use cases and organizational readiness for those use cases.

“When I’m pulling data from data sets, am I going to need additional GPUs to process things from a generative AI perspective? Do I need any kind of special data storage?” Flores says. “These things cost money.”

Many jurisdictions have moved IT back on-premises, partly because lift-and-shift migrations inflated their cloud costs, Weaver says. According to a CDW survey of IT decision-makers across industries, 84% of respondents said they’ve moved some workloads to the cloud and then back on-premises. Meanwhile, AI PCs have created an avenue to bring low-risk AI use cases to the network’s edge.

But Weaver, who generally advocates for “bringing the tool to your data,” says infrastructure placement is ultimately about maximizing your budget around existing resources.

“If you have the staff and the education and knowledge on that staff to effectively make use of the cloud, the cloud can be a more cost-effective alternative,” he says.


To an extent, this ties back to FinOps, Flores says, which is a good automation candidate for agencies as they prepare for wider use of AI. However, AI is anything but business as usual for IT, especially for the high-impact, high-risk use cases.

Agentic AI, for instance, can gather information from a variety of data sources, on-premises or in the cloud, to answer questions in a user-friendly chat interface. But it can only function if costs and risks are understood and controlled, and IT systems are governed in accordance with those controls to achieve the necessary access and integration.

“IT still needs to make sure that back-end systems have APIs so I can gain access to them effectively and efficiently,” Flores says. “And IT still needs to implement and manage single sign-on access to achieve appropriate access at the right times.”

Even simpler AI use cases, such as using tools with built-in AI features, require input from IT. Agencies need to know what AI model is being used for a given function, and they need to understand their risk exposure with that model.

“The burden is on the IT teams to understand those things,” Flores says. “They have to be closely tied with leadership and AI ethics boards to lend insights on how things really work underneath the covers and the impacts to them, because that’s the only way to deploy AI effectively and responsibly within an environment.”
