
Aug 13 2025
Artificial Intelligence

A Guide to AI Governance for State and Local Agencies

Jurisdictions must develop AI governance bodies to properly identify, manage and mitigate the technology’s risks.


An early and aggressive adopter of artificial intelligence, New York State is putting AI to work across a broad range of agencies, using it to deter identity fraud, monitor prisoners’ phone calls and help seniors combat social isolation. Despite the benefits, the state’s comptroller says some AI tools lack adequate oversight, exposing agencies and the public to heightened risk.

Meanwhile, in Huntsville, Ala., the City Council is considering a program that would use cameras mounted on garbage trucks in tandem with AI-powered visual analysis tools to identify issues such as graffiti, illegally dumped trash and property neglect. While Huntsville officials touted the program’s potential benefits, some residents expressed concerns about privacy.

Cases such as these illustrate the clear risk-reward calculus for state and local governments as they rapidly expand use of AI in its various forms (agentic AI, generative AI, machine learning, etc.). While the benefits for constituents and agencies themselves are often substantial, so are the risks.

Fortunately, these risks aren’t going unnoticed. New York is among at least 30 states that, as of late last year, had issued guidance on state agency use of AI, according to the National Conference of State Legislatures (NCSL), with a focus on “building governance structures and privacy standards to support responsible use and evaluating their own technology and data infrastructure to ensure the reliability, safety and security of AI applications.”

The bottom line? Using AI brings with it stewardship and risk-mitigation responsibilities.


Establishing Guardrails for AI Governance

How should state and local governments be defining and fulfilling their responsibilities? While approaches will vary from jurisdiction to jurisdiction, here’s a look at seven key considerations and concepts that go into creating guidelines, guardrails, rules and restrictions to govern their use of AI.

1. Stay on Top of Developments in the AI Space

What are your peers doing on the AI governance front? What do thought leaders and innovators inside your industry and in the academic and business worlds have to say on the subject? What opportunities and risks are emerging that you should know about? As fast as AI technology is advancing and as rapidly as use is expanding, these are questions that people inside your organization who are on the AI front lines — cybersecurity teams, IT teams, etc. — should be asking.

To find answers, network with peers, visit key websites (such as NCSL’s technology and AI pages) and news sources that cover AI in the state and local space. Check online forums and maybe even use a generative AI-driven tool to help gather the latest intel.

2. Know the Risks

As a recent white paper from the Public Sector Network and SAP points out, “No matter the application, public sector organizations face a wide range of AI risks around security, privacy, ethics, and bias in data.” For example, training models using data collected for other purposes could lead to leaks and violations of privacy regulations. Generative AI can be prone to hallucinations. Confidential data could be included in a GenAI prompt that is shared with the large language model provider, leading to a leak. If the data that trains an AI model contains stereotypes or discriminatory content, that can create biases.

Cybersecurity is another big risk. Integrating AI-based apps and capabilities creates additional attack surfaces. Plus, there's third-party risk to consider. Almost one-third (31%) of cyber-related insurance claims were attributable to breaches originating with a third party, according to Dark Reading. As more digital tools integrate AI, it’s important to understand what LLMs your data is being shared with.
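The prompt-leak risk described above can be partially mitigated by redacting likely sensitive values before a prompt ever leaves the agency. The sketch below is illustrative only: the regex patterns and the `redact` function are assumptions for demonstration, and a production deployment would rely on a vetted PII-detection tool tuned to the agency’s data.

```python
import re

# Illustrative patterns only; a real deployment would use a vetted
# PII-detection library and patterns tuned to the agency's own data.
PII_PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def redact(prompt: str) -> str:
    """Replace likely PII with placeholder tokens before the prompt
    is shared with an external LLM provider."""
    for label, pattern in PII_PATTERNS.items():
        prompt = pattern.sub(f"[{label} REDACTED]", prompt)
    return prompt

print(redact("Contact Jane at jane.doe@example.gov or 555-867-5309, SSN 123-45-6789."))
# → Contact Jane at [EMAIL REDACTED] or [PHONE REDACTED], SSN [SSN REDACTED].
```

A redaction pass like this is a last line of defense, not a substitute for policy: employees still need clear rules about which data may appear in prompts at all.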


3. Emphasize Data Governance as a Key Part of AI Governance 

As heavily as AI relies on data to perform its work, any government agency that uses AI must have clear data governance standards, policies and protocols in place for employees. That data governance program should encompass cybersecurity measures, as well as provisions to ensure that citizen data is safeguarded from misuse. This includes establishing clear standards that specify how sensitive data must be handled. For instance, which data is accessible to which AI models? Treat data like the valuable asset it is, with systems to safeguard it from misuse, leakage and cyberattack.

Create guidelines and requirements for data set curation, including rules and responsibilities for anonymizing data. As part of your governance program, communicate and reinforce the practices and protocols that govern AI use in your organization to ensure employees are aware of and follow them. Hold people accountable by making AI compliance part of your training program and performance management, where employees understand that they will be evaluated on how closely they follow the AI governance program.
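The "which data is accessible to which AI models" question above can be made enforceable with an explicit allowlist. As a rough sketch, assuming hypothetical model names and sensitivity tiers (the actual classification scheme would come from the agency's governance program):

```python
# Hypothetical clearance map: which data-sensitivity tiers each AI
# model is approved to process under the governance program.
MODEL_CLEARANCE = {
    "internal-fine-tuned-llm": {"public", "internal", "confidential"},
    "external-hosted-llm": {"public"},
}

def can_process(model: str, data_tier: str) -> bool:
    """Return True only if the policy clears this model for this tier.
    Unknown models default to no access (deny by default)."""
    return data_tier in MODEL_CLEARANCE.get(model, set())

print(can_process("external-hosted-llm", "public"))        # → True
print(can_process("external-hosted-llm", "confidential"))  # → False
```

Encoding the policy as data rather than prose makes it auditable and gives the deny-by-default behavior that sensitive citizen data warrants.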


4. Elevate AI and Data Governance to a C-Level Issue

The onus is on agencies to protect themselves and their constituents from risk related to AI while maximizing the benefits. Those responsibilities should start at the very top of the organization rather than being left to IT and security teams. Simply put, treat AI with the level of respect it deserves. Don’t make guesses about it and don’t be blasé about its governance.   

5. Establish an AI Governance Body

When it comes to mitigating the risks associated with AI, the buck should stop with human beings. Specifically, a centralized, representative group of people from across the organization should be responsible for establishing a governance program and ensuring that it’s followed while overseeing and monitoring an agency’s AI activities. Part of that job is ensuring the auditability of models and apps, with the ability to monitor their output to guard against bias, drift or degradation. Model behavior can change quickly and unexpectedly, and human beings should be consistently monitoring them in case they do. Agencies must develop processes whereby humans validate AI-generated outputs.
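The drift-monitoring idea above can be sketched with a simple statistical check: compare the distribution of a model's recent output labels against a baseline captured when the model was approved, and escalate to human review when the gap exceeds a threshold. The baseline data, label names and threshold below are illustrative assumptions, not a production monitoring design.

```python
from collections import Counter

def label_distribution(labels):
    """Normalize a list of model output labels into a probability distribution."""
    counts = Counter(labels)
    total = sum(counts.values())
    return {label: n / total for label, n in counts.items()}

def total_variation(p, q):
    """Total variation distance between two label distributions (0 = identical)."""
    keys = set(p) | set(q)
    return 0.5 * sum(abs(p.get(k, 0.0) - q.get(k, 0.0)) for k in keys)

# Hypothetical baseline captured when the model was approved,
# versus labels from this week's production output.
baseline = label_distribution(["approve"] * 80 + ["deny"] * 20)
current = label_distribution(["approve"] * 60 + ["deny"] * 40)

DRIFT_THRESHOLD = 0.1  # illustrative; the governance body sets the real value
if total_variation(baseline, current) > DRIFT_THRESHOLD:
    print("ALERT: output drift exceeds threshold; route to human review")
```

The point is not the specific metric but the process: a recorded baseline, an automated comparison and a defined human escalation path when model behavior shifts.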

The AI governing body should also take the reins on addressing risk that originates outside the organization via vendors, suppliers or partners. Data leaks or cyberattacks can and often do originate from a third-party cloud service provider, for example. Make sure that your security team creates and enforces well-defined cybersecurity standards and requirements that apply to entities within your agency’s business ecosystem. Choose software vendors whose AI development standards and policies address security, privacy and ethical concerns. Be prepared to take your business elsewhere if a vendor can’t, or won’t, meet your security standards.


6. Align Governance Guardrails With Policies and Regulatory Requirements

Keep close tabs on new laws, policies and regulations emerging from the state legislature and regulatory agencies. AI is still in its earliest days, and so is the high-level policy around it. Expect it to change and evolve in the coming years.

7. Prioritize White-Box AI Deployments

Transparency and accountability help ensure that your organization’s AI tools, the LLMs they rely on and their output are readily explainable and understandable.

As Public Sector Network’s white paper notes, “By developing competencies in AI, public institutions can cultivate a culture of innovation and revolutionize the efficiency and flexibility of public services.”

Governance and oversight are critical to tapping that vast potential.
