
Feb 19 2025
Software

Shedding Light on Shadow AI in State and Local Government: Risks and Remedies

Public sector employees might adopt artificial intelligence without official approval.

While many governmental bodies are still contending with the threat of “shadow IT,” we are now starting to hear about “shadow AI.” Both are serious challenges for state and local governments, but shadow AI raises the more worrisome concerns.

Shadow AI refers to the unauthorized or ungoverned use of artificial intelligence tools by public employees. This growing adoption of AI without oversight puts transparency, security and sound decision-making at risk. Improper use can also prove costly, leading to breaches, the inadvertent release of personal or sensitive information, and legal exposure stemming from flawed policies.

Without proper AI governance in place, staff members are left to their own devices. While most public employees remain skeptical and cautious about AI, perhaps overly so, some may be impatient enough to strike out on their own.


Here are some factors that lead to shadow AI:

  • Working around outdated bureaucratic processes
  • Automating document processing without oversight
  • Relying on AI-assisted decision-making in permitting, social services or policing without review
  • Using generative AI for public communication without verification

Shadow AI Poses Significant Risks

The risks of shadow AI are real and can cause great harm through willful neglect, ignorance or carelessness. Here are the likely consequences:

1. Data privacy and security threats

2. Bias and ethical concerns

  • AI systems that reinforce biases in policing, hiring or benefit allocation formulas
  • Lack of accountability in AI-driven decisions

3. Legal and compliance risks

  • AI-generated content that conflicts with Freedom of Information Act requirements or copyright laws
  • Potential violation of state and federal AI regulations

4. Operational disruptions and reliability issues

  • AI-generated misinformation that affects policy recommendations
  • Over-reliance on AI without verification, leading to flawed decisions

5. Erosion of public trust

  • Lack of transparency in AI-assisted governance
  • Ethical dilemmas if AI errors go unaddressed

EXPLORE: AI is transforming the citizen experience.

Shadow IT and shadow AI share a common root: Either sound, enforceable policies are lacking, or the policies that do exist are perceived as too restrictive and cumbersome. However, because most AI interfaces appear less technical (anyone can “play” with a chat prompt), shadow AI has become a more significant problem than shadow IT.

One way to view this shift: Just as word processors and traditional search engines became everyday tools, we are now seeing the rise of “thought processors” and AI agents that can plan and carry out complex tasks.

How Agencies Can Address the Risks of Shadow AI

This leads to the obvious question: How can this be remedied? Here are some possible solutions:

1. Establish AI governance frameworks.

  • Define AI usage policies at state and local levels.  
  • Create a centralized AI oversight committee.

2. Implement AI training for public employees.

  • Educate workers about responsible AI use.  
  • Raise awareness of risks and best practices.

3. Enhance cybersecurity and data protection measures (see the sketch after this list).

4. Mandate transparency and explainability.

  • Require documentation of AI-influenced decisions.  
  • Ensure public-facing AI tools provide clear justifications and disclaimers.

5. Develop AI vendor and procurement standards.

6. Establish enforcement mechanisms.

  • Clearly state penalties for noncompliance or carelessness.
  • Ensure penalties are proportionate to the violation.
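
To make remedies such as data protection, transparency and enforcement concrete, one common pattern is a lightweight gateway that sits between employees and any generative AI service. The sketch below is purely illustrative, not an endorsed implementation: the tool name, regex patterns and in-memory audit log are hypothetical placeholders, and a production deployment would rely on a vetted data loss prevention product and a tamper-evident logging system.

```python
import re
from datetime import datetime, timezone

# Hypothetical allowlist of AI services approved by the agency's
# oversight committee; a real deployment would load this from policy.
APPROVED_AI_TOOLS = {"approved-gov-assistant"}

# Simple regex patterns for common U.S. identifiers. These are
# illustrative only; production guardrails should use vetted DLP tools.
PII_PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
}

audit_log = []  # In practice, write to a tamper-evident audit store.

def gate_prompt(user: str, tool: str, text: str) -> str:
    """Block unapproved tools, redact PII and log the request."""
    if tool not in APPROVED_AI_TOOLS:
        raise PermissionError(f"'{tool}' is not an approved AI service")
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[REDACTED {label}]", text)
    # Recording each request supports the transparency remedy above.
    audit_log.append({
        "when": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "tool": tool,
        "prompt": text,
    })
    return text

# Example: the Social Security number is masked before the prompt
# is forwarded, and the request is recorded for later review.
print(gate_prompt("clerk01", "approved-gov-assistant",
                  "Constituent 123-45-6789 requested a benefit review."))
```

Even a simple gateway like this turns policy language into something enforceable: Unapproved tools are blocked outright, sensitive identifiers never leave the network, and every AI-assisted request leaves a reviewable trail.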

DISCOVER: Lean on these tips to improve AIOps implementation.

What makes the topic of shadow AI so timely is that AI features are quickly being integrated into standard office automation, such as word processing, browser-based search, presentation creation and mathematical computation. Some public employees may find it challenging, personally and professionally, to decide which uses of AI agents and offerings are safe and appropriate. Others may decide it is better to seek forgiveness than permission.

Governments Should Establish Policies and Training

Crafting a meaningful AI policy should not be as difficult as some make it out to be. For the most part, the key elements of a sound policy can be listed on a single page.

But reviews of early AI policies reveal mostly platitudes, goal statements and preambles, with far less about actual expectations and boundaries, let alone any discussion of penalties for failure to comply. At a recent tech leadership event, one commentator noted that if data governance and related policies were better articulated, there would be less need for separate AI guidelines.

Beyond the necessity for practical AI policies and guidelines, there is a demonstrated need for training. According to research published by the Public Technology Institute, over 90% of tech leaders felt a strong need for AI training for themselves and the people they manage. Based on this research and reports from the field, three factors require attention: awareness, policies and training. Recognizing the existence and implications of shadow AI should be a timely call to action, accelerating the creation and enforcement of meaningful AI governance.

UP NEXT: Advanced processors support the rise of ‘invisible’ AI.
