How ChatGPT Can Support State and Local Projects
For some types of state and local government tech projects, ChatGPT can be used practically out of the box. As a force multiplier, ChatGPT and similar large language models can augment the capabilities of agency staff. ChatGPT delivers strong results in tasks such as translating incoming requests, analyzing the sentiment of constituent feedback and summarizing large documents for policymakers.
In a 2023 enterprise architecture technical brief on ChatGPT, the Virginia IT Agency (VITA) advocates such use cases for ChatGPT. For agencies facing bottlenecks in these tasks due to staff or budget constraints, having an AI-based assistant on hand can help existing staff get more done.
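To make the sentiment-analysis use case concrete, here is a minimal sketch of how an agency might batch constituent comments into a single chat request. The message format follows the OpenAI chat-completions convention; the model name, the prompt wording and the sample comments are illustrative assumptions, and an actual call would require an API key.

```python
import json


def build_sentiment_request(comments, model="gpt-4o-mini"):
    """Package constituent comments into one chat-completion request."""
    numbered = "\n".join(f"{i + 1}. {c}" for i, c in enumerate(comments))
    return {
        "model": model,  # assumed model name; swap in whatever the agency licenses
        "messages": [
            {"role": "system",
             "content": "Label each numbered comment as positive, "
                        "negative, or neutral. Reply as JSON."},
            {"role": "user", "content": numbered},
        ],
    }


feedback = [
    "The new permit portal saved me a trip downtown.",
    "I waited 40 minutes and never reached a person.",
]
request = build_sentiment_request(feedback)
print(json.dumps(request, indent=2))

# Sending the request (not run here; requires an API key):
# from openai import OpenAI
# client = OpenAI()  # reads OPENAI_API_KEY from the environment
# response = client.chat.completions.create(**request)
```

Batching comments this way keeps costs down and gives reviewers one structured answer per batch rather than dozens of one-off replies.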
Commercial versions of ChatGPT (and competitors) also allow agencies to fine-tune the model with additional training data. That opens another opportunity for agencies that have large amounts of data to feed into a model. ChatGPT doesn’t itself have statistical analysis capabilities, but when given large chunks of data as input, it can answer questions about patterns and trends, and help to identify anomalies.
So, ChatGPT can give agency staff capabilities that were previously considered uneconomical or impossible — capabilities such as exploring trends in population growth and migration, based on government data. Or, fed with local police data, ChatGPT could help identify patterns in crime data: types of crimes, timing and location of crimes, impact of crime, and demographic factors associated with crime.
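Because raw records rarely fit in a model’s context window, agencies would typically pre-aggregate before prompting. The sketch below, using entirely hypothetical incident data and column names, rolls records up into a compact text table that could then be handed to the model along with a question about patterns.

```python
import csv
import io
from collections import Counter

# Hypothetical incident records; in practice this would be an export
# from an agency records system.
raw = io.StringIO("""\
date,district,type
2023-01-04,North,burglary
2023-01-05,North,burglary
2023-01-05,South,vandalism
2023-02-11,North,burglary
""")

# Aggregate to district/type counts so the summary stays small.
counts = Counter()
for row in csv.DictReader(raw):
    counts[(row["district"], row["type"])] += 1

summary_lines = [f"{d} / {t}: {n} incidents"
                 for (d, t), n in sorted(counts.items())]
prompt = ("Given these incident counts, describe any patterns "
          "by district and type:\n" + "\n".join(summary_lines))
print(prompt)
```

The model then reasons over a few summary lines instead of thousands of rows — which is also where human review matters most, since aggregation choices shape what the model can see.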
LEARN MORE: How government call centers can use conversational AI.
What Are Some of the Large Language Model Shortfalls?
In both examples, ChatGPT can help governments gain insights into what’s happening in their area, and then plan and allocate resources more efficiently.
But both examples also highlight an important caveat for any government use of AI: biases in large language models. Researchers and even casual users have discovered that it’s easy to get biased answers out of ChatGPT, and this raises alarm bells with government policymakers. Models like ChatGPT are trained on human-created data, and they will carry forward any biases contained in that data. This means that ChatGPT, even when trained on an agency’s data, can only be one of several inputs into the policymaking and resource-allocation process. VITA also warns of this pitfall in its technical brief.
Data visualization is another area where a model trained on agency data can help. In many cases, policymakers and analysts use data visualization as a proxy for large-scale data analysis. Generating graphs has become faster, easier and more elegant with tools such as Tableau and Microsoft Power BI, so the public now wades through a glut of graphs and maps, and it can be difficult to understand what all these charts really mean.
By using an AI model trained on government data to first identify the most significant trends and anomalies, visualization experts can focus on developing the graphics that show data that is most relevant to decision-making.
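As a sketch of that triage workflow, the snippet below uses a stubbed JSON answer standing in for a real model response that flags the most significant metrics; only the flagged metrics move on to the visualization team. The metric names and response shape are illustrative assumptions, not a real API format.

```python
import json

# Stand-in for a model's JSON reply listing what it found significant.
model_answer = json.dumps({
    "significant": [
        {"metric": "permit_backlog", "reason": "triple the seasonal norm"},
        {"metric": "pothole_reports", "reason": "sharp uptick in March"},
    ]
})

all_metrics = ["permit_backlog", "pothole_reports", "park_visits",
               "library_checkouts"]
flagged = {item["metric"]
           for item in json.loads(model_answer)["significant"]}

# Only the flagged metrics go on to the dashboard team.
to_visualize = [m for m in all_metrics if m in flagged]
print(to_visualize)  # → ['permit_backlog', 'pothole_reports']
```

The charts themselves would still be built in Tableau or Power BI; the model’s role is simply narrowing the list of what deserves a chart.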
EXPLORE: How to implement a better framework for risk management decision-making.
Will ChatGPT Deliver What Your Agency Needs?
Because ChatGPT delivers such a great demo of chatbot capabilities, it’s natural to want to use OpenAI’s technology in chatbots focused on government services. For most applications, this isn’t the right answer. It’s not that AI-based chatbots are bad — it’s just that the large language model style of text generation in products like ChatGPT isn’t going to deliver the kind of domain-specific, accurate and detailed answers that governments want to offer in their constituent services.
At the same time, tools like ChatGPT, built on large language models and user feedback, cannot be unleashed directly on constituent-facing government websites. Trained on an internet awash in outdated and inaccurate information, ChatGPT can just as easily spread harmful content, breach privacy, deliver fictional “facts” and authoritatively state something completely wrong.
For government IT teams, ChatGPT does offer a great opportunity: It can quickly show CIOs and other stakeholders the type of interaction that a quality AI-based chatbot can deliver. In fact, IT teams can already build high-quality chatbots that have the capabilities that constituent services need — this is off-the-shelf technology that is already widely available.
The consumerization of AI in ChatGPT, though, may be a better internal sales tool for chatbots than IT teams have had in the past. The excitement surrounding ChatGPT has brought AI to the attention of CIOs and other government executives, and they won’t necessarily be interested in the details of exactly which AI technology is chosen to deliver constituent services.