What Is Conversational Artificial Intelligence?
Conversational AI refers to technologies, such as chatbots and virtual agents, that users can talk or type to, as IBM explains. “They use large volumes of data, machine learning, and natural language processing to help imitate human interactions, recognizing speech and text inputs and translating their meanings across various languages,” the company notes.
Conversational AI tools have two key elements: machine learning and natural language processing. These are the principal components “that allow it to process, understand, and generate responses in a natural way,” IBM notes, and “NLP processes flow into a constant feedback loop with machine learning processes to continuously improve the AI algorithms.”
How Does Conversational AI Work?
Natural language processing, as IBM notes, involves the combination of input generation, input analysis, output generation and reinforcement learning.
Unstructured data is “transformed into a format that can be read by a computer, which is then analyzed to generate an appropriate response. Underlying ML algorithms improve response quality over time as it learns,” IBM says.
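The four stages IBM lists can be sketched as a single loop. Every function and variable name below is a hypothetical illustration for this article, not part of any vendor's API:

```python
def analyze(text: str) -> dict:
    """Input analysis: normalize raw text into a structured form."""
    return {"normalized": text.lower().strip()}

def respond(analysis: dict) -> str:
    """Output generation: form a reply from the analysis."""
    return f"Received: {analysis['normalized']}"

history = []  # turns kept as data for later reinforcement/retraining

def turn(user_text: str) -> str:        # input generation: user supplies text
    analysis = analyze(user_text)       # input analysis
    reply = respond(analysis)           # output generation
    history.append((user_text, reply))  # data for the feedback loop
    return reply
```

In a real system, the `analyze` and `respond` stages would call trained models, and the logged history would feed the machine learning processes that improve response quality over time.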
The first step is input generation: users provide input through a website or an app, via voice or text.
“If the input is text-based, the conversational AI solution app will use natural language understanding (NLU) to decipher the meaning of the input and derive its intention,” IBM says. “However, if the input is speech-based, it’ll leverage a combination of automatic speech recognition (ASR) and NLU to analyze the data.”
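The routing IBM describes can be sketched as a simple branch: text goes straight to NLU, while speech is transcribed by ASR first. The `transcribe` and `understand` functions here are illustrative stand-ins, not real ASR or NLU engines:

```python
def transcribe(audio_bytes: bytes) -> str:
    """ASR stand-in: a real system would run a speech-recognition model."""
    return audio_bytes.decode("utf-8")  # placeholder "transcription"

def understand(text: str) -> dict:
    """NLU stand-in: derive a crude intent from the text."""
    intent = "pay_bill" if "bill" in text.lower() else "unknown"
    return {"text": text, "intent": intent}

def handle_input(payload, is_speech: bool) -> dict:
    # Speech input is transcribed first; text input goes directly to NLU.
    text = transcribe(payload) if is_speech else payload
    return understand(text)
```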
An NLU platform “evaluates a text string and attempts to decipher the author’s intent,” Nathan Cartwright, an intelligent customer experience architect at CDW, writes in a blog post. “An intent is the most basic task that is being requested by the customer. A customer may need to do something simple such as paying a bill or scheduling an appointment.”
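Intent detection as Cartwright describes it can be illustrated with a toy classifier. A production NLU model generalizes well beyond literal keyword matching; the intent labels and example vocabularies below are invented purely to show the concept of mapping an utterance to its most basic task:

```python
# Hypothetical intents with example vocabularies (illustration only).
INTENT_EXAMPLES = {
    "pay_bill": {"pay", "bill", "payment", "balance"},
    "schedule_appointment": {"schedule", "appointment", "book", "visit"},
}

def detect_intent(utterance: str) -> str:
    """Map a free-form utterance to the most likely intent, or 'unknown'."""
    words = set(utterance.lower().split())
    # Score each intent by vocabulary overlap; a real NLU platform would
    # use a trained statistical model instead of keyword sets.
    scores = {name: len(words & vocab) for name, vocab in INTENT_EXAMPLES.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "unknown"
```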
Then the conversational AI tool forms a response, and machine learning algorithms refine those responses over time to improve accuracy.
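That refinement loop can be sketched as a feedback log: responses are recorded with a quality signal so the model can be retrained on its weak spots. The function names and the naive bookkeeping here are illustrative only:

```python
from collections import Counter

# Each entry: (customer input, bot response, whether it was helpful).
feedback_log: list[tuple[str, str, bool]] = []

def record_feedback(user_input: str, response: str, was_helpful: bool) -> None:
    """Log one turn with a quality signal for later retraining."""
    feedback_log.append((user_input, response, was_helpful))

def failure_rate_by_input() -> Counter:
    """Surface the inputs the bot most often handles poorly, so they can
    be targeted in the next training pass."""
    return Counter(inp for inp, _, helpful in feedback_log if not helpful)
```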
“Since a chatbot is a form of AI that attempts to emulate human behavior, it must be able to decipher what a customer is requesting or validate data being given without having a list of keywords or phrases,” Cartwright says. NLU platforms are able “to set customer input into a pragmatic format to be passed to backend systems and then present data back to the customer in a human, readable format with context.”
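The round trip Cartwright describes can be sketched as three steps: put the customer's request into a structured format for a backend system, get data back, and render it in readable form with context. The backend lookup and all names below are faked for illustration:

```python
def to_backend_request(intent: str, account_id: str) -> dict:
    """Structured, machine-readable request for a backend system."""
    return {"action": intent, "account": account_id}

def fake_backend(request: dict) -> dict:
    # Hypothetical stand-in; a real system would query billing, scheduling, etc.
    return {"account": request["account"], "balance_due": 42.50}

def render_reply(result: dict) -> str:
    """Present backend data back to the customer in readable form."""
    return (f"Your account {result['account']} has a balance of "
            f"${result['balance_due']:.2f} due.")

reply = render_reply(fake_backend(to_backend_request("pay_bill", "A-1001")))
```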
How Are Government Call Centers Using Conversational AI?
Several government agencies have started using conversational AI technology in the past few years to improve their call centers.
One prominent example is the city of San Jose, Calif., which moved last year to deploy the tools for its call centers. The city had been missing its target thresholds for response times, according to Rob Lloyd, San Jose’s CIO and deputy city manager. The city wanted to resolve nonemergency calls faster and take the load off its 911 contact center staff.
One of the key tasks San Jose focused on was deploying virtual agents to quickly resolve specific questions from residents.
As part of that endeavor, the city explored the evolution of translation technologies using neural networks. “In our research, we did find the language and literal translation as one of the human experience issues that people have when they’re dealing with their government,” Lloyd says.
That’s especially important in San Jose, which has sizable immigrant populations, including the largest Vietnamese population of any city outside of Vietnam. Spanish and Vietnamese are the two most prominent non-English languages spoken in the city. San Jose’s first pass at a constituent relationship management solution handled Spanish well but not Vietnamese, a complex language with influences that include Cantonese and French. In one test, a notice about fireworks was translated as a notice about a bomb.
Still, the city kept at it. “We have a policy here to speak to our community where they are, and there are enough accessibility barriers. Language should not be one,” Lloyd says.