
Aug 01 2024
Software

How Prompt Engineering Can Support Successful AI Projects

States’ artificial intelligence initiatives can benefit from context and instructions.

Artificial intelligence is quickly making its way into state and local governments. Understanding how to effectively query the models is fast becoming an essential skill.

Welcome to the realm of prompt engineering. It’s a growing discipline governing how to best draw usable output and avoid the fabrications, biases and misrepresentations to which AI platforms are prone.

Things are moving so fast that the National Governors Association held a recent webinar to assess and discuss the “potential and risks of AI applications and policies to help ensure its responsible use in the public sector.” Many states are all in on the technology. Washington state is using generative AI for software code development. Ohio’s state website has a chatbot, and Utah is using AI for threat detection on its computer platforms.

But nobody wants useless responses from an AI system.

The solution is prompt engineering, “the art and science of designing and optimizing prompts” to get better responses from AI models, according to Google Cloud. “By carefully crafting prompts, you provide the model with context, instructions, and examples that help it understand your intent and respond in a meaningful way.”


How Does Prompt Engineering Direct an AI Model?

Experts say that getting better answers out of AI requires a basic understanding of how large language models work. LLMs are platforms that can recognize and generate text and are educated by consuming enormous data sets and assessing past performance.

“An LLM is a computer program that has been fed enough examples to be able to recognize and interpret human language or other types of complex data,” says Cloudflare. “Many LLMs are trained on data that has been gathered from the Internet — thousands or millions of gigabytes’ worth of text.”

Central to the technology is the concept of foundation models, which are rapidly broadening the functionality of AI. While earlier AI platforms were trained on specific data sets to produce a focused but limited output, the new approach throws the doors wide open.

In simple — and somewhat unsettling — terms, a foundation model can learn new tricks from unrelated data.

“What makes these new systems foundation models is that they, as the name suggests, can be the foundation for many applications of the AI model,” says IBM. “Using self-supervised learning and transfer learning, the model can apply information it’s learnt about one situation to another.”

Given the massive amounts of data fed into AI models, it isn’t surprising that they need guidance to produce usable output.

“Machine learning is a mathematical parlor trick,” says Mike Miller, director of product management for Amazon Web Services. “Prompt engineering may seem simple on the surface, but it requires quite a bit of nuance.”

READ MORE: Officials turn to synthetic data to train AI.

How to Write Better Prompts

Experts agree that the keys to a successful prompt are clarity and precision.

“It’s like telling high school students what to do,” says Yoon Kim, an assistant professor at the Massachusetts Institute of Technology. “Give them as much instruction as possible.”

Though popular AI platforms can respond to “zero-shot” queries (with no examples or context), the results can be underwhelming. A “one-shot” command that provides a key piece of information helps focus the result.

Here’s an example of a zero-shot prompt to OpenAI’s GPT-3: Describe a tomato.

Output: A tomato: plump, ripe, and bursting with juicy sweetness, its vibrant red skin concealing a flavorful and succulent treasure within.

Here’s the same question in a one-shot prompt format: Describe a tomato in the style of a child’s nursery rhyme.

Output:

In the garden bright and fair

Lies a tomato, plump and rare.

Red and round, it gleams so true,

Nature’s gift for me and you!

Adding an example can focus the response further. Query: Describe a tomato in the style of the child’s nursery rhyme Humpty Dumpty: Humpty Dumpty sat on a wall, Humpty Dumpty had a great fall; all the king’s horses and all the king’s men couldn’t put Humpty together again.

Output:

Juicy red fruit hung on the vine,

Juicy red fruit looked so fine;

Picked by a hand and dropped to the ground,

Splat! went the fruit, all around.

All the kids and all their friends

Couldn’t make it whole again.

Perhaps not Mother Hubbard-worthy, but not entirely without merit. On balance, the one-shot version of the tomato description is the strongest, and being able to judge those differences is itself part of the skill.
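The progression above — zero-shot, one-shot, then example-augmented — can be sketched as a simple prompt builder. This is a minimal illustration, not any vendor’s API; the function and parameter names here are hypothetical, and the resulting strings would be sent to a chat-style model as the user message.

```python
# Illustrative sketch of zero-, one-, and example-augmented prompts.
# build_prompt is a hypothetical helper, not part of any LLM library.

def build_prompt(task, style=None, examples=None):
    """Assemble a prompt from a task, an optional style hint, and optional examples."""
    parts = [task]
    if style:
        parts.append(f"Respond in the style of: {style}")
    for ex in examples or []:
        parts.append(f"Example: {ex}")
    return "\n".join(parts)

# Zero-shot: the bare task, no context.
zero = build_prompt("Describe a tomato.")

# One-shot: the task plus one key piece of guiding context.
one = build_prompt("Describe a tomato.", style="a child's nursery rhyme")

# Example-augmented: the task, a style, and a concrete sample to imitate.
few = build_prompt(
    "Describe a tomato.",
    style="the nursery rhyme Humpty Dumpty",
    examples=["Humpty Dumpty sat on a wall, Humpty Dumpty had a great fall..."],
)
```

Each added layer narrows the model’s options, which is exactly why the nursery-rhyme outputs above look so different from the zero-shot description.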

Essential to creating successful prompts is having a clear idea of what high-quality output looks like before you start, says Sarah Rodenbeck, lead research data scientist at Purdue University. “You need to understand the area you are asking the model to create.”

This demands some good old-fashioned human intelligence from the user at the outset.

DISCOVER: Cities turn to AI to enhance public services.

What Are Some Other Tricks for Good Prompt Engineering?

AI models benefit from clear parameters. One of the most basic is length. OpenAI offers some advice: “The targeted output length can be specified in terms of the count of words, sentences, paragraphs, bullet points, etc. Note however that instructing the model to generate a specific number of words does not work with high precision. The model can more reliably generate outputs with a specific number of paragraphs or bullet points.”
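OpenAI’s advice — state length in structural units such as bullet points or paragraphs rather than word counts — can be folded into a prompt template. A minimal sketch, assuming a hypothetical helper name; nothing here is a real library call:

```python
# Hypothetical sketch: append a structural length instruction to a prompt,
# per the guidance above that bullet/paragraph counts are followed more
# reliably than word counts.

def with_length_constraint(task, bullets=None, paragraphs=None):
    """Return the task with a structural length instruction appended."""
    if bullets:
        return f"{task} Answer in exactly {bullets} bullet points."
    if paragraphs:
        return f"{task} Answer in {paragraphs} short paragraphs."
    return task

prompt = with_length_constraint("Summarize prompt engineering.", bullets=3)
```

Asking for “3 bullet points” gives the model a countable target it can check against, which is why it tends to hit that mark more often than a 50-word limit.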

AI models are also capable of creating their own prompts on request, and there is some research into the area. However, you might not want to hang your career on the results just yet.

Prompt to GPT-3: Write a prompt to explain AI prompt engineering. 20 words.

Output: Define AI prompt engineering: refining input queries to guide AI models, enhancing output quality, crucial for optimizing performance in various tasks.

Output after feeding the above prompt into the platform: Your definition is spot-on! It succinctly encapsulates the essence of AI prompt engineering, emphasizing its importance in refining input queries to improve AI model performance across different tasks.

Rather a circular dead end, but at least ChatGPT liked it.

Numerous organizations now offer prompt engineering training courses and bootcamps, many of which are free, but experts agree that trial and error is an inescapable part of the equation.

“Everyone I’ve talked to says you’ve got to practice,” Rodenbeck says.
