Crafting effective prompts for Large Language Models (LLMs) is a blend of creativity and structure — a practice known as Prompt Engineering. This blog explores why prompt writing matters, differentiates it from prompt tuning, and shares practical techniques to improve outcomes from models like ChatGPT. Covering everything from role-playing and formatting instructions to common use cases and guardrails, this article offers a comprehensive guide to help both beginners and experienced users master the art of prompting.
Prompts are the key that unlocks value from large language models (LLMs). Writing good prompts falls somewhere between science and art, hence the term Prompt Engineering (also known as prompt design or in-context learning).
You may have come across many prompt-writing cheat sheets on social media, and prompt engineering is an active research area with plenty of papers on arxiv.org. With all this information available, I debated whether another article on prompting was needed. Then, in a recent webinar on Generative AI, I was asked: why care about prompt engineering? It may not seem so, but creating quality prompts is non-trivial; there is a lot more to prompt engineering than is apparent to the uninitiated.
I figured this post could shed light on prompts and also help me bring my scattered notes and bookmarks on the subject to one page.
A prompt is simply the input text to a large language model such as ChatGPT. A simple prompt and model output example:
Prompt → Write a tagline for an ice cream shop
Completion (Model output) → We serve up smiles with every scoop!
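Programmatically, sending a prompt and reading back a completion is a single API call. Below is a minimal sketch using the openai Python package (v1.x interface); the model name is illustrative, and an API key is assumed to be set in the environment:

```python
from openai import OpenAI

client = OpenAI()  # assumes the OPENAI_API_KEY environment variable is set

# Send the prompt as a single user message and print the model's completion
response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # illustrative model name; substitute your own
    messages=[{"role": "user", "content": "Write a tagline for an ice cream shop"}],
)
print(response.choices[0].message.content)
```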
A prompt can be a few words or a few pages long! Getting a model to produce the desired content and format usually takes some experimentation.
A prompt looks like a monolithic block of text, but most prompts have specific parts. The key parts of a prompt are:
- Instruction: the task you want the model to perform
- Context: background information that steers the answer
- Input data: the content the model should operate on
- Output indicator: the desired format or style of the response
Here are a few examples of how you can design a prompt by breaking down its parts. Note that you can continue refining the output as needed.
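For instance, a summarization prompt could break down as follows:

Instruction → Summarize the text below in two sentences.
Context → The summary will appear in a newsletter for beginners, so avoid jargon.
Input data → <paste the article text here>
Output format → Plain text, no bullet points.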
The chat prompt for OpenAI has three parts:
- System message: sets the assistant's behavior or persona
- User message: the question or instruction from the user
- Assistant message: the model's earlier replies, included to carry context across turns
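In code, these parts map to a list of role-tagged messages. A minimal sketch with the openai Python package, where the model name and message contents are illustrative:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

messages = [
    # System message: sets the assistant's persona and ground rules
    {"role": "system", "content": "You are a helpful travel assistant. Keep answers brief."},
    # User message: the actual question or instruction
    {"role": "user", "content": "Suggest a 3-day itinerary for Rome."},
    # Assistant messages (the model's earlier replies) would be appended here
    # on subsequent turns to preserve conversation context.
]

response = client.chat.completions.create(model="gpt-3.5-turbo", messages=messages)
print(response.choices[0].message.content)
```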
Read on to see more examples of prompts with explanations. But let's tackle the big question first: why prompts?
By definition, LLMs are capable of doing a lot; there is a reason they are large. If you don't provide a good prompt, the answer may not meet your needs or expectations. Think of it this way: you are searching a very large space. By asking a precise question and specifying how you want the answer, you help both the model and yourself!
Small changes in prompt wording, even the order of words, can make a significant difference in the quality of the response. LLMs pick up subtle patterns in the instructions and examples and adjust their answers accordingly.
In short, LLM results are sensitive to a prompt's content: the choice of words, their order, and the specific instructions, for instance, how often an instruction is repeated.
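As a quick illustration, compare these two wordings of the same request:

Prompt A → Write a tagline for an ice cream shop
Prompt B → Write a family-friendly tagline for an ice cream shop, five words or fewer

The second version pins down tone and length, and the two will typically yield noticeably different completions.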
Prompts that work for one LLM may not work the same way on another, because different LLMs are trained on different data and may have different architectures. For closed-source models, such as those from OpenAI, little about the training data or architecture has been disclosed publicly.
Now that we know why good prompt writing matters, let's learn more about techniques for Prompt Engineering. But first, one more clarification.
Prompt Engineering means that the model weights are not changed; in other words, the model stays frozen, and the only interaction with the model is through the prompt, which may include zero, one, or a few examples written in natural language (hence zero-shot, one-shot, and few-shot prompting).
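For example, a few-shot prompt packs worked examples directly into the text; given the pattern below, a model would typically complete the last line with Positive:

Prompt →
Classify each review as Positive or Negative.
Review: The battery lasts all day. → Positive
Review: The screen cracked within a week. → Negative
Review: Setup took five minutes and everything just worked. →

Completion (Model output) → Positive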
Prompt Tuning, on the other hand, is when you supply hundreds of data examples in a specific format (e.g., JSON), which changes the model slightly. Naturally, with more data, prompt tuning generally improves performance.
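For contrast, a tuning dataset is prepared offline rather than typed into the prompt. Here is a minimal sketch of what such a file might look like; the prompt/completion field names are an assumption, since the exact schema varies by provider:

```python
import json

# Hypothetical training examples; real tuning datasets contain hundreds of these.
examples = [
    {"prompt": "Tagline for an ice cream shop:", "completion": "We serve up smiles with every scoop!"},
    {"prompt": "Tagline for a bookstore:", "completion": "Stories for every shelf and soul."},
]

# One JSON object per line (JSONL), a common interchange format for tuning data.
with open("train.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")
```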
The focus of this post is on prompt engineering because it requires no pre-processing, data preparation, or coding skills; anyone can do prompt engineering.
I’ve summarized my recommendations for prompt crafting here.