
The Art and Science of Crafting Effective Prompts for LLMs

Apr 16, 2025

Crafting effective prompts for Large Language Models (LLMs) is a blend of creativity and structure — a practice known as Prompt Engineering. This blog explores why prompt writing matters, differentiates it from prompt tuning, and shares practical techniques to improve outcomes from models like ChatGPT. Covering everything from role-playing and formatting instructions to common use cases and guardrails, this article offers a comprehensive guide to help both beginners and experienced users master the art of prompting.


Babar M Bhatti

Prompts are the key that unlocks value from large language models (LLMs). Writing good prompts for LLMs falls somewhere between science and art, hence the term Prompt Engineering, also known as prompt design or in-context learning.

You may have come across many prompt-writing cheat sheets on social media. Prompt engineering is also an active research area, with plenty of papers on arxiv.org. With all this information available, I debated the need for writing an article on prompting. In a recent webinar on Generative AI, I was asked: why care about prompt engineering? It may not seem so, but creating quality prompts is non-trivial; there is a lot more to prompt engineering than meets the eye.

I figured that this post could shed light on prompts and also help me bring my scattered notes and bookmarks about prompts to one page.

Key topics and questions this article should help with:

  • Why do we need prompt engineering?
  • Prompt engineering/design vs Prompt tuning
  • Prompts for common scenarios e.g. text completion, classification, summarization
  • What are the guidelines for effective prompts? What have we learned from researchers and successful prompt writers?
  • Planning a prompt: have a clear use case that is consistent with the strengths of LLMs. You should also think about the risk of incorrect answers, harm (e.g. bias), or unsafe behavior.
  • What are some effective ways to control model output behavior: role playing, style and persona specification
  • Getting the output in a specific format/style (e.g. asking "do you understand?", requesting step-by-step answers)
  • Effect of interaction and feedback on models such as ChatGPT
  • Guardrails — avoiding unsafe and incorrect responses
  • Prompting tools and guides

What is a Prompt for an LLM?

A prompt is simply the input text to a large language model such as ChatGPT. A simple prompt and model output example:

Prompt → Write a tagline for an ice cream shop
Completion (Model output) → We serve up smiles with every scoop!

A prompt can be a few words or a few pages long! It usually takes some experimentation to get a model to produce the desired content and format.

A prompt may look like a monolithic block of text, but most prompts have distinct parts. The key parts of a prompt are:

  • Context or background
  • Instructions
  • Task(s)
  • More instructions (optional)
  • Refinement (if needed)

Here is an example of how you can design a prompt by breaking it down into its parts; a sketch follows below. Note that you can continue refining the output as needed.

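To make this concrete, here is a minimal sketch of a prompt assembled from those parts. The ice-cream-shop scenario and all wording are invented for illustration:

```python
# Illustrative prompt assembled from the parts above (context, instructions,
# task, and additional formatting instructions); the scenario is invented.
prompt = """Context: I run a small neighborhood ice cream shop that is
launching a new seasonal menu.

Instructions: Write in a friendly, playful tone and keep each item short.

Task: Suggest 5 names for a mango-flavored sundae.

More instructions: Return the names as a numbered list, one per line."""
```

If the first output misses the mark, a refinement such as "make the names shorter and punchier" can be sent as a follow-up.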

The chat prompt for OpenAI models has three parts:

  • System — which corresponds to Context/Instructions
  • User — example of what the user will input
  • Assistant — example of what the model will output
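As a minimal sketch of these three parts using OpenAI's Python SDK (the model name is an assumption; substitute any chat model you have access to):

```python
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model name; use any chat model available to you
    messages=[
        # System: the context and instructions for the model
        {"role": "system", "content": "You are a helpful marketing copywriter."},
        # User: an example of what the user will input
        {"role": "user", "content": "Write a tagline for an ice cream shop."},
        # Assistant: an example of the desired output
        {"role": "assistant", "content": "We serve up smiles with every scoop!"},
        # The actual request the model should complete
        {"role": "user", "content": "Now write a tagline for a coffee shop."},
    ],
)
print(response.choices[0].message.content)
```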

Read on to see more examples of prompts with explanations. But let's tackle the big question first: why prompts?

Why do we need Prompt Engineering?

By definition, LLMs are capable of doing a lot; there is a reason they are large. If you don't provide a good prompt, the answer may not meet your needs or expectations. Think of it this way: you are searching a very large space. By asking a precise question and specifying how you want the answer, you help both the model and yourself!

Small changes in the prompt wording, even the order of words, can make a significant difference in the quality of the response. LLMs pick up subtle patterns in the instructions and examples and adjust their answers accordingly. Results are also sensitive to the specific instructions, for instance how often an instruction is repeated.

Prompts that work for one LLM may not work the same way with a different LLM, because different LLMs have been trained on different data and may have different architectures. For closed-source models, such as those from OpenAI, little of this information has been disclosed to the public.

Now that we know why we need good prompt writing, let's learn more about techniques for Prompt Engineering. But first, we need to clarify one more thing.

How is Prompt Engineering different from Prompt Tuning?

Prompt Engineering means that the model weights are not changed (in other words, the model stays frozen) and the only interaction is through prompts to the model, which may include zero, one, or a few examples written as natural-language text.
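For illustration, here is what a few-shot prompt can look like: a handful of labeled examples embedded directly in the text. The reviews and labels below are invented for the sketch:

```python
# A few-shot sentiment-classification prompt; the labeled examples are
# invented, not from any real dataset.
few_shot_prompt = """Classify the sentiment of each review as Positive or Negative.

Review: The battery lasts all day and the screen is gorgeous.
Sentiment: Positive

Review: It stopped working after two days.
Sentiment: Negative

Review: Setup was quick and the support team was helpful.
Sentiment:"""
# The model is expected to complete the last line with "Positive".
print(few_shot_prompt)
```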

Prompt Tuning, on the other hand, is when you supply hundreds of training examples in a specific format (e.g. JSON), which changes the model slightly. With more data, prompt tuning generally improves performance.
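As a rough illustration of the difference, tuning starts from a file of training examples rather than a single prompt. The sketch below writes invented examples in the JSON-lines chat format used by OpenAI's fine-tuning API:

```python
import json

# Invented training examples in OpenAI's chat fine-tuning format
# (one JSON object per line in a .jsonl file).
examples = [
    {"messages": [
        {"role": "system", "content": "You write catchy shop taglines."},
        {"role": "user", "content": "Tagline for an ice cream shop"},
        {"role": "assistant", "content": "We serve up smiles with every scoop!"},
    ]},
    {"messages": [
        {"role": "system", "content": "You write catchy shop taglines."},
        {"role": "user", "content": "Tagline for a bookstore"},
        {"role": "assistant", "content": "Stories you can hold in your hands."},
    ]},
]

with open("train.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")
```

A file like this is then uploaded to a tuning job, after which the adjusted model can be called like any other.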

The focus of this post is on prompt engineering because it does not require any pre-processing, data preparation, or coding skills: anyone can do prompt engineering.

Best Practices for Prompt Engineering

I’ve summarized my recommendations for prompt crafting here.

  1. Be precise with what you want the model to do: use good grammar and specific words that match your intent. Avoid broad and open-ended prompts, and define any technical terms or abbreviations to avoid confusion.
  2. Show and tell: provide quality examples / data as if you were instructing a human or training a supervised model.
  3. If needed, give the model an identity / persona / role or provide a characteristic situation.
  4. Tell the model how it should behave: helpful, witty, in the style of [name].
  5. Tell the model about the audience (explain quantum computing to a 7-year-old) and optionally, how you want the output to be formatted (as a list, on separate lines, in markup, in a table with column 1 for x, etc.)
  6. Break down a complex or reasoning problem into smaller steps (it's OK to use bullet lists); it has been found helpful to ask the model to "provide step-by-step answers with explanations."
  7. Interactive Refinement: interactivity is a powerful feature, so use it to validate (do you understand?), allow follow-up questions (by mentioning this in the prompt), provide clarifications or elaboration, feedback (good job), or redirection. A related concept is prompt chaining, where we feed the results of one prompt into the next and progressively improve the result. A prompt combining several of these practices is sketched below.
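To close, here is a minimal sketch of a prompt that combines several of these practices (persona, audience, output format, and interactive refinement). The wording is illustrative rather than a canonical template:

```python
# Illustrative prompt combining persona, audience, output format,
# step-by-step reasoning, and interactive refinement.
prompt = """You are a patient science teacher (persona).

Explain how rainbows form to a 7-year-old (audience).

Think through the explanation step by step, then present the answer
as a numbered list of at most 4 short points (format).

If anything in this request is unclear, ask me before answering
(interactive refinement)."""
print(prompt)
```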