Mastering Prompt Engineering: Techniques to Guide LLMs Effectively
This article explains the fundamentals of prompt engineering for large language models, covering LLM output configuration, length and sampling controls, various prompt types, chain‑of‑thought and tree‑of‑thought reasoning methods, and practical best‑practice guidelines for creating high‑quality prompts.
Prompt Engineering Overview
Most large language models (LLMs) are decoder‑only transformers: autoregressive token‑prediction engines that generate text when given a prompt. Prompt engineering is the systematic design and optimization of high‑quality prompts to steer LLMs toward desired outputs.
LLM Output Configuration
LLM APIs (e.g., OpenAI) expose parameters for controlling output length and sampling.
OpenAI API example
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "Hello"}],
    max_tokens=1000,       # maximum output length
    temperature=0.7,       # sampling temperature
    top_p=1.0,             # nucleus sampling
    frequency_penalty=0,   # frequency penalty
    presence_penalty=0,    # presence penalty
    stop=["\n", "###"],    # stop sequences
)
Output Length Control
Max Length / Max Tokens – caps the number of generated tokens.
Stop Sequences – halts generation when a specified string (e.g., "\n", "###") appears.
Output Sampling Control
Temperature : Controls randomness (0‑2 in the OpenAI API). Low values yield deterministic output; high values increase creativity. Temperature = 0 always selects the highest‑probability token.
Top‑K : Restricts sampling to the K highest‑probability tokens (commonly 1‑50). Larger K gives more diversity; K = 1 is equivalent to greedy decoding.
Top‑P (nucleus sampling) : Limits selection to the smallest set of tokens whose cumulative probability exceeds top_p (0‑1). Lower values give precise output; higher values increase diversity.
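The three sampling controls above can be sketched in plain Python over a toy distribution. This is an illustrative stdlib-only sketch of the math, not the provider's actual decoder implementation:

```python
import math

def softmax(logits, temperature=1.0):
    # Scale logits by 1/temperature, then normalize into probabilities.
    # Lower temperature sharpens the distribution toward the top token.
    scaled = [x / temperature for x in logits]
    m = max(scaled)
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def top_k_filter(probs, k):
    # Keep only the k most probable tokens, then renormalize.
    keep = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)[:k]
    total = sum(probs[i] for i in keep)
    return {i: probs[i] / total for i in keep}

def top_p_filter(probs, p):
    # Keep the smallest set of top tokens whose cumulative probability >= p.
    order = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    kept, cum = [], 0.0
    for i in order:
        kept.append(i)
        cum += probs[i]
        if cum >= p:
            break
    total = sum(probs[i] for i in kept)
    return {i: probs[i] / total for i in kept}

logits = [2.0, 1.0, 0.5, 0.1]  # toy scores for 4 candidate tokens
print(top_k_filter(softmax(logits, temperature=0.7), k=2))
print(top_p_filter(softmax(logits, temperature=1.5), p=0.9))
```

Running this shows why low temperature plus small K behaves almost deterministically: nearly all probability mass concentrates on the single best token.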
Prompt Types
System Prompt : Sets the model's overall capability or goal (e.g., translation, coding).
Role Prompt : Assigns a persona (e.g., expert programmer, medical doctor).
User Prompt : The raw user input.
Context Prompt : Supplies conversation history or relevant details to avoid “forgetting”.
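In chat-style APIs these prompt types map naturally onto the messages array. A minimal sketch (OpenAI-style roles; the conversation content is illustrative):

```python
# System + role prompt: sets the model's overall goal and persona.
system_prompt = "You are an expert Python programmer."

# Context prompt: prior turns supplied so the model doesn't "forget".
context = [
    {"role": "user", "content": "I'm building a CLI tool in Python."},
    {"role": "assistant", "content": "Great, what does the tool do?"},
]

# User prompt: the raw new input.
user_prompt = "How do I parse command-line flags?"

messages = (
    [{"role": "system", "content": system_prompt}]
    + context
    + [{"role": "user", "content": user_prompt}]
)
```

The full `messages` list is sent on every request; the model itself is stateless, so the context prompt is what carries conversational memory.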
Sample‑Learning Prompt Types
Zero‑Shot Learning
Uses only an instruction (and optional context) without examples. This is the default for many chatbot interactions.
Few‑Shot Learning
When zero‑shot results are unsatisfactory, add 1‑5 examples to the prompt. Demonstrations help the model mimic the desired output format or structure.
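A few-shot prompt is just the instruction followed by input/output demonstrations and then the new input. A sketch for a sentiment-classification task (the task and labels are illustrative):

```python
# Demonstrations the model should mimic: (input, expected output) pairs.
examples = [
    ("The movie was a masterpiece.", "POSITIVE"),
    ("I want my money back.", "NEGATIVE"),
    ("It was fine, nothing special.", "NEUTRAL"),
]

def few_shot_prompt(new_input):
    lines = ["Classify the sentiment as POSITIVE, NEGATIVE, or NEUTRAL.", ""]
    for text, label in examples:
        lines.append(f"Review: {text}")
        lines.append(f"Sentiment: {label}")
        lines.append("")
    lines.append(f"Review: {new_input}")
    lines.append("Sentiment:")  # the model completes the label
    return "\n".join(lines)

print(few_shot_prompt("Best purchase I made all year."))
```

Keeping every demonstration in the exact same format is what lets the model infer the expected output structure.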
Reasoning Prompts
Chain‑of‑Thought (CoT)
Chain‑of‑thought prompting was introduced by Wei et al. (2022). In the zero‑shot variant (Kojima et al., 2022), simply appending a phrase like “Let's think step by step” can raise accuracy on arithmetic reasoning benchmarks from ~17 % to >70 %.
Paper: https://arxiv.org/pdf/2201.11903
Further refinements include:
Atomic Decomposition : Break a complex problem into indivisible sub‑steps (e.g., “First…, then…, therefore…”).
Explicit Placeholders : Use blanks (e.g., ___) to guide the model's answer.
# Example CoT + Few‑shot Prompt
Q: When my brother was 2 years old, I was double his age. Now I am 40 years old. How old is my brother? Let’s think step by step.
A: When my brother was 2 years old, I was 2*2 = 4 years old. The age difference is 2 years. Now I am 40, so my brother is 40‑2 = 38 years old. The answer is 38.
Tree of Thoughts (ToT)
ToT extends CoT by exploring multiple reasoning branches at each step, providing diversity while retaining consistency.
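The branch-and-prune loop behind ToT can be sketched as a beam search over partial reasoning chains. Here `propose` and `score` are hypothetical placeholders standing in for LLM calls, not a real API:

```python
def propose(chain, n=3):
    # Placeholder: a real implementation would prompt the model for n
    # alternative next reasoning steps given the chain so far.
    return [chain + [f"step-{len(chain)}-{i}"] for i in range(n)]

def score(chain):
    # Placeholder: a real implementation would ask the model to rate
    # how promising this partial chain of thought is.
    return -len(chain[-1])

def tree_of_thoughts(question, depth=3, beam=2):
    frontier = [[question]]  # each element is a partial reasoning chain
    for _ in range(depth):
        # Expand every surviving chain into several candidate next steps...
        candidates = [c for chain in frontier for c in propose(chain)]
        # ...then keep only the `beam` highest-scoring partial chains.
        frontier = sorted(candidates, key=score, reverse=True)[:beam]
    return frontier[0]
```

The contrast with CoT is visible in the loop: CoT commits to one chain, while ToT keeps several alive and prunes by score at every depth.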
Prompting for Reasoning Models (Slow Thinking)
Fast Thinking / System 1 : Prompt must tell the model what to do and how to do it (e.g., “think step by step”).
Slow Thinking / System 2 : The model already performs internal step‑by‑step reasoning; the prompt should precisely define the goal and constraints.
Best Practices
Provide Few Examples
Prefer single‑ or few‑shot prompts; they act as reference points that improve accuracy, style, and tone.
Use Simple Conversational Language
# Good
I am traveling in New York with two 3‑year‑old kids. Where should we go?
# Bad
Act as a travel guide. Describe good places in Manhattan for a 3‑year‑old child.
Prefer Commands Over Constraints
Command : Explicitly state the desired format, style, or content.
Constraint : State what the model should not do. Commands are generally more effective.
# Good
Generate a one‑paragraph blog about the top 5 video game consoles, mentioning console, manufacturer, year, and total sales.
# Bad
Generate a one‑paragraph blog about the top 5 video game consoles. Do not list game titles.
Use Constraints When Needed
Constraints are useful for hallucination suppression, safety, or strict output formatting (e.g., JSON, Markdown).
Role‑playing: "You are a senior nutritionist designing a fitness diet."
Sample‑learning: Provide input‑output examples within the prompt.
Formatting: Require JSON, YAML, etc.
Hallucination suppression: Instruct the model to answer only based on known information and to indicate uncertainty.
# Example JSON extraction prompt
Extract all to‑do items from the meeting notes, output as:
1. "Owner: Task (deadline)"
2. Only items, no explanations.
3. If no explicit deadline, mark as "To be confirmed".
Prompt Templates and Variable Substitution
Many agent frameworks support PromptTemplate with placeholders for reusable prompts.
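Under the hood such templates reduce to string substitution. A minimal sketch using Python's built-in formatting (`render` is an illustrative helper, not a framework API):

```python
# A reusable prompt template with a {city} placeholder, mirroring what
# frameworks like LangChain's PromptTemplate provide.
template = "You are a travel guide. Tell me a fact about the city: {city}"

def render(template, **variables):
    # Substitute each {name} placeholder with its variable value.
    return template.format(**variables)

prompt = render(template, city="Amsterdam")
```

Frameworks add validation on top (e.g., erroring when a placeholder is left unfilled), but the substitution itself is this simple.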
# Prompt
VARIABLES
{city} = "Amsterdam"
PROMPT
You are a travel guide. Tell me a fact about the city: {city}
JSON Output for Programmatic Use
Returning a consistent JSON structure simplifies downstream parsing and reduces hallucinations. Libraries such as json‑repair can fix malformed JSON.
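On the consuming side, model output should be parsed defensively, since even well-prompted models occasionally emit malformed JSON. A stdlib-only sketch (`parse_model_json` is an illustrative helper; json‑repair is noted as the optional fallback):

```python
import json

def parse_model_json(raw):
    # Parse model output as JSON; return None on failure so callers
    # can retry or repair rather than crash.
    try:
        return json.loads(raw)
    except json.JSONDecodeError:
        # Optional upgrade if json-repair is installed:
        #   from json_repair import repair_json
        #   return json.loads(repair_json(raw))
        return None

todo = parse_model_json('{"owner": "Alice", "task": "Draft report", "deadline": "Friday"}')
```

Pairing a strict output-format prompt with a lenient parser like this covers both directions of the contract.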
Automatic Prompt Engineering (APE)
APE uses an LLM to generate candidate prompts automatically, followed by human evaluation. It can also create fine‑tuning datasets.
# Prompt
We have a band merchandise t‑shirt webshop. Generate 10 paraphrases of the order request "One Metallica t‑shirt size S".
# Output
1. I’d like to purchase a Metallica t‑shirt in size small.
2. Can I order a small‑sized Metallica t‑shirt?
3. I’d like to buy one Metallica t‑shirt in small.
4. One Metallica shirt, size small, please.
5. I want one Metallica t‑shirt size small.
6. I’d like to place an order for one Metallica t‑shirt in small.
7. Small Metallica t‑shirt, one please.
8. I’m looking for one Metallica t‑shirt in small.
9. Can you get me one Metallica t‑shirt in small?
10. One Metallica small t‑shirt, please.