Mastering LangChain PromptTemplates to Reduce AI Hallucinations

This tutorial walks through the concept of PromptTemplate in LangChain, demonstrates how to build chat prompt templates, use message placeholders, apply Few‑Shot prompting and ExampleSelector techniques, and shows concrete code and output examples that help mitigate large‑language‑model hallucinations.


1. What is a PromptTemplate

A PromptTemplate is a string template that contains placeholders for variables; when the variables are supplied, the final prompt is generated. It typically includes the instruction sent to the LLM, an example of the expected answer format, and the actual user question.

2. Creating a PromptTemplate

LangChain provides the PromptTemplate class (or ChatPromptTemplate for multi‑message chats). The class accepts a prompt string that defines the template.

from langchain_core.prompts import ChatPromptTemplate

# Build a chat prompt with system, human and AI messages
chat_template = ChatPromptTemplate.from_messages([
    ("system", "You are an AI assistant, your name is {name}"),
    ("human", "Hello"),
    ("ai", "Hello, I am {name}, happy to help you"),
    ("human", "{user_input}")
])

message = chat_template.format_messages(name="Xiao Ai", user_input="What is your name?")
print(message)

The call yields four message objects, one per template entry, covering three message types:

SystemMessage – sets the LLM's role.

HumanMessage – the user's input.

AIMessage – the model's response.

3. Context Injection with MessagesPlaceholder

MessagesPlaceholder acts as a slot for a list of messages, allowing you to inject a pre‑defined conversation context at a specific point in the template.

from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_core.messages import SystemMessage, HumanMessage, AIMessage

chat_template = ChatPromptTemplate.from_messages([
    ("system", "You are an AI assistant"),
    MessagesPlaceholder("msgs")
])

msgs = [
    SystemMessage(content='Your name is Xiao Ai'),
    HumanMessage(content='Hello'),
    AIMessage(content='Hello, I am an AI assistant, happy to serve you')
]
# The dict key must match the placeholder's variable name ("msgs")
print(chat_template.invoke({"msgs": msgs}))

4. Few‑Shot Prompting

FewShotPromptTemplate lets you prepend a small set of examples to the prompt, guiding the model toward the desired answer style and reducing hallucinations.

from langchain.prompts.few_shot import FewShotPromptTemplate
from langchain.prompts import PromptTemplate

examples = [
    {"question": "Who lived longer, Mehmed II or Einstein?", "answer": "Einstein lived to 76. Mehmed II lived to 89. Therefore, Mehmed II lived longer than Einstein."},
    {"question": "Which film currently holds the top box-office spot?", "answer": "Avatar grossed 2.79 billion USD. Avengers: Endgame grossed 2.78 billion USD. Therefore, Avatar has the higher box office."}
]
example_prompt = PromptTemplate(
    input_variables=["question", "answer"],
    template="Question: {question}\nAnswer: {answer}"
)

prompt = FewShotPromptTemplate(
    examples=examples,
    example_prompt=example_prompt,
    suffix="问题:{input}",
    input_variables=["input"]
)
print(prompt.format(input="Who lived longer, Mehmed II or Einstein?"))

5. Example Selection with ExampleSelector

When the full example set is too large to include in a production prompt, LangChain provides example selectors that pick the most relevant examples for each user query:

SemanticSimilarityExampleSelector – embeds the query and examples with an embedding model, stores them in a vector store (e.g., Chroma), and retrieves the top‑k most similar examples.

MaxMarginalRelevanceExampleSelector – balances relevance and diversity via maximal marginal relevance (MMR).

Typical usage employs the semantic selector:

from langchain.prompts.example_selector import SemanticSimilarityExampleSelector
from langchain_chroma import Chroma
from langchain_ollama.embeddings import OllamaEmbeddings
from langchain.prompts import FewShotPromptTemplate, PromptTemplate

ollama_emb = OllamaEmbeddings(base_url="http://127.0.0.1:11434", model="shaw/dmeta-embedding-zh")

example_selector = SemanticSimilarityExampleSelector.from_examples(
    examples=examples,       # reuses the example list from section 4
    embeddings=ollama_emb,
    vectorstore_cls=Chroma,  # pass the vector-store class, not an instance
    k=1
)

question = "Mehmed II?"
selected_examples = example_selector.select_examples({"question": question})

example_prompt = PromptTemplate(
    input_variables=["question", "answer"],
    template="Question: {question}\nAnswer: {answer}"
)
prompt = FewShotPromptTemplate(
    examples=selected_examples,
    example_prompt=example_prompt,
    suffix="问题:{input}",
    input_variables=["input"]
)
print(prompt.format(input=question))

6. Summary

The article demonstrates how to construct PromptTemplates, inject contextual messages with MessagesPlaceholder, apply Few‑Shot prompting, and dynamically select relevant examples with an ExampleSelector, with complete code snippets and example outputs. Together, these techniques steer large language models toward more accurate, less hallucination‑prone responses, and they form a foundation for downstream RAG pipelines.

Republication Notice

This article has been distilled and summarized from source material, then republished for learning and reference. If you believe it infringes your rights, please contact admin@besthub.dev and we will review it promptly.

Written by

Qborfy AI

A knowledge base that logs daily experiences and learning journeys, sharing them with you to grow together.
