Mastering LangChain Chains: Simple, Composite, and Custom Nodes with LCEL
This tutorial walks through LangChain's core concept of Chains, demonstrating how to build simple, composite, and custom node chains, use prompt templates and structured output parsers, and leverage the LCEL syntax for clean, modular LLM applications.
LangChain Chain Architecture
Chains reside in the workflow API abstraction layer of LangChain and connect components such as large models, tools, and parsers in a sequential pipeline. A minimal QA chain pipes a chat model into a StrOutputParser to obtain a plain‑text response.
from langchain.chat_models import init_chat_model
from langchain_core.output_parsers import StrOutputParser
model = init_chat_model(
    model="Qwen/Qwen3-8B",
    model_provider="openai",
    base_url="https://api.siliconflow.cn/v1/",
    api_key=""  # supply your provider API key here
)
basic_qa_chain = model | StrOutputParser()
question = "你好,请你介绍一下你自己。"
result = basic_qa_chain.invoke(question)
print(result)

The invocation returns the model's raw string output instead of an AIMessage object.
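For comparison, invoking the model without the parser returns an AIMessage; a minimal check, reusing the model object from above:

raw = model.invoke(question)
print(type(raw))    # <class 'langchain_core.messages.ai.AIMessage'>
print(raw.content)  # the same text that StrOutputParser extracts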
Adding a Prompt Template
Inserting a ChatPromptTemplate allows the chain to supply a system prompt and a variable placeholder. The example below builds a yes/no QA chain.
from langchain.chat_models import init_chat_model
from langchain.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser
model = init_chat_model(
    model="Qwen/Qwen3-8B",
    model_provider="openai",
    base_url="https://api.siliconflow.cn/v1/",
    api_key=""
)
prompt_template = ChatPromptTemplate([
    ("system", "You are a helpful assistant. Answer the user's question."),
    ("user", "Here is the user's question: {topic}. Please answer with yes or no.")
])
bool_qa_chain = prompt_template | model | StrOutputParser()
question = "请问 1 + 1 是否 大于 2?"
result = bool_qa_chain.invoke({"topic": question})
print(result)

Using a strong LLM with a prompt template yields structured yes/no answers.
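If a real boolean is preferable to the literal string, langchain also ships a BooleanOutputParser; a sketch, assuming a recent version whose matching is case-insensitive (older releases may require the model to answer YES/NO exactly):

from langchain.output_parsers.boolean import BooleanOutputParser

bool_parser = BooleanOutputParser(true_val="YES", false_val="NO")
strict_bool_chain = prompt_template | model | bool_parser
print(strict_bool_chain.invoke({"topic": question}))  # False, as a Python bool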
Structured Parsers
A typical chain consists of a prompt template, a model, and a structured output parser that converts the model’s string output into a JSON‑like object.
from langchain.chat_models import init_chat_model
from langchain_core.prompts import PromptTemplate
from langchain.output_parsers import ResponseSchema, StructuredOutputParser
schemas = [
    ResponseSchema(name="name", description="the user's name"),
    ResponseSchema(name="age", description="the user's age")
]
parser = StructuredOutputParser.from_response_schemas(schemas)
model = init_chat_model(
    model="Qwen/Qwen3-8B",
    model_provider="openai",
    base_url="https://api.siliconflow.cn/v1/",
    api_key=""
)
prompt = PromptTemplate.from_template(
    "Extract the user information from the following content and return it in JSON format:\n"
    "{input}\n"
    "{format_instructions}"
)
chain = (
    prompt.partial(format_instructions=parser.get_format_instructions())
    | model
    | parser
)
result = chain.invoke({"input": "The user is named Li Lei, is 25 years old, and is an engineer."})
print(result)

The key steps are creating the PromptTemplate, using partial() to fill format_instructions, and invoking the chain to obtain structured JSON.
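StructuredOutputParser returns a plain Python dict, so the extracted fields can be read directly; a small usage check under the same run:

print(result["name"], result["age"])  # e.g. Li Lei 25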
Composite Chains
Chains can be nested. The following composite chain first generates a short news article from a title, then extracts a structured summary.
from langchain.chat_models import init_chat_model
from langchain_core.prompts import PromptTemplate
from langchain.output_parsers import ResponseSchema, StructuredOutputParser
# Step 1: generate news body
news_gen_prompt = PromptTemplate.from_template(
    "Write a short news story (under 100 words) based on the following headline:\n"
    "Headline: {title}"
)
model = init_chat_model(
    model="Qwen/Qwen3-8B",
    model_provider="openai",
    base_url="https://api.siliconflow.cn/v1/",
    api_key=""
)
news_chain = news_gen_prompt | model
# Step 2: extract structured fields
schemas = [
    ResponseSchema(name="time", description="the time the event occurred"),
    ResponseSchema(name="location", description="the location where the event occurred"),
    ResponseSchema(name="event", description="the specific event that occurred")
]
parser = StructuredOutputParser.from_response_schemas(schemas)
summary_prompt = PromptTemplate.from_template(
    "Extract the key information from the following news story and return it as structured JSON:\n"
    "{news}\n"
    "{format_instructions}"
)
summary_chain = (
    summary_prompt.partial(format_instructions=parser.get_format_instructions())
    | model
    | parser
)
full_chain = news_chain | summary_chain
result = full_chain.invoke({"title": "Apple releases a new AI chip in California"})
print(result)

The execution yields a JSON object containing the time, location, and event extracted from the generated news.
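Note that news_chain ends at the model, so its output is an AIMessage. Because summary_prompt declares a single input variable, LangChain will coerce that message into {news}, but depending on the version the formatted prompt may embed the message's repr rather than just its text. An explicit mapping step (a sketch, using the RunnableLambda wrapper introduced in the next section) makes the hand-off unambiguous:

from langchain_core.runnables import RunnableLambda

explicit_full_chain = (
    news_chain
    | RunnableLambda(lambda msg: {"news": msg.content})  # pass only the generated text to the next prompt
    | summary_chain
)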
Custom Nodes with RunnableLambda
When a needed component is absent, developers can wrap a Python function as a runnable node. The example adds a debugging node that prints the intermediate news body.
from langchain_core.runnables import RunnableLambda
def debug_print(x):
    print('Intermediate result (news body):', x)
    return x
debug_node = RunnableLambda(debug_print)
full_chain = news_chain | debug_node | summary_chain
result = full_chain.invoke({"title": "Apple releases a new AI chip in California"})
print(result)

RunnableLambda is suitable for non-streaming outputs; for streaming, use RunnableGenerator.
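A streaming-friendly counterpart can be built with RunnableGenerator; a minimal sketch, assuming an upstream StrOutputParser so that the chunks flowing through are strings:

from typing import Iterator
from langchain_core.runnables import RunnableGenerator

def debug_stream(chunks: Iterator[str]) -> Iterator[str]:
    # Inspect each chunk as it flows through, without buffering the whole stream
    for chunk in chunks:
        print('chunk:', repr(chunk))
        yield chunk

debug_stream_node = RunnableGenerator(debug_stream)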
LCEL Overview
LCEL (LangChain Expression Language) is a declarative syntax that uses the pipe operator (|) to compose Runnable components such as prompts, models, and parsers. It provides modular construction, visualizable data flow, a unified .invoke() / .stream() / .batch() interface, and is lighter weight than classic Chain or Agent classes.
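Under the hood, each pipe application builds a RunnableSequence; a quick check, reusing the prompt, model, and parser from the structured-extraction example above:

from langchain_core.runnables import RunnableSequence

chain = prompt | model | parser
print(isinstance(chain, RunnableSequence))  # True: "|" composes Runnables into a sequence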
Design Goals
Modular construction – break model workflows into reusable components.
Logical visualization – the pipe symbol shows clear data paths.
Unified runtime interface – all components support .invoke(), .stream(), and .batch().
Framework independence – LCEL is more lightweight than traditional Chain or Agent architectures.
Core Components
Runnable interface – provides .invoke(input), .stream(input), and .batch(inputs) methods (see the sketch after this list).
Pipe operator (|) – chains Runnable objects, e.g., prompt | model | parser.
PromptTemplate, model, and OutputParser – each has a single responsibility: templating the input, performing inference, or formatting the output.
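A short illustration of the unified runtime interface, reusing the bool_qa_chain built earlier (the example questions here are placeholders):

# One chain, three invocation styles
print(bool_qa_chain.invoke({"topic": "Is 2 an even number?"}))

# Stream tokens as they arrive
for chunk in bool_qa_chain.stream({"topic": "Is 2 an even number?"}):
    print(chunk, end="", flush=True)

# Run several inputs in one call
print(bool_qa_chain.batch([
    {"topic": "Is 2 an even number?"},
    {"topic": "Is 3 greater than 5?"},
]))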
Fun with Large Models
A Master's graduate of Beijing Institute of Technology with four papers in top journals, formerly a developer at ByteDance and Alibaba, and currently researching large models at a major state-owned enterprise. Committed to sharing concise, practical experience in AI large-model development, in the belief that large AI models will become as essential as PCs. Let's start experimenting now!