Build Your First Production‑Ready LCEL Chain with the Pipe Operator
This tutorial walks through LCEL's pipe operator and the RunnableSequence behind it, then demonstrates sequential, parallel, and lambda-based chains, shows how to preserve context with RunnablePassthrough and RunnableAssign, compares the invoke/stream/batch execution modes, and closes with a complete production-grade RAG chain, common pitfalls, and a self-check list.
01 What the Pipe Operator Does
In Python, the pipe symbol | is syntactic sugar that creates a RunnableSequence where the left-hand output becomes the right-hand input, forming a chain like chain = prompt | llm | parser. TypeScript cannot overload |, so LangChain.js expresses the same composition with the .pipe() method. Either way, execution passes data through each Runnable in order.
input → runnable1.invoke() → output1 → runnable2.invoke(output1) → output2 → …
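A minimal runnable sketch of that flow in TypeScript (the template and model name are illustrative):
import { ChatOpenAI } from "@langchain/openai";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { StringOutputParser } from "@langchain/core/output_parsers";
// A three-step chain: the prompt fills the template, the model answers,
// and the parser extracts plain text. `.pipe()` plays the role of `|`.
const prompt = ChatPromptTemplate.fromTemplate("Explain {concept} in one sentence.");
const llm = new ChatOpenAI({ model: "gpt-4o-mini" });
const parser = new StringOutputParser();
const chain = prompt.pipe(llm).pipe(parser);
const answer = await chain.invoke({ concept: "LCEL" });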
02 RunnableSequence: Sequential Execution
Both the .pipe() syntax and an explicit RunnableSequence.from([prompt, llm, parser]) produce identical chains. Use the explicit form when you need to build a chain dynamically, as in the sketch below.
import { RunnableSequence } from "@langchain/core/runnables";
const chain = RunnableSequence.from([prompt, llm, parser]);
Internally, a RunnableSequence splits its steps into first, middle, and last, with TypeScript checking type compatibility between adjacent steps.
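A hedged sketch of the dynamic case, reusing the prompt, llm, and parser from the sketch above; the production-only redaction step is an invented example:
import { RunnableSequence, RunnableLambda } from "@langchain/core/runnables";
// Collect steps in an array whose shape depends on a runtime flag,
// then freeze them into a single chain.
const redactEmails = RunnableLambda.from((text) => text.replace(/\S+@\S+/g, "[redacted]"));
const steps = [prompt, llm, parser];
if (process.env.NODE_ENV === "production") {
  steps.push(redactEmails);
}
const dynamicChain = RunnableSequence.from(steps);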
03 RunnableParallel: Concurrent Branches
Parallel execution runs multiple chains on the same input simultaneously and merges their results into a dictionary; with two branches whose LLM calls each take 1–3 seconds, total latency drops to roughly that of the slower branch instead of the sum of both.
import { RunnableParallel } from "@langchain/core/runnables";
const summaryChain = summaryPrompt.pipe(llm).pipe(parser);
const sentimentChain = sentimentPrompt.pipe(llm).pipe(parser);
const parallelChain = RunnableParallel.from({ summary: summaryChain, sentiment: sentimentChain });
const result = await parallelChain.invoke({ text: "LangChain released v0.3…" });
// result => { summary: "…", sentiment: "…" }
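The branch keys matter downstream: they must match the template variables of whatever consumes the merged dictionary. A hedged sketch (the report prompt is illustrative):
// `summary` and `sentiment` line up with {summary} and {sentiment} below.
const reportPrompt = ChatPromptTemplate.fromTemplate(
  "Write a one-line report.\nSummary: {summary}\nSentiment: {sentiment}"
);
const reportChain = parallelChain.pipe(reportPrompt).pipe(llm).pipe(parser);
const report = await reportChain.invoke({ text: "LangChain released v0.3…" });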
04 RunnableLambda: Turning Any Function into a Runnable
RunnableLambda wraps arbitrary functions for data cleaning or transformation, allowing them to be placed anywhere in a chain.
import { RunnableLambda } from "@langchain/core/runnables";
const extractQuestion = new RunnableLambda({
  func: (input) => ({
    context: input.messages.join("\n"), // collapse chat messages into one context block
    question: input.question,
  })
});
const toUpperCase = RunnableLambda.from(s => s.toUpperCase());
const qaChain = extractQuestion
  .pipe(ChatPromptTemplate.fromTemplate("..."))
  .pipe(llm)
  .pipe(parser)
  .pipe(RunnableLambda.from(answer => ({ answer, timestamp: Date.now() })));
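RunnableLambda.from also accepts async functions, so a transformation step can perform I/O. A hedged sketch with a stubbed lookup (the helper is invented for illustration):
const lookupUserName = async (id) => `user-${id}`; // stub standing in for real I/O
const addUserName = RunnableLambda.from(async (input) => ({
  ...input, // keep every original field
  userName: await lookupUserName(input.userId),
}));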
05 RunnablePassthrough and RunnableAssign: Preserving Context
These utilities keep the original input untouched (RunnablePassthrough) or add new fields without overwriting existing ones (RunnableAssign), solving the common RAG problem where intermediate steps discard the original question.
import { RunnableParallel, RunnablePassthrough, RunnableAssign } from "@langchain/core/runnables";
const ragChain = RunnableParallel.from({
question: new RunnablePassthrough(),
context: retriever,
}).pipe(ChatPromptTemplate.fromTemplate("...")).pipe(llm).pipe(parser);
const enrichChain = RunnablePassthrough.assign({ summary: summaryChain }); // static assign() builds a RunnableAssign
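A quick sketch of why assign matters (the input values are illustrative): every original field survives alongside the new one.
// Input:  { text: "…", author: "…" }
// Output: { text: "…", author: "…", summary: "…" } – nothing is dropped.
const enriched = await enrichChain.invoke({
  text: "LangChain released v0.3…",
  author: "James",
});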
06 Streaming Output: Typewriter Effect
Every LCEL chain inherits three standard execution methods: invoke() (wait for the full result), stream() (receive tokens in real time), and batch() (process multiple inputs concurrently).
// invoke – await the full result in one piece
const result = await chain.invoke({ concept: "vector database" });
// stream – token‑by‑token
const stream = await chain.stream({ concept: "vector database" });
for await (const chunk of stream) { process.stdout.write(chunk); }
// batch – parallel inputs
const results = await chain.batch([
{ concept: "vector database" },
{ concept: "embedding model" },
{ concept: "semantic search" },
]);
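batch also takes per-call options; a hedged sketch using the batch options exposed by @langchain/core (maxConcurrency caps simultaneous calls, returnExceptions keeps one failure from rejecting the whole batch):
const settled = await chain.batch(
  [{ concept: "vector database" }, { concept: "embedding model" }],
  undefined, // no per-input config overrides
  { maxConcurrency: 2, returnExceptions: true }
);
// With returnExceptions, failed inputs come back as Error objects instead of throwing.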
07 Full Production‑Grade RAG Chain
This section assembles all the previous pieces into a complete Retrieval‑Augmented Generation chain with a mock retriever, document formatting, prompt, LLM, and output parser, then runs it with streaming output.
import { ChatOpenAI } from "@langchain/openai";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { StringOutputParser } from "@langchain/core/output_parsers";
import { RunnableSequence, RunnableParallel, RunnablePassthrough, RunnableLambda } from "@langchain/core/runnables";
const mockRetriever = RunnableLambda.from(async (query) => [
  { pageContent: "LangChain v0.3 released" },
  { pageContent: "Unified model API, LCEL stabilization" },
]);
const formatDocs = RunnableLambda.from(docs => docs.map(d => d.pageContent).join("\n"));
const llm = new ChatOpenAI({ model: "gpt-4o-mini", temperature: 0 });
const ragPrompt = ChatPromptTemplate.fromTemplate(`
You are a QA assistant. Answer the question using the context below.
If the context lacks relevant info, say "I don't know".
Context:
{context}
Question: {question}
`);
const ragChain = RunnableSequence.from([
RunnableParallel.from({
question: new RunnablePassthrough(),
context: mockRetriever.pipe(formatDocs),
}),
ragPrompt,
llm,
new StringOutputParser(),
]);
async function askQuestion(question) {
const stream = await ragChain.stream(question);
process.stdout.write("Answer: ");
for await (const chunk of stream) { process.stdout.write(chunk); }
console.log();
}
await askQuestion("What are the main changes in LangChain v0.3?");
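To move past the mock, the retriever slot can be filled by a real vector store. A hedged sketch using langchain's in-memory store and OpenAI embeddings (the document texts are illustrative):
import { MemoryVectorStore } from "langchain/vectorstores/memory";
import { OpenAIEmbeddings } from "@langchain/openai";
const vectorStore = await MemoryVectorStore.fromTexts(
  ["LangChain v0.3 released", "Unified model API, LCEL stabilization"],
  [{ source: "release-notes" }, { source: "release-notes" }],
  new OpenAIEmbeddings()
);
// asRetriever() yields a Runnable (string in, Document[] out) – a drop-in
// replacement for mockRetriever in the chain above.
const retriever = vectorStore.asRetriever({ k: 2 });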
08 Common Pitfalls and Self‑Check List
Typical errors include type mismatches between steps, mismatched keys in parallel branches, streaming that breaks for lack of abort handling, and forgotten awaits. The checklist below reminds you to verify input/output types, key alignment, timeout/error handling, appropriate invoke/stream/batch usage, and secure handling of sensitive data.
□ Do step input/output types line up?
□ Do parallel branch keys match downstream template variables?
□ Are timeouts and error handling added? (see the retry/fallback sketch after this list)
□ Is stream or batch used where needed?
□ Is sensitive info passed via RunnableConfig, not hard‑coded?
Recap:
| operator (.pipe() in TypeScript) – syntactic sugar for RunnableSequence; left output becomes right input.
RunnableParallel – runs multiple branches concurrently and merges results.
RunnableLambda – wraps arbitrary functions for data transformation.
RunnablePassthrough – passes original input unchanged to later steps.
Three invocation modes – invoke, stream, batch – are automatically available on any LCEL chain.
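For the timeout/error-handling item, runnables ship with withRetry and withFallbacks; a hedged sketch (the attempt count and fallback model are illustrative choices):
const resilientLlm = new ChatOpenAI({ model: "gpt-4o-mini", timeout: 10000 })
  .withRetry({ stopAfterAttempt: 3 }) // retry transient failures
  .withFallbacks({ fallbacks: [new ChatOpenAI({ model: "gpt-4o" })] }); // last resort: a different model
const resilientChain = ragPrompt.pipe(resilientLlm).pipe(new StringOutputParser());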