Mastering LangChain’s Four Core Design Patterns: A Deep Dive for Architects

This article systematically explains LangChain’s four fundamental design patterns—Command, Chain of Responsibility, Decorator, and Pipeline—showing their definitions, core implementations, and practical code examples to help developers understand and extend the framework.

Tech Freedom Circle

LangChain Core Design Patterns Overview

LangChain, the leading LLM application framework, derives its flexibility and extensibility from four classic design patterns: Command, Chain of Responsibility, Decorator, and Pipeline. These patterns work together to form a bidirectional execution architecture where requests flow forward and responses flow backward.

1. Chain of Responsibility

The responsibility‑chain pattern strings together Runnable components into a linear execution flow. In LangChain this is realized by the RunnableSequence class, which validates type compatibility between consecutive steps and forwards the output of one step as the input of the next.

from abc import ABC, abstractmethod
from typing import Generic, TypeVar, Optional, List

Input = TypeVar("Input")
Output = TypeVar("Output")

class Runnable(Generic[Input, Output], ABC):
    # Concrete subclasses may override these to declare the types they
    # consume and produce, enabling the compatibility check below.
    _input_type: type = object
    _output_type: type = object

    @abstractmethod
    def invoke(self, input: Input, config: Optional[dict] = None) -> Output:
        raise NotImplementedError()

class RunnableSequence(Runnable[Input, Output]):
    def __init__(self, steps: List[Runnable]):
        self.steps = steps
        self._validate_steps()

    def _validate_steps(self) -> None:
        # Fail fast at construction time: each step's declared output type
        # must be acceptable as the next step's input type.
        for i in range(len(self.steps) - 1):
            cur_out = self.steps[i]._output_type
            nxt_in = self.steps[i + 1]._input_type
            if not issubclass(cur_out, nxt_in):
                raise ValueError(f"Step {i} output {cur_out} does not match step {i+1} input {nxt_in}")

    def invoke(self, input: Input, config: Optional[dict] = None) -> Output:
        value = input
        for i, step in enumerate(self.steps):
            step_config = self._patch_config(config, step_idx=i)
            value = step.invoke(value, step_config)
        return value

    def _patch_config(self, config: Optional[dict], step_idx: int) -> dict:
        config = config or {}
        return {**config, "step_idx": step_idx, "step_tag": f"step{step_idx}", "sequence_id": id(self)}

Typical usage:

from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI
from langchain_core.output_parsers import StrOutputParser

prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a concise summarizer, output no more than 50 characters."),
    ("user", "{input}")
])
llm = ChatOpenAI(model="gpt-3.5-turbo")
parser = StrOutputParser()

chain = prompt | llm | parser  # becomes a RunnableSequence internally
result = chain.invoke({"input": "LangChain is a framework for building LLM applications..."})
print(result)

2. Decorator Pattern (Middleware)

LangChain’s middleware implements the Decorator pattern. Each middleware wraps a core Runnable without modifying its code, adding before/after hooks such as logging, retry, or caching. The layered wrappers form an “onion” structure that enables reverse‑flow processing.

from langchain_core.runnables import Runnable, RunnableLambda
from typing import Callable, Optional

Middleware = Callable[[Runnable], Runnable]

def log_middleware(name: str) -> Middleware:
    def wrap(runnable: Runnable) -> Runnable:
        def new_invoke(input, config: Optional[dict] = None):
            print(f"[{name}] start, input: {input}")
            try:
                result = runnable.invoke(input, config)
                print(f"[{name}] finished, output: {result}")
                return result
            except Exception as e:
                print(f"[{name}] error: {e}")
                raise
        return RunnableLambda(new_invoke)
    return wrap

def retry_middleware(retries: int = 3) -> Middleware:
    def wrap(runnable: Runnable) -> Runnable:
        def new_invoke(input, config: Optional[dict] = None):
            for i in range(retries):
                try:
                    if i > 0:
                        print(f"[Retry] attempt {i}")
                    return runnable.invoke(input, config)
                except Exception as e:
                    if i == retries - 1:
                        print(f"[Retry] all attempts failed: {e}")
                        raise
        return RunnableLambda(new_invoke)
    return wrap

Composition example (onion structure):

# core runnable: rejects short input, uppercases the rest
def _process(text: str) -> str:
    if len(text) < 10:
        raise ValueError("Too short")
    return text.upper()

core = RunnableLambda(_process)

# layered middleware
chain = log_middleware("outer")(retry_middleware(2)(core))
result = chain.invoke("example input")
print(result)

3. Command Pattern (Unified Execution Contract)

The Runnable abstract interface embodies the Command pattern: every executable component (Prompt, LLM, Parser, Tool, Chain, Middleware) implements a single invoke method, decoupling callers from concrete implementations and enabling interchangeable components.
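To make the idea concrete, here is a minimal, library-free sketch of the Command pattern behind this design (the class names `UppercaseStep`, `ReverseStep`, and `run_command` are illustrative, not part of langchain_core): every component exposes the same invoke contract, so a caller can swap one implementation for another without changing its own code.

```python
from abc import ABC, abstractmethod

class Runnable(ABC):
    @abstractmethod
    def invoke(self, input, config=None):
        ...

class UppercaseStep(Runnable):
    def invoke(self, input, config=None):
        return input.upper()

class ReverseStep(Runnable):
    def invoke(self, input, config=None):
        return input[::-1]

def run_command(step: Runnable, payload: str) -> str:
    # The caller depends only on the invoke() contract,
    # never on the concrete class behind it.
    return step.invoke(payload)
```

Because the caller sees only the contract, a Prompt, LLM, Parser, or entire Chain can stand in for `step` interchangeably.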

4. Pipeline Pattern (LCEL Data Flow)

LangChain’s expression language (LCEL) uses the | operator to chain Runnable objects, enforcing type‑matched forward data flow. This pattern guarantees that each step’s output type matches the next step’s input type, providing a reliable, linear pipeline.
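The mechanics behind the | operator can be sketched with a toy implementation (this mirrors the idea, not langchain_core's actual code, and the names `Step` and `Pipeline` are hypothetical): __or__ returns a sequence object whose invoke threads data through each step in order.

```python
class Step:
    def __init__(self, fn):
        self.fn = fn

    def invoke(self, value):
        return self.fn(value)

    def __or__(self, other):
        # step | step -> two-step pipeline
        return Pipeline([self, other])

class Pipeline:
    def __init__(self, steps):
        self.steps = steps

    def invoke(self, value):
        # Forward data flow: each step's output feeds the next step's input.
        for step in self.steps:
            value = step.invoke(value)
        return value

    def __or__(self, other):
        # pipeline | step -> flattened, extended pipeline
        return Pipeline(self.steps + [other])

chain = Step(str.strip) | Step(str.lower) | Step(lambda s: s.replace(" ", "-"))
```

Calling `chain.invoke("  Hello World  ")` strips, lowercases, then hyphenates, producing `"hello-world"`.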

Cooperation of the Four Patterns

All four patterns interlock:

Command provides the unified Runnable contract.

Chain of Responsibility builds the forward execution skeleton.

Decorator adds cross‑cutting enhancements and enables reverse‑flow backtracking.

Pipeline (LCEL) enforces type‑safe data transmission within the chain.

Together they create a flexible, extensible, bidirectional execution architecture that supports complex LLM workflows while remaining easy to extend and maintain.

Practical Takeaways

When creating custom components, implement the Runnable interface to ensure compatibility.

Use the | operator (or RunnableSequence) to compose clear, type‑checked pipelines.

Apply middleware decorators for logging, retries, caching, or other concerns without touching core logic.

Leverage the layered architecture to diagnose failures: forward flow pinpoints where an exception occurs; after‑hooks in decorators reveal context during back‑propagation.
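The forward/backward traversal described in the last point can be demonstrated with a hedged sketch using plain functions instead of the real middleware API (`logged`, `events`, and `core` are names invented for this example): requests pass through decorators outer-to-inner, and responses return inner-to-outer, which is why after-hooks see the result in reverse order.

```python
events = []

def logged(name, fn):
    def wrapper(value):
        events.append(f"{name}:before")   # forward flow (request)
        result = fn(value)
        events.append(f"{name}:after")    # reverse flow (response)
        return result
    return wrapper

core = lambda s: s.upper()
chain = logged("outer", logged("inner", core))
```

Invoking `chain("hi")` records the events `outer:before, inner:before, inner:after, outer:after`, tracing the onion from the outside in and back out again.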

Republication Notice

This article has been distilled and summarized from source material, then republished for learning and reference. If you believe it infringes your rights, please contact admin@besthub.dev and we will review it promptly.
