How the Runnable Interface Turns Scattered Logic into Composable Chains in LCEL

The article explains LCEL’s core design and the Runnable interface, showing how they unify prompts, models, and parsers into a single composable chain with built‑in streaming, batch, async support, parallel execution, type inference, and automatic tracing, illustrated with TypeScript examples.


LCEL (LangChain Expression Language) connects otherwise scattered components through a single unifying abstraction: the Runnable interface.

LCEL vs. manual composition

Without LCEL you would write three separate steps:

const prompt = await promptTemplate.format(inputs);  // 1. fill the template
const llmResponse = await model.invoke(prompt);      // 2. call the model
const parsed = await parser.parse(llmResponse);      // 3. parse the raw output

With LCEL the same logic becomes a single pipeline:

const chain = promptTemplate.pipe(model).pipe(parser);
const result = await chain.invoke(inputs);

Runnable interface

All LangChain components implement the abstract Runnable interface: the processing logic is defined once, and every call style comes for free. Core methods:

invoke(input) – single call returning one complete output (simple request/response scenarios).

stream(input) – streaming call yielding output chunk by chunk (typewriter effect).

batch(inputs[]) – batch call processing an array of inputs in parallel (bulk tasks).

In the TypeScript implementation all of these methods are asynchronous (they return Promises or async iterators), so they fit naturally into Node.js services.

Any Runnable automatically supports all of these methods, so a chain inherits streaming, batch execution, and async behaviour without extra code.
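
A minimal sketch of the three call styles on a chat model, which is itself a Runnable (assumes @langchain/openai is installed and OPENAI_API_KEY is set):

import { ChatOpenAI } from "@langchain/openai";

const model = new ChatOpenAI({ model: "gpt-4o-mini" });

// invoke: one input → one complete AIMessage
const reply = await model.invoke("Explain LCEL in one sentence");

// stream: same input, but the output arrives chunk by chunk
for await (const chunk of await model.stream("Explain LCEL in one sentence")) {
  process.stdout.write(String(chunk.content));
}

// batch: an array of inputs processed in parallel
const replies = await model.batch([
  "Explain LCEL in one sentence",
  "Explain RAG in one sentence",
]);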

Runnable inheritance hierarchy
┌───────────────────────────────────────┐
│          Runnable (abstract)           │
│   invoke / stream / batch / pipe       │
└─────────────────────┬─────────────────┘
                      │
        ┌─────────────┼─────────────┐
        │             │             │
 PromptTemplate   ChatModel   OutputParser
        │             │             │
 RunnableLambda RunnableParallel RunnableSequence

Pipe operator (.pipe())

The pipe works like a Unix shell pipe: the output of the previous component becomes the input of the next. Python LCEL overloads the | operator for this; in TypeScript the equivalent is the .pipe() method.

import { ChatOpenAI } from "@langchain/openai";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { StringOutputParser } from "@langchain/core/output_parsers";

const prompt = ChatPromptTemplate.fromTemplate("Explain in one sentence what {concept} is");
const model = new ChatOpenAI({ model: "gpt-4o-mini" });
const parser = new StringOutputParser();

const chain = prompt.pipe(model).pipe(parser);
const result = await chain.invoke({ concept: "LCEL" });
console.log(result);
// → "LCEL is LangChain's declarative composition language, using the pipe operator to link components into an executable processing chain."

This creates a RunnableSequence that executes the steps in order, feeding each step's output into the next step's input.

chain.invoke({ concept: "LCEL" })
                     │
                     ▼
          ┌─────────────────────┐
          │   PromptTemplate    │
          │ { concept } → Msgs │
          └───────┬─────────────┘
                  │ [HumanMessage("Explain in one...")]
                  ▼
          ┌─────────────────────┐
          │      ChatOpenAI      │
          │  Msgs → AIMessage   │
          └───────┬─────────────┘
                  │ AIMessage("LCEL is ...")
                  ▼
          ┌─────────────────────┐
          │ StringOutputParser   │
          │ AIMessage → string  │
          └───────┬─────────────┘
                  ▼
          "LCEL is LangChain's ..."

RunnableSequence

The pipe() call creates a RunnableSequence. It can also be built manually for clarity:

import { RunnableSequence } from "@langchain/core/runnables";

const chain = RunnableSequence.from([
  prompt,
  model,
  parser
]); // equivalent to prompt.pipe(model).pipe(parser)

TypeScript infers the whole chain’s input and output types, eliminating explicit type annotations.

// TypeScript inference:
// input: { concept: string }
// output: string
const result = await chain.invoke({ concept: "RAG" });
// result is typed as string
RunnableSequence execution flow
┌────────────────────────────────────────────────────────────┐
│ Input → [Step1] → out1 → [Step2] → out2 → [Step3] → Output │
└────────────────────────────────────────────────────────────┘
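
Because a RunnableSequence is itself a Runnable, the chain built above inherits streaming with no extra code; a quick sketch:

// StringOutputParser streams partial strings as the model emits tokens
const stream = await chain.stream({ concept: "RAG" });
for await (const chunk of stream) {
  process.stdout.write(chunk);
}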

RunnableParallel

Parallel execution is useful for scenarios such as retrieving documents with two strategies, generating a summary and keywords simultaneously, or translating into multiple languages.

import { RunnableParallel } from "@langchain/core/runnables";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { ChatOpenAI } from "@langchain/openai";
import { StringOutputParser } from "@langchain/core/output_parsers";

const model = new ChatOpenAI({ model: "gpt-4o-mini" });
const parser = new StringOutputParser();

const parallelChain = RunnableParallel.from({
  summary: ChatPromptTemplate.fromTemplate("Summarize in one sentence: {text}")
    .pipe(model)
    .pipe(parser),
  keywords: ChatPromptTemplate.fromTemplate("Extract 3 keywords, comma-separated: {text}")
    .pipe(model)
    .pipe(parser),
});

const result = await parallelChain.invoke({
  text: "LangChain is an open-source framework for building AI applications, offering a rich set of components and tools."
});
console.log(result);
// {
//   summary: "LangChain is an open-source framework for building AI applications.",
//   keywords: "LangChain, AI framework, open source"
// }

The two LLM calls are issued concurrently, so total latency is roughly that of the slower call rather than the sum of both.
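
For intuition only, the parallel map behaves roughly like a hand-rolled Promise.all over the branches (summaryChain and keywordsChain are hypothetical names for the two piped branches above):

// rough hand-written equivalent of the RunnableParallel above
const [summary, keywords] = await Promise.all([
  summaryChain.invoke({ text }),   // branch 1
  keywordsChain.invoke({ text }),  // branch 2
]);
const merged = { summary, keywords }; // one object, keyed by branch name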

RunnableParallel execution diagram
          Input
            │
      ┌─────┼─────┐
      │     │     │
      ▼     ▼     ▼
   summary keywords other
      │     │     │
      └─────┼─────┘
            │
      { key: value }   (merged dictionary output)

RunnablePassthrough

When later steps need the original input after earlier transformations, RunnablePassthrough forwards the input unchanged.

import { RunnablePassthrough, RunnableParallel } from "@langchain/core/runnables";

const ragChain = RunnableParallel.from({
  question: new RunnablePassthrough(), // forward the question unchanged
  context: retriever,                  // retrieve documents (assumes an existing retriever)
}).pipe(
  ChatPromptTemplate.fromTemplate(
    "Answer the question based on the following context:\nContext: {context}\nQuestion: {question}"
  )
).pipe(model).pipe(parser);

const answer = await ragChain.invoke("What is LCEL?");
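
A related pattern: RunnablePassthrough.assign keeps the whole input object and adds computed fields to it. A sketch, assuming a recent @langchain/core, an input shaped { question: string }, and the same retriever as above:

const ragChain2 = RunnablePassthrough.assign({
  // add a `context` field computed from the incoming question
  context: (input: { question: string }) => retriever.invoke(input.question),
}).pipe(
  ChatPromptTemplate.fromTemplate(
    "Answer the question based on the following context:\nContext: {context}\nQuestion: {question}"
  )
).pipe(model).pipe(parser);

const answer2 = await ragChain2.invoke({ question: "What is LCEL?" });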

RunnableLambda

RunnableLambda wraps an ordinary function (sync or async) as a Runnable so custom logic can be inserted anywhere in a chain.

import { RunnableLambda } from "@langchain/core/runnables";

// sync function example
const cleanText = (text: string): string => {
  return text.trim().replace(/\s+/g, " ");
};
const cleanRunnable = RunnableLambda.from(cleanText);

// async function example (assumes an existing vectorStore)
const fetchContext = RunnableLambda.from(async (query: string) => {
  const docs = await vectorStore.similaritySearch(query, 3);
  return docs.map(d => d.pageContent).join("\n");
});

const chain = cleanRunnable
  .pipe((text) => ({ concept: text })) // map the cleaned string to the prompt's input variables
  .pipe(prompt)
  .pipe(model)
  .pipe(parser);
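
Usage sketch, assuming the { concept } prompt from the pipe-operator section:

const answer = await chain.invoke("   what    is   LCEL  ?   ");
// cleanText collapses the whitespace before the prompt sees { concept: "what is LCEL ?" }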

Automatic type coercion

LCEL automatically converts plain objects and functions into the appropriate Runnable types, so developers can write concise pipelines. In Python this coercion is what makes the bare | syntax work; in TypeScript it happens inside .pipe() and RunnableSequence.from().

// developer writes a plain object as the first step:
const chain = RunnableSequence.from([
  {
    question: new RunnablePassthrough(),
    context: retriever
  },
  promptTemplate,
  model,
  parser
]);

// LCEL actually executes:
const chain = RunnableSequence.from([
  RunnableParallel.from({
    question: new RunnablePassthrough(),
    context: retriever
  }),
  promptTemplate,
  model,
  parser
]);

Conversion rules:

Object { key: runnable } → RunnableParallel
Function (x) => … → RunnableLambda
Async function async (x) => … → RunnableLambda
Existing Runnable → used as-is
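
The same coercion applies to .pipe(): a bare arrow function passed to it is wrapped as a RunnableLambda automatically. A small sketch:

// the arrow function becomes a RunnableLambda behind the scenes
const upperChain = model.pipe((msg) => String(msg.content).toUpperCase());
const loud = await upperChain.invoke("say hi");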


Why use LCEL instead of hand‑written code?

LCEL bundles several capabilities that would otherwise require custom implementation:

Streaming output – built‑in via stream().

Batch parallelism – built‑in via batch().

Automatic LangSmith tracing – each step reports to LangSmith without extra code.

Observability – chain structure can be visualised (e.g., getGraph().drawMermaid()).

Type safety – TypeScript infers input/output types.

Declarative composability – unified API for chaining.
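
For example, the observability hook mentioned above can be exercised on any chain; a sketch, assuming a recent @langchain/core:

// print the chain's topology as Mermaid markup for docs or debugging
console.log(chain.getGraph().drawMermaid());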

LCEL vs hand‑written code comparison
┌─────────────────────┬───────────────┬───────────────┐
│      Capability     │    LCEL       │ Hand‑written   │
├─────────────────────┼───────────────┼───────────────┤
│ Streaming output    │ ✅ built‑in   │ ❌ custom impl │
│ Batch parallelism   │ ✅ built‑in   │ ❌ custom impl │
│ Async support       │ ✅ built‑in   │ ❌ custom impl │
│ LangSmith tracing   │ ✅ automatic  │ ❌ manual      │
│ Type safety         │ ✅ inference │ ⚠️ manual hints│
│ Composability       │ ✅ unified API│ ⚠️ convention │
└─────────────────────┴───────────────┴───────────────┘

Summary

The Runnable interface unifies invoke, stream, and batch calls.

The .pipe() method (Python LCEL's | operator) is syntactic sugar for RunnableSequence, making chain logic instantly readable.

RunnableParallel turns multi‑branch concurrency into a declarative construct, eliminating manual Promise.all.

RunnablePassthrough forwards data unchanged; RunnableLambda wraps custom functions.

Automatic type coercion lets plain objects and functions be dropped directly into pipelines; LCEL adapts them.

LCEL’s value lies in providing streaming, batch, tracing, observability, and type safety at no extra cost.
