
Unlocking LangChain.js: The Swiss Army Knife for LLM Applications

This article introduces LangChain.js and its core concepts (chat messages, templates, tools, and chains), demonstrates how to use LCEL to compose flexible workflows, and walks through practical JavaScript code examples for building AI-powered applications with large language models.


About LangChain.js

The LangChain project has over 92k stars on GitHub and is often described as a Swiss Army knife for building applications on top of large language models.

LangChain was launched by Harrison Chase in October 2022 as an open-source project that connects large language models (initially OpenAI’s GPT API, later other models) to external data and tools. Before founding LangChain, Chase led ML teams at Robust Intelligence and the entity-linking team at Kensho, and studied statistics and computer science at Harvard. The project later became a startup and secured funding.

In particular, LangChain’s agent framework implements the paper “ReAct: Synergizing Reasoning and Acting in Language Models”, which introduces a prompting technique that lets a model both reason (chain-of-thought) and act (use predefined tools such as an internet search).

Paper link: https://arxiv.org/pdf/2210.03629.pdf

This combination of reasoning and acting markedly improves output quality and helps large language models solve problems correctly, and the falling price of the ChatGPT API has further accelerated LangChain’s growth.
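To make the idea concrete, a ReAct-style prompt interleaves reasoning steps with tool calls. An illustrative trace (hand-written for this article, not actual model output) might look like:

```
Question: What is the population of the capital of France?
Thought: I need to find the capital of France first.
Action: search["capital of France"]
Observation: Paris
Thought: Now I need the population of Paris.
Action: search["population of Paris"]
Observation: About 2.1 million people live in Paris proper.
Thought: I now know the final answer.
Final Answer: About 2.1 million people.
```

The model writes the Thought and Action lines; the framework executes each Action with a real tool and feeds the result back as an Observation.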

Basic Definitions

Core concepts in LangChain.js include Chats, Templates, Tools, and Chains.

Chat Messages

LangChain.js supports several chat message types: SystemChatMessage, HumanChatMessage, and AIChatMessage (renamed SystemMessage, HumanMessage, and AIMessage in newer versions), which can be combined with template variable substitution.

HumanChatMessage: represents user input such as a question, command, or request.

AIChatMessage: represents the model’s output or reply.

SystemChatMessage: sets the conversation background or role for the model.

<code>import { ChatOpenAI } from 'langchain/chat_models/openai';
import { HumanChatMessage, SystemChatMessage } from 'langchain/schema';

// Initialize ChatOpenAI model
const chatModel = new ChatOpenAI({
  openAIApiKey: 'your-api-key',
});

// Define chat messages
const messages = [
  new SystemChatMessage("You are a quantum physics professor, answering scientific questions."),
  new HumanChatMessage("What is quantum entanglement?"),
];

// Model generates a response (an AIChatMessage)
const response = await chatModel.call(messages);

// Output the AI's reply text
console.log(response.text);
</code>

Templates

PromptTemplate is a core component for creating and managing prompts for LLMs. It supports variable substitution with {placeholder} syntax; conditional logic is not built into the default template format, so branching is handled in application code.

<code>import { PromptTemplate } from 'langchain/prompts';

// Two templates, selected by whether a topic is available
const withTopicPrompt = new PromptTemplate({
  template: "Hello, my name is {name}.\nToday I want to discuss {topic}.",
  inputVariables: ["name", "topic"],
});

const withoutTopicPrompt = new PromptTemplate({
  template: "Hello, my name is {name}.\nI have no specific discussion topic today.",
  inputVariables: ["name"],
});

// With a topic
const finalPromptWithTopic = await withTopicPrompt.format({ name: "Andi Yang", topic: "quantum computing" });
console.log(finalPromptWithTopic);

// Without a topic
const finalPromptWithoutTopic = await withoutTopicPrompt.format({ name: "Andi Yang" });
console.log(finalPromptWithoutTopic);
</code>

PipelinePromptTemplate supports prompt composition: each sub-template is formatted and its output is injected into later templates, which simplifies managing prompts built from multiple parts. Note that only prompt strings are composed here; no model is called between steps.

<code>import { PromptTemplate, PipelinePromptTemplate } from 'langchain/prompts';

// Sub-templates that each produce one part of the final prompt
const rolePrompt = new PromptTemplate({
  template: "You are an editor writing about {subject}.",
  inputVariables: ["subject"],
});

const taskPrompt = new PromptTemplate({
  template: "Generate a title and a short summary for the following content:\n{content}",
  inputVariables: ["content"],
});

// The final template stitches the formatted parts together
const finalTemplate = new PromptTemplate({
  template: "{role}\n\n{task}",
  inputVariables: ["role", "task"],
});

// Chain the prompts
const pipeline = new PipelinePromptTemplate({
  finalPrompt: finalTemplate,
  pipelinePrompts: [
    { name: "role", prompt: rolePrompt },
    { name: "task", prompt: taskPrompt },
  ],
});

const finalPrompt = await pipeline.format({
  subject: "technology",
  content: "Quantum computing is changing the future of computation.",
});
console.log(finalPrompt);
</code>

Tools

Tools are core components for building complex workflows and intelligent applications. They can be functions, API calls, database queries, etc., allowing a model to dynamically invoke external services.

Tool definition typically includes name, description, and functionality.

Name: identifier for the tool.

Description: explains the tool’s purpose.

Functionality: the actual logic that performs the task.
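Before looking at the LangChain API, the concept can be sketched in a few lines of plain JavaScript (a toy illustration, not LangChain code): a tool is a named, described function that an agent can look up and invoke.

```javascript
// Toy sketch of the tool concept (NOT the LangChain API):
// each tool bundles a name, a description the model can read,
// and the function that actually does the work.
const tools = {
  getWeather: {
    description: 'Get current weather for a city',
    func: (city) => ({ Beijing: 'Sunny 25°C' }[city] || 'Weather data not available'),
  },
};

// An agent that has decided to use a tool only needs its name and input.
function runTool(name, input) {
  const tool = tools[name];
  if (!tool) throw new Error(`Unknown tool: ${name}`);
  return tool.func(input);
}

console.log(runTool('getWeather', 'Beijing')); // Sunny 25°C
```

LangChain's real tool classes add schema validation and agent integration on top of this basic shape.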

<code>import { OpenAI } from 'langchain/llms/openai';
import { DynamicTool } from 'langchain/tools';
import { initializeAgentExecutorWithOptions } from 'langchain/agents';

// Initialize OpenAI model
const model = new OpenAI({ openAIApiKey: 'your-api-key', temperature: 0 });

// Simple weather tool
const weatherTool = new DynamicTool({
  name: 'getWeather',
  description: 'Get the current weather for a city. Input should be a city name.',
  func: async (input) => {
    const weatherData = {
      'Beijing': 'Sunny 25°C',
      'Shanghai': 'Light rain 22°C',
    };
    return weatherData[input] || 'Weather data not available';
  },
});

// Build an agent that can decide when to call the tool
const executor = await initializeAgentExecutorWithOptions([weatherTool], model, {
  agentType: 'zero-shot-react-description',
});

const response = await executor.call({ input: "Query the weather in Beijing" });
console.log(response.output); // e.g. "The weather in Beijing is sunny, 25°C"
</code>

Chains

Chains link multiple steps into a workflow, allowing modules such as model calls, tools, or API requests to execute sequentially.

LangChain provides several chain types:

SimpleSequentialChain – linear execution where each step has a single input and a single output.

SequentialChain – multiple modules in order, each step’s output can feed the next.

LLMChain – specialized for invoking language models with PromptTemplate.

RouterChain – dynamically selects different chains based on input.

TransformChain – used for data transformation between steps.
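Stripped of the LangChain wrappers, a sequential chain is just function composition: each step's output becomes the next step's input. A plain-JavaScript sketch (illustrative only, not the LangChain API):

```javascript
// Toy sketch of sequential chaining (NOT the LangChain API):
// run the steps in order, feeding each output into the next step.
function runSequentialChain(steps, input) {
  return steps.reduce((value, step) => step(value), input);
}

// Two toy "steps" standing in for LLM calls.
const makeTitle = (content) => `Title: ${content}`;
const makeSummary = (title) => `Summary of "${title}"`;

console.log(runSequentialChain([makeTitle, makeSummary], 'Quantum computing'));
// Summary of "Title: Quantum computing"
```

LangChain's chain classes add prompt formatting, variable routing, and model invocation around this same pattern, as the SequentialChain example below shows.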

<code>import { OpenAI } from 'langchain/llms/openai';
import { PromptTemplate } from 'langchain/prompts';
import { LLMChain, SequentialChain } from 'langchain/chains';

// Initialize model
const model = new OpenAI({ openAIApiKey: 'your-api-key' });

const titlePrompt = new PromptTemplate({
  template: "Generate a title for the following content:\n{content}",
  inputVariables: ["content"],
});

const summaryPrompt = new PromptTemplate({
  template: "Generate a short summary for this title:\n{title}",
  inputVariables: ["title"],
});

const chain = new SequentialChain({
  chains: [
    new LLMChain({ llm: model, prompt: titlePrompt, outputKey: "title" }), // step 1
    new LLMChain({ llm: model, prompt: summaryPrompt, outputKey: "summary" }), // step 2
  ],
  inputVariables: ["content"],
  outputVariables: ["summary"],
});

const response = await chain.call({ content: "Quantum computing leverages quantum mechanics for computation." });
console.log(response);
</code>

LangChain also supports converting content to vector data for retrieval, memory storage, and state tracking.
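The retrieval side of this can be illustrated without LangChain: store each text with a vector and return the closest match by cosine similarity. A toy sketch with hand-made 2-dimensional vectors (real systems get these vectors from an embedding model):

```javascript
// Toy vector retrieval: cosine similarity over hand-made vectors.
function cosineSimilarity(a, b) {
  const dot = a.reduce((sum, x, i) => sum + x * b[i], 0);
  const norm = (v) => Math.sqrt(v.reduce((sum, x) => sum + x * x, 0));
  return dot / (norm(a) * norm(b));
}

const documents = [
  { text: 'Quantum computing uses qubits.', vector: [0.9, 0.1] },
  { text: 'LangChain links LLM calls into chains.', vector: [0.1, 0.9] },
];

// Return the document whose vector is closest to the query vector.
function similaritySearch(queryVector, docs) {
  return docs.reduce((best, doc) =>
    cosineSimilarity(queryVector, doc.vector) > cosineSimilarity(queryVector, best.vector)
      ? doc
      : best
  );
}

console.log(similaritySearch([0.8, 0.2], documents).text); // Quantum computing uses qubits.
```

LangChain's vector stores wrap exactly this idea, plus persistence and integration with embedding models.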

LangChain Expression Language (LCEL)

LCEL is a declarative way to compose LangChain components that lets developers describe and execute workflows more intuitively.

It enables defining custom data flows and workflows, connecting model calls, tools, and API requests into flexible pipelines.

Core Features of LCEL

Define data flow: explicitly specify how data moves between modules.

Flexible composition: combine different module types into chained calls.

Conditional logic: add if-else branches to adjust the workflow dynamically.

Modular design: build reusable runnable modules and connect them as needed.

An LCEL chain strings together a sequence of runnable modules such as model calls, database queries, or API invocations.

Example: generate a summary then an introduction using LCEL.

<code>import { OpenAI } from 'langchain/llms/openai';
import { PromptTemplate } from 'langchain/prompts';
import { RunnableSequence } from 'langchain/schema/runnable';

// Initialize model
const model = new OpenAI({ openAIApiKey: 'your-api-key' });

const summaryPrompt = new PromptTemplate({
  template: "Generate a short summary for the following content:\n{content}",
  inputVariables: ["content"],
});

const introductionPrompt = new PromptTemplate({
  template: "Based on this summary, generate a brief introduction:\n{summary}",
  inputVariables: ["summary"],
});

const chain = RunnableSequence.from([
  summaryPrompt,
  model,
  (summary) => ({ summary }), // map the model's text output to the next prompt's input variable
  introductionPrompt,
  model,
]);

const response = await chain.invoke({ content: "Quantum computing uses quantum mechanics for computation." });
console.log(response);
</code>

LCEL can also embed conditional logic to choose different processing paths.

<code>import { OpenAI } from 'langchain/llms/openai';
import { RunnableSequence } from 'langchain/schema/runnable';

const model = new OpenAI({ openAIApiKey: 'your-api-key' });

const chain = RunnableSequence.from([
  // A plain function can act as a routing step in the sequence
  async (input) => {
    if (input.includes("weather")) {
      return "Weather forecast";
    } else {
      return "General text processing";
    }
  },
  model,
]);

const response = await chain.invoke("Tell me the weather in New York.");
console.log(response);
</code>

LCEL provides a flexible, modular way to manage complex task workflows, enabling multi‑step text processing, retrieval‑augmented generation, and intelligent agents.

LangChain also supports retrieval-augmented generation (RAG), an important pattern for deploying AI: company data is retrieved and supplied to the model as context, producing more accurate answers.
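The RAG flow itself is simple to sketch in plain JavaScript (a toy illustration: keyword overlap stands in for vector retrieval, and the final model call is omitted):

```javascript
// Toy RAG sketch: retrieve the most relevant document, then stuff
// it into the prompt that would be sent to the model.
const knowledgeBase = [
  'Refund policy: customers may return items within 30 days.',
  'Shipping: orders ship within 2 business days.',
];

// Score documents by how many question words they contain.
function retrieve(question, docs) {
  const words = question.toLowerCase().split(/\W+/).filter(Boolean);
  const score = (doc) => words.filter((w) => doc.toLowerCase().includes(w)).length;
  return docs.reduce((best, doc) => (score(doc) > score(best) ? doc : best));
}

function buildPrompt(question, context) {
  return `Answer using only this context:\n${context}\n\nQuestion: ${question}`;
}

const question = 'What is the refund policy?';
console.log(buildPrompt(question, retrieve(question, knowledgeBase)));
```

A production RAG setup replaces the keyword score with embedding similarity and sends the stuffed prompt to the model, which is exactly what LangChain's retrieval chains automate.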

Conclusion

LangChain is a valuable tool for building AI applications; the way it decomposes and encapsulates large-model calls is well worth studying.

Tags: JavaScript, LLM, prompt engineering, LangChain, AI Workflow, LCEL
Written by Code Mala Tang

Read source code together, write articles together, and enjoy spicy hot pot together.
