Unlocking LangChain.js: The Swiss Army Knife for LLM Applications
This article introduces LangChain.js, explains its origins, core concepts such as chats, templates, tools, and chains, demonstrates practical JavaScript code examples, and explores the LangChain Expression Language (LCEL) for building flexible, conditional AI workflows.
About LangChain.js
The LangChain project has amassed over 92k stars on GitHub, and LangChain.js is its JavaScript/TypeScript incarnation. It is often described as the Swiss Army knife of large‑model development.
LangChain was launched in October 2022 by Harrison Chase as an open‑source project that connects OpenAI’s GPT API (later expanded to many models) for generating AI text. Before founding LangChain, Chase led ML teams at Robust Intelligence and the entity‑linking team at Kensho, and studied statistics and computer science at Harvard. The project later became a startup and secured funding.
At its core, LangChain's agent framework implements the paper "ReAct: Synergizing Reasoning and Acting in Language Models," which introduces a prompting technique that lets a model both reason (via chain‑of‑thought) and act (by invoking predefined tools such as internet search).
The paper (https://arxiv.org/pdf/2210.03629.pdf) shows that combining reasoning with acting markedly improves output quality and lets large language models solve multi‑step problems more reliably. The sharp price drop of the ChatGPT API also fueled LangChain's rapid growth.
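To make the ReAct format concrete, here is an illustrative sketch of the kind of prompt skeleton it produces, written in plain JavaScript. The tool names here are hypothetical; LangChain's agents assemble an equivalent prompt for you from your registered tools.

```javascript
// Illustrative ReAct-style prompt skeleton; tool names are hypothetical.
const tools = ['search', 'calculator'];
const reactPrompt = [
  `Answer the question using the following tools: ${tools.join(', ')}.`,
  'Use this format, repeating Thought/Action/Observation as needed:',
  'Thought: reason about what to do next',
  `Action: one of [${tools.join(', ')}]`,
  'Action Input: the input to the chosen tool',
  'Observation: the result returned by the tool',
  'Final Answer: the answer to the original question',
].join('\n');
console.log(reactPrompt);
```

The model fills in each `Thought`/`Action` pair, the framework executes the tool and injects the `Observation`, and the loop repeats until the model emits a `Final Answer`.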
Basic Definitions
Key concepts in LangChain.js include Chats, Templates, Tools, and Chains.
Chat Messages
LangChain supports several message types with variable substitution:
HumanChatMessage – represents a user’s input (question, command, or request).
AIChatMessage – represents the model’s reply to a HumanChatMessage.
SystemChatMessage – sets the conversation background or role, telling the model how to behave.
<code>import { ChatOpenAI } from 'langchain/chat_models/openai';
import { HumanChatMessage, AIChatMessage, SystemChatMessage } from 'langchain/schema';
// Initialize ChatOpenAI model
const chatModel = new ChatOpenAI({
openAIApiKey: 'your-api-key',
});
// Define conversation messages
const messages = [
new SystemChatMessage("You are a quantum‑physics professor answering scientific questions."),
new HumanChatMessage("What is quantum entanglement?"),
];
// Generate a response
const response = await chatModel.call(messages);
console.log(response.text);
</code>
Templates (PromptTemplate)
PromptTemplate is a core component that creates and manages prompts for LLMs. It lets developers build dynamic, reusable templates that combine user input with predefined content through variable substitution; variables can also be pre‑filled with partial values.
<code>import { PromptTemplate } from 'langchain/prompts';
// Define a template with variable placeholders
const prompt = new PromptTemplate({
template: "Hello, my name is {name}. Today I want to discuss {topic}.",
inputVariables: ["name", "topic"],
});
// Fill in all variables at once
const finalPromptWithTopic = await prompt.format({ name: "Andi Yang", topic: "Quantum Computing" });
console.log(finalPromptWithTopic);
// Pre-fill topic with partial(), then supply the remaining variable later
const partialPrompt = await prompt.partial({ topic: "Quantum Computing" });
const finalPromptFromPartial = await partialPrompt.format({ name: "Andi Yang" });
console.log(finalPromptFromPartial);
</code>
LangChain also provides PipelinePromptTemplate, which composes multiple templates: each named sub‑prompt is formatted first, and its output string is fed as a variable into a final prompt.
<code>import { PromptTemplate, PipelinePromptTemplate } from 'langchain/prompts';
const introPrompt = new PromptTemplate({
template: "Write an introduction about the following content:\n{content}",
inputVariables: ["content"],
});
const stylePrompt = new PromptTemplate({
template: "Use a {tone} tone throughout.",
inputVariables: ["tone"],
});
const finalPrompt = new PromptTemplate({
template: "{intro}\n{style}\nNow write the article.",
inputVariables: ["intro", "style"],
});
const pipeline = new PipelinePromptTemplate({
pipelinePrompts: [
{ name: "intro", prompt: introPrompt },
{ name: "style", prompt: stylePrompt },
],
finalPrompt,
});
const formatted = await pipeline.format({ content: "Quantum computing is reshaping the future of computation.", tone: "accessible" });
console.log(formatted);
</code>
Tools
Tools are core components for building complex workflows. They can be functions, API calls, database queries, etc., allowing a language model to dynamically invoke external services.
Name – identifies the tool.
Description – explains what the tool does.
Function – the actual implementation logic executed when the agent invokes the tool.
<code>import { OpenAI } from 'langchain/llms/openai';
import { DynamicTool } from 'langchain/tools';
import { initializeAgentExecutorWithOptions } from 'langchain/agents';
// Initialize OpenAI model
const model = new OpenAI({ openAIApiKey: 'your-api-key', temperature: 0 });
// Simple weather tool
const weatherTool = new DynamicTool({
name: 'getWeather',
description: 'Retrieve current weather for a given city. Input should be a city name.',
func: async (input) => {
const weatherData = {
'Beijing': 'Sunny 25°C',
'Shanghai': 'Light rain 22°C',
};
return weatherData[input] || 'Weather data not available';
},
});
// Create an agent that selects tools based on the input
const executor = await initializeAgentExecutorWithOptions([weatherTool], model, {
agentType: 'zero-shot-react-description',
});
const response = await executor.call({ input: "What's the weather in Beijing?" });
console.log(response.output);
</code>
Chains
Chains link multiple steps into a workflow. Each step can be a model call, tool, API request, etc., allowing complex tasks to be broken into simple, ordered operations.
Common chain types include:
SimpleSequentialChain – linear execution where each step takes a single input and produces a single output.
SequentialChain – multiple modules executed in order, each output feeding the next.
LLMChain – specifically calls a language model, often paired with PromptTemplate.
RouterChain – dynamically selects different chains based on input conditions.
TransformChain – performs data transformations between steps.
<code>import { OpenAI } from 'langchain/llms/openai';
import { PromptTemplate } from 'langchain/prompts';
import { LLMChain, SequentialChain } from 'langchain/chains';
const model = new OpenAI({ openAIApiKey: 'your-api-key' });
const titlePrompt = new PromptTemplate({
template: "Generate a title for the following content:\n{content}",
inputVariables: ["content"],
});
const summaryPrompt = new PromptTemplate({
template: "Based on this title, generate a short summary:\n{title}",
inputVariables: ["title"],
});
const chain = new SequentialChain({
chains: [
new LLMChain({ llm: model, prompt: titlePrompt, outputKey: "title" }), // step 1: generate the title
new LLMChain({ llm: model, prompt: summaryPrompt, outputKey: "summary" }), // step 2: summarize it
],
inputVariables: ["content"],
outputVariables: ["summary"],
});
const response = await chain.call({ content: "Quantum computing leverages quantum mechanics for computation." });
console.log(response);
</code>
LangChain also supports converting content to vector embeddings for similarity search, memory storage, and state tracking.
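Under the hood, similarity search works by comparing embedding vectors, most commonly with cosine similarity. A minimal plain‑JavaScript sketch of the metric (real embeddings would come from an embedding model rather than the toy vectors used here):

```javascript
// Cosine similarity between two embedding vectors of equal length.
function cosineSimilarity(a, b) {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Parallel vectors score 1; orthogonal vectors score 0.
console.log(cosineSimilarity([1, 2, 3], [2, 4, 6])); // 1
console.log(cosineSimilarity([1, 0], [0, 1])); // 0
```

A vector store applies this comparison between a query embedding and every stored document embedding, returning the closest matches.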
LangChain Expression Language (LCEL)
LCEL is a declarative way to compose LangChain.js components, letting developers describe and run operations in a concise, intuitive way.
It enables developers to define custom data flows and workflows, allowing model calls, tool functions, and API requests to be flexibly combined.
Core Features of LCEL
Define Data Flow : Clearly specify how data moves between modules.
Flexible Composition : Chain together different module types (models, tools, data processors).
Conditional Logic : Add if‑else branches to adjust the workflow based on inputs or results.
Modular Design : Build reusable modules (runnables) that can be combined and re‑used.
LCEL expressions describe how runnable modules (e.g., prompt templates, model calls, plain functions) are linked, typically via RunnableSequence or pipe().
Example: generate a summary, then an introduction based on that summary.
<code>import { OpenAI } from 'langchain/llms/openai';
import { PromptTemplate } from 'langchain/prompts';
import { RunnableSequence } from 'langchain/schema/runnable';
const model = new OpenAI({ openAIApiKey: 'your-api-key' });
const summaryPrompt = new PromptTemplate({
template: "Generate a short summary for the following content:\n{content}",
inputVariables: ["content"],
});
const introPrompt = new PromptTemplate({
template: "Based on this summary, write a brief introduction:\n{summary}",
inputVariables: ["summary"],
});
const chain = RunnableSequence.from([
summaryPrompt,
model,
(summary) => ({ summary }), // map the model's string output into the next prompt's input variable
introPrompt,
model,
]);
const response = await chain.invoke({ content: "Quantum computing uses quantum mechanics for computation." });
console.log(response);
</code>
Conditional logic can also be added:
<code>import { OpenAI } from 'langchain/llms/openai';
import { RunnableSequence } from 'langchain/schema/runnable';
const model = new OpenAI({ openAIApiKey: 'your-api-key' });
const chain = RunnableSequence.from([
async (input) => {
// Route the request: build a different instruction depending on the input
if (input.includes("weather")) {
return `Answer this weather question: ${input}`;
}
return `Answer this general question: ${input}`;
},
model,
]);
const response = await chain.invoke("Tell me the weather in New York.");
console.log(response);
</code>
LCEL provides a flexible, modular way to manage complex tasks, making it valuable for text processing, retrieval‑augmented generation (RAG), intelligent agents, and more.
LangChain also supports RAG (Retrieval‑Augmented Generation), a major direction for deploying AI in enterprises by feeding company data into large models for more accurate answers.
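The core RAG loop itself is simple: retrieve relevant documents, stuff them into the prompt as context, then call the model. A minimal sketch, where `retrieve` and `llm` are hypothetical placeholders standing in for a real vector‑store query and a real model call:

```javascript
// Minimal RAG flow sketch. retrieve() and llm() are hypothetical
// placeholders; wire them to a real vector store and model.
async function answerWithRag(question, retrieve, llm) {
  const docs = await retrieve(question); // 1. fetch relevant documents
  const context = docs.join('\n'); // 2. stuff them into the prompt as context
  const prompt = `Answer using only this context:\n${context}\n\nQuestion: ${question}`;
  return llm(prompt); // 3. ask the model
}
```

In LangChain terms this corresponds to a retriever piped into a prompt template and a model, which is exactly the kind of composition LCEL is designed for.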
Conclusion
LangChain is a valuable tool for developing AI applications; its decomposition and encapsulation of large‑model calls are worth learning and adopting.
Code Mala Tang
Read source code together, write articles together, and enjoy spicy hot pot together.