
Enhancing Large Language Models with LangChain: Prompt Engineering, Chains, Agents, and Node.js Implementation

This article explains the limitations of large language models, introduces prompt engineering as a remedy, and provides a comprehensive guide to using the LangChain framework—including models, prompts, chains, agents, vector search, and practical Node.js code examples—to enable LLMs to interact with external tools and data sources.

Rare Earth Juejin Tech Community

Large language models (LLMs) such as ChatGPT have powerful generative abilities but suffer from four main drawbacks: outdated knowledge, tendency to hallucinate, lack of precise logical computation, and inability to interact with external systems. Prompt engineering can mitigate these issues by adding context and explicit instructions to the model's input.
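As a minimal illustration (plain JavaScript, no LangChain), prompt engineering amounts to wrapping the user's question with explicit instructions and context before it reaches the model. The wording and the context string below are illustrative placeholders:

```javascript
// Sketch: build a context-grounded prompt instead of sending the raw question.
function buildPrompt(context, question) {
  return [
    "Answer using ONLY the context below.",
    "If the answer is not in the context, say you don't know.",
    `Context: ${context}`,
    `Question: ${question}`,
  ].join("\n");
}

const prompt = buildPrompt(
  "The store opens at 09:00 and closes at 21:00.",
  "When does the store close?"
);
console.log(prompt);
```

Sending this assembled prompt instead of the bare question is what constrains the model to the supplied facts.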

LangChain is an AI‑development framework designed to give LLMs the missing "arms" by connecting them to external APIs, databases, and other tools. Its core entities are:

Models – the underlying LLM (e.g., OpenAI, LLaMA, BLOOM).

Prompts – templates that combine user input with static context.

Chains – sequences that combine prompts with external calls (e.g., APIChain, SqlDatabaseChain, RetrievalQAChain).

Agents – dynamic controllers that choose tools (vector search, calculators, etc.) based on the model's reasoning.

Using LangChain in Node.js

Install the library:

yarn add langchain
# or
npm i langchain

Create an LLM instance and call it:

import { OpenAI } from 'langchain/llms/openai';

const model = new OpenAI();
const resA = await model.call('Give a good name for a pizza restaurant.');
console.log(resA);

Build a reusable prompt template:

import { PromptTemplate } from 'langchain/prompts';

const template = 'Give a good name for a {restaurantType} restaurant.';
const promptA = new PromptTemplate({ template, inputVariables: ['restaurantType'] });
const formattedPrompt = await promptA.format({ restaurantType: 'Sichuan cuisine' });
const resA = await model.call(formattedPrompt);
console.log(resA);
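Under the hood, PromptTemplate.format is essentially string interpolation over named slots. A simplified plain-JavaScript sketch of that behavior (not LangChain's actual implementation):

```javascript
// Sketch: replace each {variable} slot in the template with its value,
// throwing if a required input variable is missing.
function formatTemplate(template, values) {
  return template.replace(/\{(\w+)\}/g, (match, name) => {
    if (!(name in values)) throw new Error(`Missing input variable: ${name}`);
    return values[name];
  });
}

const template = "Give a good name for a {restaurantType} restaurant.";
console.log(formatTemplate(template, { restaurantType: "Sichuan cuisine" }));
// → "Give a good name for a Sichuan cuisine restaurant."
```

The value of the template object over raw string concatenation is reuse: one template, many inputs.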

Example of an APIChain that fetches real‑time weather:

import { OpenAI } from "langchain/llms/openai";
import { APIChain } from "langchain/chains";
const OPEN_METEO_DOCS = `BASE URL: https://api.open-meteo.com/...`;
const model = new OpenAI();
const chain = APIChain.fromLLMAndAPIDocs(model, OPEN_METEO_DOCS, { headers: {} });
const res = await chain.call({ question: "What's the weather like in Shanghai today?" });
console.log({ res });

When the model lacks the needed context, it may hallucinate; feeding the relevant text as part of the prompt restores accuracy.

Vector Search and Text Splitting

To handle large documents, split them into chunks with MarkdownTextSplitter, embed each chunk (e.g., using OpenAIEmbeddings), and store them in a Faiss vector store:

import { MarkdownTextSplitter } from 'langchain/text_splitter';
import { OpenAIEmbeddings } from 'langchain/embeddings/openai';
import { FaissStore } from 'langchain/vectorstores/faiss';

const splitter = new MarkdownTextSplitter({ chunkSize: 100, chunkOverlap: 50 });
const output = await splitter.splitText(docText);
const metadata = output.map(() => ({ source: 'doc' })); // one metadata object per chunk
const embedding = new OpenAIEmbeddings();
const vectorStore = await FaissStore.fromTexts(output, metadata, embedding);
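chunkSize and chunkOverlap control how the splitter windows the text. A simplified plain-JavaScript sketch of character-based splitting with overlap (MarkdownTextSplitter additionally respects Markdown structure, which this sketch ignores):

```javascript
// Sketch: fixed-size character chunks with overlap, so content cut at a
// chunk boundary also appears at the start of the next chunk.
function splitWithOverlap(text, chunkSize, chunkOverlap) {
  const step = chunkSize - chunkOverlap;
  const chunks = [];
  for (let start = 0; start < text.length; start += step) {
    chunks.push(text.slice(start, start + chunkSize));
    if (start + chunkSize >= text.length) break;
  }
  return chunks;
}

const chunks = splitWithOverlap("a".repeat(250), 100, 50);
console.log(chunks.map((c) => c.length)); // → [ 100, 100, 100, 100 ]
```

Overlap costs extra storage but reduces the chance that a sentence relevant to a query is split across two chunks and matched by neither.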

Search the most relevant chunks:

const topK = 3;
const searchRes = await vectorStore.similaritySearchWithScore('number of gaokao applicants in Hunan', topK);
// returns the three most relevant passages with their scores
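Conceptually, similaritySearchWithScore embeds the query and ranks the stored chunks by vector similarity. A minimal plain-JavaScript sketch using cosine similarity over toy 2-D vectors (real embeddings come from a model such as OpenAIEmbeddings and have hundreds of dimensions):

```javascript
// Sketch: rank stored (text, vector) pairs by cosine similarity to a query vector.
function cosine(a, b) {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

function topKSearch(store, queryVec, k) {
  return store
    .map(({ text, vector }) => [text, cosine(vector, queryVec)])
    .sort((x, y) => y[1] - x[1])
    .slice(0, k);
}

// Toy "embeddings" for illustration only.
const store = [
  { text: "exam registration numbers", vector: [1, 0] },
  { text: "weather report", vector: [0, 1] },
  { text: "exam dates", vector: [0.9, 0.1] },
];
console.log(topKSearch(store, [1, 0.05], 2));
```

Because similar meanings map to nearby vectors, the top-k results are the passages most likely to contain the answer, and those are what get stuffed into the prompt.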

Agents with Tools

Define tools such as a vector‑search tool and a calculator tool, then create an agent executor that iteratively thinks, selects a tool, observes the result, and finally produces an answer:

import { Tool, VectorStoreQATool } from 'langchain/tools';
import { initializeAgentExecutorWithOptions } from 'langchain/agents';
import { Parser } from 'expr-eval';

class Calculator extends Tool {
  name = 'calculator';
  description = 'A tool for evaluating mathematical expressions.';
  async _call(input) {
    try { return Parser.evaluate(input).toString(); }
    catch { return "I don't know how to do that."; }
  }
}

const tools = [
  new VectorStoreQATool('vector-search', 'Knowledge search tool', { llm: llmA, vectorStore }),
  new Calculator(),
];
const executor = await initializeAgentExecutorWithOptions(tools, llmA, { agentType: 'zero-shot-react-description', verbose: true });
const result = await executor.call({ input: 'the number of gaokao applicants in Hunan plus the number of gaokao applicants in Gansu' });
// Final Answer: 931848

The agent first uses the vector‑search tool to retrieve the numbers, then the calculator tool to add them, handling expression errors by re‑formatting the numbers.
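That think → act → observe control flow can be sketched in plain JavaScript. Here the tool choices are a hard-coded "plan" standing in for the LLM's reasoning, and the numbers are toy placeholders, not the real registration figures:

```javascript
// Sketch of an agent loop: each step either calls a named tool or finishes.
// The tools and scripted plan are illustrative stand-ins for the LLM.
const tools = {
  "vector-search": (q) => (q.includes("Hunan") ? "100" : "23"),
  calculator: (expr) => String(Function(`return (${expr})`)()), // stand-in for expr-eval's Parser
};

function runAgent(plan) {
  let observation = null;
  for (const step of plan) {
    if (step.finish) return observation;        // Final Answer is the last observation
    observation = tools[step.tool](step.input); // Action -> Observation
  }
  return observation;
}

const answer = runAgent([
  { tool: "vector-search", input: "Hunan applicant count" },
  { tool: "vector-search", input: "Gansu applicant count" },
  { tool: "calculator", input: "100 + 23" },
  { finish: true },
]);
console.log(answer); // → "123"
```

In a real agent executor, the plan is not fixed: after each observation, the LLM decides the next action, which is what lets it recover when a tool call fails.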

Challenges and Summary

Customization: built‑in chains and agents may not cover all business scenarios, requiring custom implementations.

Success rate & speed: complex multi‑step agents can be unstable or slow, especially when external APIs are involved.

Data security: using proprietary LLMs may raise privacy concerns; open‑source models need extra prompt engineering.

Overall, LangChain provides a low‑cost way to give LLMs real‑world interaction capabilities, allowing developers—especially those familiar with JavaScript/Node.js—to build intelligent applications without deep AI expertise.

Tags: LLM, Prompt Engineering, LangChain, Node.js, Vector Search, AI Development, Agents
Written by

Rare Earth Juejin Tech Community

Juejin, a tech community that helps developers grow.
