Building a Tool-Calling Agent from Scratch with LangChain.js

This tutorial walks through creating a fully functional Tool-Calling Agent using LangChain.js, covering tool definition, model binding, manual execution loops, the high‑level createReactAgent API, streaming responses, state management with thread IDs, common pitfalls, and a complete runnable example.

James' Growth Diary

What Is Tool-Calling?

Tool-Calling is the bridge that lets an LLM go from “just talking” to “taking action”. It equips the model with a keyboard, mouse, and phone so it can query databases, send emails, or call APIs instead of merely suggesting them.

The core mechanism: you send the tool name, parameter schema, and description to the model; the model decides whether to call a tool, which one, and with what arguments, returning structured JSON. You execute the tool, feed the result back, and the model produces the final answer.
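This round trip can be pictured as three plain messages. The shapes, names, and the `call_001` id below are illustrative stand-ins, not real provider output:

```typescript
// 1. The model answers with a structured tool call instead of text
//    (real ids are generated by the provider):
const assistantTurn = {
  role: "assistant",
  content: "",
  tool_calls: [{ id: "call_001", name: "get_weather", args: { city: "Beijing" } }],
};

// 2. Your code runs the tool and reports back, echoing the same id:
const toolTurn = {
  role: "tool",
  tool_call_id: assistantTurn.tool_calls[0].id,
  content: "Beijing: 22°C, sunny",
};

// 3. With the result now in the history, the model writes the final answer:
const finalTurn = {
  role: "assistant",
  content: "It's 22°C and sunny in Beijing right now.",
};
```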

Step 1: Define Tools (the most error‑prone part)

The quality of the tool description directly determines whether the Agent uses the tool correctly.

LangChain.js supports two ways to declare tools: the recommended tool() function and the older Tool class.

import { tool } from "@langchain/core/tools";
import { z } from "zod";

// ✅ Recommended: tool() + Zod schema
const getWeather = tool(
  async ({ city, unit = "celsius" }) => {
    // mock data for illustration
    const mockData: Record<string, number> = {
      "Beijing": 22,
      "Shanghai": 28,
      "Guangzhou": 35,
    };
    const temp = mockData[city] ?? 20;
    return `Current temperature in ${city}: ${temp}°${unit === "celsius" ? "C" : "F"}, sunny`;
  },
  {
    name: "get_weather",
    // description tells the model when to use the tool
    description: "Gets the current weather for a given city. Use when the user asks about weather, temperature, or whether to bring an umbrella.",
    schema: z.object({
      city: z.string().describe("City name, e.g. Beijing or Shanghai"),
      unit: z.enum(["celsius", "fahrenheit"]).optional().describe("Temperature unit, defaults to celsius"),
    }),
  }
);

const searchDatabase = tool(
  async ({ query, limit = 10 }) => {
    // simulate a DB query
    return JSON.stringify([
      { id: 1, name: "Product A", sales: 1200 },
      { id: 2, name: "Product B", sales: 890 },
    ]);
  },
  {
    name: "search_database",
    description: "Queries the business database. Use when the user asks for sales figures, product information, or user data. Not for real-time external information such as weather or news.",
    schema: z.object({
      query: z.string().describe("Search keywords or a description of SQL conditions"),
      limit: z.number().optional().describe("Number of rows to return, defaults to 10"),
    }),
  }
);

Two key elements must appear in the description:

Trigger scenario – when the tool should be used.

Exclusion – when the tool must not be used.
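For example, a description covering both elements might be assembled like this. The wording and the order-system domain are made up for illustration:

```typescript
// Trigger scenario + exclusion, built explicitly so neither is forgotten:
const whatItDoes = "Queries the internal order system.";
const trigger = "Use when the user asks about order status, refunds, or shipping.";
const exclusion = "Do NOT use for product search, weather, or news.";

const description = [whatItDoes, trigger, exclusion].join(" ");
```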

[Diagram: tool definition and description conventions]

Step 2: Bind Tools to the Model

bindTools() tells the model which tools are available; it only declares them to the model and executes nothing.

import { ChatOpenAI } from "@langchain/openai";

const model = new ChatOpenAI({
  model: "gpt-4o",
  temperature: 0, // recommended for Agent scenarios
});

// Bind the tool list – this only informs the model, it does not execute anything
const modelWithTools = model.bindTools([getWeather, searchDatabase]);

// Test: invoke directly to see the model's tool call response
const response = await modelWithTools.invoke("What's the weather like in Beijing today?");
console.log(response.tool_calls);
// Expected output:
// [{ name: 'get_weather', args: { city: 'Beijing' }, id: 'call_abc123' }]

When the model returns tool_calls, the content field is usually empty – the model is saying “I need to use a tool”. Your code must execute the tool and feed the result back.

Step 3: Manually Implement the Tool Execution Loop

Understanding the low‑level loop helps when you later switch to the higher‑level API.

import { HumanMessage, AIMessage, ToolMessage } from "@langchain/core/messages";

async function runAgentManually(userInput: string) {
  const messages = [new HumanMessage(userInput)];
  const tools = [getWeather, searchDatabase];
  const toolMap = Object.fromEntries(tools.map(t => [t.name, t]));

  // Prevent infinite loops
  for (let i = 0; i < 10; i++) {
    const response = await modelWithTools.invoke(messages);
    messages.push(response);

    // No tool_calls → final answer
    if (!response.tool_calls || response.tool_calls.length === 0) {
      console.log("Final answer:", response.content);
      break;
    }

    // Execute each tool call
    for (const toolCall of response.tool_calls) {
      const targetTool = toolMap[toolCall.name];
      if (!targetTool) {
        messages.push(new ToolMessage({
          tool_call_id: toolCall.id!,
          content: `Error: tool ${toolCall.name} does not exist`,
        }));
        continue;
      }
      try {
        const result = await targetTool.invoke(toolCall.args);
        messages.push(new ToolMessage({
          tool_call_id: toolCall.id!,
          content: result,
        }));
      } catch (err) {
        messages.push(new ToolMessage({
          tool_call_id: toolCall.id!,
          content: `Tool execution failed: ${(err as Error).message}`,
        }));
      }
    }
  }
  return messages;
}

await runAgentManually("Is it a good day to go out in Beijing?");

Three critical details in the loop:

Each tool_call_id must match the ID returned by the model so the model can associate the result.

Tool errors should be wrapped in a ToolMessage instead of throwing.

Set an iteration limit to avoid endless calls.
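The id-matching and error-wrapping rules can be exercised in isolation, with plain async functions standing in for LangChain tools. Everything below is a sketch of the mechanics, not the library's API:

```typescript
type ToolCall = { id: string; name: string; args: Record<string, unknown> };
type ToolMsg = { tool_call_id: string; content: string };

// Plain async functions stand in for real tools here:
const toolMap: Record<string, (args: any) => Promise<string>> = {
  get_weather: async ({ city }) => `${city}: 22°C, sunny`,
};

async function executeToolCalls(toolCalls: ToolCall[]): Promise<ToolMsg[]> {
  const results: ToolMsg[] = [];
  for (const call of toolCalls) {
    let content: string;
    try {
      const fn = toolMap[call.name];
      // An unknown tool and a runtime failure both become a message, never a throw
      content = fn ? await fn(call.args) : `Error: tool ${call.name} does not exist`;
    } catch (err) {
      content = `Tool execution failed: ${(err as Error).message}`;
    }
    results.push({ tool_call_id: call.id, content }); // echo the model's id exactly
  }
  return results;
}
```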

[Sequence diagram: the tool-calling execution loop]

Step 4: Simplify with createReactAgent

The manual loop is verbose; createReactAgent packages the logic.

import { createReactAgent } from "@langchain/langgraph/prebuilt";
import { MemorySaver } from "@langchain/langgraph";

const checkpointer = new MemorySaver();

const agent = createReactAgent({
  llm: model,
  tools: [getWeather, searchDatabase],
  checkpointSaver: checkpointer,
  // System prompt tells the Agent how to act
  stateModifier: `You are a helpful assistant.
Before answering, decide whether a tool lookup is needed.
If real-time data (weather, database) is required, call the matching tool first; never guess.
After a tool returns, summarize the result in natural language instead of pasting raw JSON.`,
});

const result = await agent.invoke(
  { messages: [new HumanMessage("What's the weather like in Beijing today? Is it good for outdoor sports?")] },
  { configurable: { thread_id: "user-001" } }
);
const finalMessage = result.messages[result.messages.length - 1];
console.log(finalMessage.content);
// Output: It's 22°C and sunny in Beijing today – great for outdoor sports!
createReactAgent automatically:

Loops until no tool_calls remain.

Maintains message history.

Persists conversations per thread_id when combined with a checkpoint saver.

Step 5: Handle Streaming Output

In production you should stream the response; otherwise users stare at a blank screen while the model reasons and tools run.

async function streamAgent(userInput: string, threadId: string) {
  const stream = agent.streamEvents(
    { messages: [new HumanMessage(userInput)] },
    { version: "v2", configurable: { thread_id: threadId } }
  );

  for await (const event of stream) {
    switch (event.event) {
      case "on_chat_model_stream": {
        const chunk = event.data?.chunk?.content;
        if (chunk) process.stdout.write(chunk);
        break;
      }
      case "on_tool_start":
        console.log(`\n[Calling tool: ${event.name}]`);
        break;
      case "on_tool_end":
        console.log(`[Tool finished: ${event.name}]`);
        break;
    }
  }
}

await streamAgent("Which city is hotter, Shanghai or Beijing?", "user-002");
// Example output shows tool start/end events and the final comparative answer.

When the model queries two cities, it calls get_weather twice in series – this is expected.

[Diagram: streaming output and tool call events]

Step 6: Manage Tool Call State Across Turns

In multi‑turn dialogs, the tool execution state must follow the same thread_id to avoid mixing conversations.

import { PostgresSaver } from "@langchain/langgraph-checkpoint-postgres";

const pgCheckpointer = PostgresSaver.fromConnString(process.env.DATABASE_URL!);
await pgCheckpointer.setup();

const persistentAgent = createReactAgent({
  llm: model,
  tools: [getWeather, searchDatabase],
  checkpointSaver: pgCheckpointer,
  stateModifier: "You are an assistant with memory; remember what the user has told you.",
});

// First turn – user says they are in Beijing
await persistentAgent.invoke(
  { messages: [new HumanMessage("I'm in Beijing")] },
  { configurable: { thread_id: "session-abc" } }
);

// Second turn – ask about weather; the agent remembers the location
const response = await persistentAgent.invoke(
  { messages: [new HumanMessage("What's the weather like today?")] },
  { configurable: { thread_id: "session-abc" } }
);
console.log(response.messages.at(-1)?.content);
// Output: It's 22°C and sunny in Beijing today ← correct, no need to ask for the location again

Do NOT generate a random thread_id each call; use a stable identifier per user/session.
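One simple approach is to derive the id from identifiers you already have; the naming scheme below is just an example:

```typescript
// Derive the thread id from stable identifiers instead of Math.random():
function threadIdFor(userId: string, conversationId: string): string {
  return `${userId}:${conversationId}`;
}

// Every turn of the same conversation maps to the same thread,
// so the checkpointer sees one continuous history:
const turn1 = threadIdFor("user-42", "support");
const turn2 = threadIdFor("user-42", "support");
```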

Common Pitfalls (Learned the Hard Way)

Pitfall 1: In a Zod schema, call .describe() before .optional(). Both orders compile, but when .describe() comes last the description lands on the optional wrapper, where some JSON Schema converters drop it, so the model may never see it.

// ❌ Description attaches to the wrapper and may be lost
z.string().optional().describe("description");
// ✅ Describe the inner type, then mark it optional
z.string().describe("description").optional();

Pitfall 2: Tool functions must return a string. Returning raw objects leads to uncontrolled JSON serialization.

// ❌ Returns object directly
return { result: [...] };
// ✅ Serialize explicitly
const data = await queryDB(query);
return JSON.stringify(data, null, 2);
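The reason is easy to demonstrate: when a non-string return value is coerced to text, JavaScript's default conversion destroys the data:

```typescript
const data = { result: [1, 2, 3] };

const coerced = String(data);          // default coercion loses everything
const explicit = JSON.stringify(data); // keeps the structure intact

// coerced  is "[object Object]" – useless to the model
// explicit is '{"result":[1,2,3]}' – explicit and predictable
```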

Pitfall 3: forEach does not await async callbacks, so the loop returns before the tool calls finish; use for...of for serial execution or Promise.all for parallel.

// ❌ forEach fires the calls but never waits for them
response.tool_calls.forEach(async call => { await tool.invoke(call.args); });
// ✅ Serial execution
for (const call of response.tool_calls) { await tool.invoke(call.args); }
// ✅ Parallel execution
const results = await Promise.all(response.tool_calls.map(call => tool.invoke(call.args)));

Pitfall 4: Leaving temperature at the default makes the Agent's tool-calling decisions inconsistent across runs.

// ❌ Default temperature (unstable)
const model = new ChatOpenAI({ model: "gpt-4o" });
// ✅ Temperature 0 for consistent tool usage
const model = new ChatOpenAI({ model: "gpt-4o", temperature: 0 });

Pitfall 5: Omitting a system prompt that forces tool usage lets the model answer directly without calling the tool.

// ❌ No constraint
"You are an assistant that helps users answer questions."
// ✅ Explicit requirement
"Before answering questions involving real-time data, you must call a tool to fetch real data; guessing is not allowed."
[Diagram: common pitfalls and their fixes]

Full Runnable Example

Combine all fragments into a minimal working Tool-Calling Agent:

import { tool } from "@langchain/core/tools";
import { ChatOpenAI } from "@langchain/openai";
import { createReactAgent } from "@langchain/langgraph/prebuilt";
import { MemorySaver } from "@langchain/langgraph";
import { HumanMessage } from "@langchain/core/messages";
import { z } from "zod";

// 1. Define tool
const getWeather = tool(
  async ({ city }) => `Current temperature in ${city}: 22°C, sunny`,
  {
    name: "get_weather",
    description: "Gets the weather for a given city. Use for questions about weather, temperature, or travel advice.",
    schema: z.object({ city: z.string().describe("City name") }),
  }
);

// 2. Initialize model
const model = new ChatOpenAI({ model: "gpt-4o", temperature: 0 });

// 3. Create Agent
const agent = createReactAgent({
  llm: model,
  tools: [getWeather],
  checkpointSaver: new MemorySaver(),
  stateModifier: "You are a helpful assistant. For anything involving real-time data, call a tool first; guessing is not allowed.",
});

// 4. Invoke
const result = await agent.invoke(
  { messages: [new HumanMessage("What's the weather like in Beijing today?")] },
  { configurable: { thread_id: "demo-001" } }
);
console.log(result.messages.at(-1)?.content);

Pre‑Release Checklist

Each tool description includes trigger scenario and exclusion.

Tool return values are strings (use JSON.stringify if needed).

Model temperature set to 0.

System prompt explicitly requires tool usage for real‑time data.

Tool execution errors are wrapped in ToolMessage, not thrown.

Use a fixed thread_id for multi‑turn memory.

Replace MemorySaver with PostgresSaver in production.

Loop has an upper bound to prevent infinite calls.

Conclusion

We walked through the complete implementation path of a Tool-Calling Agent:

Tool definition – a clear description determines correct usage.

bindTools – informs the LLM about available tools.

Manual loop – reveals the underlying mechanism of model → tool call → result → model.

createReactAgent – a high‑level wrapper that reduces boilerplate.

Streaming output – provides a responsive user experience.

State management – with MemorySaver or PostgresSaver for persistent multi‑turn memory.

Checklist – eight practical items to verify before deployment.

Understanding this loop gives you mastery over Tool-Calling; the next article will dissect the ReAct pattern and explain how adding Thought/Observation boosts an Agent's reasoning power.

