How ReAct Enables Agents to Think While Acting

This article explains the ReAct pattern (interleaving reasoning and acting) for LLM agents: it defines the core loop, contrasts it with plain tool calling, walks step by step through a hand-written TypeScript implementation, shows the LangChain.js wrapper and streaming output, and closes with five common pitfalls and a pre-deployment checklist.

James' Growth Diary

One‑Sentence Definition

ReAct = Reasoning + Acting – an iterative loop where the model writes a Thought, performs an Action (tool call), receives an Observation, and repeats until it outputs a Final Answer.

Why ReAct Beats Simple Tool‑Calling

Plain tool‑calling executes a single call and returns the result, e.g.:

1. getWeather("Beijing")
2. return result
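In code, this single-shot flow looks roughly like the following (a minimal sketch; the stubbed `getWeather` stands in for any real tool):

```typescript
// Plain tool-calling: one call, raw result returned, no intermediate reasoning.
function getWeather(city: string): string {
  // Stubbed data; a real tool would hit a weather API.
  return city === "Beijing"
    ? "Cloudy turning to showers, 18°C, 75% chance of rain"
    : "No data";
}

function answerWithSingleCall(city: string): string {
  // The model picks a tool once; its output becomes the answer as-is.
  return getWeather(city);
}
```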

ReAct interleaves reasoning and observation, producing a trace such as:

Thought: User asks about Beijing weather and whether to bring an umbrella.
Action: getWeather({"city": "Beijing"})
Observation: Beijing cloudy turning to showers, 18°C, 75% chance of rain.
Thought: 75% chance is high, recommend an umbrella.
Final Answer: Beijing will have showers with a 75% chance of rain; bring an umbrella.

The extra Thought and Observation steps make the process explainable and correctable.

Core ReAct Loop

The fixed cycle is:

Thought → Action → Observation → Thought → Action → Observation → … → Final Answer

Each round consists of:

Thought : LLM writes its current reasoning and next plan.

Action : LLM decides which tool to call and supplies JSON parameters.

Observation : The tool returns a result that is injected back into the context.

The loop ends when the model outputs Final Answer:.

Building a ReAct Agent from Scratch

Step 1 – Define Tools

// tools.ts
export interface Tool {
  name: string;
  description: string;
  execute: (input: string) => Promise<string>;
}

const weatherTool: Tool = {
  name: "getWeather",
  description: 'Look up a city\'s weather. Input format: {"city": "<city name>"}',
  async execute(input) {
    const { city } = JSON.parse(input);
    // Stubbed data keyed by city name; a real tool would call a weather API.
    const fakeData: Record<string, string> = {
      "Beijing": "Cloudy turning to showers, 18°C, 75% chance of rain",
      "Shanghai": "Sunny, 23°C, 10% chance of rain",
    };
    return fakeData[city] ?? "No weather data for that city";
  },
};

const calcTool: Tool = {
  name: "calculate",
  description: 'Evaluate a math expression. Input format: {"expression": "<math expression>"}',
  async execute(input) {
    const { expression } = JSON.parse(input);
    try {
      // Note: eval is acceptable for a demo but unsafe for untrusted input.
      return String(eval(expression));
    } catch {
      return "Calculation error: invalid expression";
    }
  },
};

export const tools = [weatherTool, calcTool];

Step 2 – Build the System Prompt

// react-prompt.ts
import { Tool } from "./tools";

export function buildReActSystemPrompt(tools: Tool[]): string {
  const toolDescriptions = tools
    .map(t => `- ${t.name}: ${t.description}`)
    .join("\n");
  return `You are an AI assistant that can use tools. Follow the format below strictly when answering.

You have access to the following tools:
${toolDescriptions}

Response format (follow it strictly):
Thought: your reasoning, analyzing the current situation and planning the next step
Action: tool name
Action Input: {"param": "value"}

After the tool result arrives, continue with:
Observation: [the tool's result]
Thought: update your reasoning based on the observation
...(Thought/Action/Observation may repeat multiple times)

When you have enough information to answer:
Thought: I now have enough information
Final Answer: the final answer

Important rules:
1. Call only one tool at a time
2. Action Input must be valid JSON
3. Never fabricate an Observation; wait for the system to return one
4. Once the answer is certain, output Final Answer and call no more tools`;
}

Step 3 – Parse LLM Output

// react-parser.ts
export interface ReActStep {
  thought: string;
  action?: string;
  actionInput?: string;
  finalAnswer?: string;
}

export function parseReActResponse(text: string): ReActStep {
  const step: ReActStep = { thought: "" };
  const thoughtMatch = text.match(/Thought:\s*([\s\S]*?)(?=Action:|Final Answer:|$)/);
  if (thoughtMatch) step.thought = thoughtMatch[1].trim();
  const finalMatch = text.match(/Final Answer:\s*([\s\S]*?)$/);
  if (finalMatch) {
    step.finalAnswer = finalMatch[1].trim();
    return step;
  }
  const actionMatch = text.match(/Action:\s*(.+)/);
  const inputMatch = text.match(/Action Input:\s*([\s\S]*?)(?=Thought:|Observation:|$)/);
  if (actionMatch) {
    step.action = actionMatch[1].trim();
    step.actionInput = inputMatch ? inputMatch[1].trim() : "{}";
  }
  return step;
}
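A quick standalone sanity check of the parsing logic (the interface and function are reproduced here so the snippet runs on its own):

```typescript
interface ReActStep {
  thought: string;
  action?: string;
  actionInput?: string;
  finalAnswer?: string;
}

// Same regex-based extraction as react-parser.ts above.
function parseReActResponse(text: string): ReActStep {
  const step: ReActStep = { thought: "" };
  const thoughtMatch = text.match(/Thought:\s*([\s\S]*?)(?=Action:|Final Answer:|$)/);
  if (thoughtMatch) step.thought = thoughtMatch[1].trim();
  const finalMatch = text.match(/Final Answer:\s*([\s\S]*?)$/);
  if (finalMatch) {
    step.finalAnswer = finalMatch[1].trim();
    return step;
  }
  const actionMatch = text.match(/Action:\s*(.+)/);
  const inputMatch = text.match(/Action Input:\s*([\s\S]*?)(?=Thought:|Observation:|$)/);
  if (actionMatch) {
    step.action = actionMatch[1].trim();
    step.actionInput = inputMatch ? inputMatch[1].trim() : "{}";
  }
  return step;
}

// A tool-call turn yields action + actionInput.
const actionStep = parseReActResponse(
  'Thought: need the weather first\nAction: getWeather\nAction Input: {"city": "Beijing"}'
);

// A terminating turn yields finalAnswer.
const finalStep = parseReActResponse("Thought: enough info\nFinal Answer: bring an umbrella");
```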

Step 4 – Main Loop

// react-agent.ts
import { ChatOpenAI } from "@langchain/openai";
import { HumanMessage, SystemMessage, AIMessage } from "@langchain/core/messages";
import { tools } from "./tools";
import { buildReActSystemPrompt } from "./react-prompt";
import { parseReActResponse } from "./react-parser";

async function runReActAgent(userQuery: string, maxSteps = 10) {
  const llm = new ChatOpenAI({ model: "gpt-4o", temperature: 0 });
  const systemPrompt = buildReActSystemPrompt(tools);
  const messages = [
    new SystemMessage(systemPrompt),
    new HumanMessage(userQuery),
  ];

  console.log(`\n🤖 User question: ${userQuery}`);
  console.log("-".repeat(50));

  for (let step = 0; step < maxSteps; step++) {
    const response = await llm.invoke(messages);
    const responseText = response.content as string;
    console.log(`\n[Step ${step + 1}]`);
    console.log(responseText);

    const parsed = parseReActResponse(responseText);
    if (parsed.finalAnswer) {
      console.log("\n✅ Final answer:", parsed.finalAnswer);
      return parsed.finalAnswer;
    }
    if (parsed.action) {
      messages.push(new AIMessage(responseText));
      const tool = tools.find(t => t.name === parsed.action);
      let observation: string;
      if (!tool) {
        observation = `Tool "${parsed.action}" does not exist. Available tools: ${tools.map(t => t.name).join(", ")}`;
      } else {
        try {
          observation = await tool.execute(parsed.actionInput ?? "{}");
        } catch (e) {
          observation = `Tool execution error: ${e}`;
        }
      }
      console.log(`\nObservation: ${observation}`);
      messages.push(new HumanMessage(`Observation: ${observation}`));
    } else {
      console.warn("⚠️ Parse failure: the LLM did not follow the required format");
      break;
    }
  }
  return "Reached the maximum number of steps without producing a final answer";
}

runReActAgent("What's the weather in Beijing today? Should I bring an umbrella?");

LangChain.js Wrapper (5 Lines)

import { createReactAgent } from "@langchain/langgraph/prebuilt";
import { ChatOpenAI } from "@langchain/openai";
import { tool } from "@langchain/core/tools";
import { z } from "zod";

const getWeather = tool(
  async ({ city }: { city: string }) => `${city}: cloudy turning to showers, 18°C, 75% chance of rain`,
  {
    name: "getWeather",
    description: "Look up a city's weather for today; input is the city name",
    schema: z.object({ city: z.string().describe("City name, e.g. Beijing or Shanghai") }),
  }
);

const model = new ChatOpenAI({ model: "gpt-4o", temperature: 0 });
const agent = createReactAgent({ llm: model, tools: [getWeather] });
const result = await agent.invoke({ messages: [{ role: "user", content: "What's the weather in Beijing today? Should I bring an umbrella?" }] });
console.log(result.messages.at(-1)?.content);

Streaming Thought Process

Streaming lets you see each intermediate token instead of waiting for Final Answer:

const stream = await agent.stream({ messages: [{ role: "user", content: "Do I need an umbrella in Beijing today?" }] });
for await (const chunk of stream) {
  if (chunk.agent) {
    process.stdout.write(chunk.agent.messages[0].content as string);
  }
  if (chunk.tools) {
    console.log("\n🔧 Tool result:", chunk.tools.messages[0].content);
  }
}

Common Pitfalls

Missing step limit → infinite loops – set a recursion/step limit (e.g., 10) to abort after too many iterations.

Observation too long → context overflow – truncate or summarize observations that exceed a length threshold (e.g., 2000 characters).
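One way to enforce that cap before an observation re-enters the context (a minimal sketch; the 2000-character threshold follows the text, while truncating from the head and flagging the cut are assumptions):

```typescript
// Cap an observation before it is injected back into the message history.
const MAX_OBS_LENGTH = 2000; // threshold suggested in the text

function capObservation(observation: string, maxLength = MAX_OBS_LENGTH): string {
  if (observation.length <= maxLength) return observation;
  // Keep the head of the result and flag the truncation so the model
  // knows the observation is incomplete.
  return observation.slice(0, maxLength) + `\n...[truncated, ${observation.length} chars total]`;
}
```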

Vague tool description → wrong tool selection – include precise trigger and exclusion scenarios in the description.

Action Input JSON errors – strip possible markdown wrappers and use tolerant parsing (see safeParseInput example).

Not recording Thought trajectory – preserve the full Thought → Action → Observation sequence for debugging.
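The `safeParseInput` helper mentioned above is not shown in this article; a minimal sketch of what such a tolerant parser could look like (the fence-stripping regexes and empty-object fallback are one possible approach):

```typescript
// Tolerant Action Input parsing: strip markdown code fences the model may
// wrap around the JSON, then fall back to an empty object on parse failure.
function safeParseInput(raw: string): Record<string, unknown> {
  const cleaned = raw
    .replace(/^```(?:json)?\s*/i, "") // leading ``` or ```json fence
    .replace(/\s*```$/, "")           // trailing fence
    .trim();
  try {
    return JSON.parse(cleaned);
  } catch {
    // Let the tool report a missing-argument error instead of crashing the loop.
    return {};
  }
}
```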

Checklist Before Deploying a ReAct Agent

System prompt lists all tool names and descriptions.

Recursion/step limit is configured.

Observation length is capped.

Action Input parsing includes fallback handling.

Tool descriptions are precise.

Full ReAct trace is logged.

Agent behavior is tested when a tool fails.

Conclusion

ReAct augments plain tool-calling by interleaving Thought and Observation, making agents explainable and correctable. The hand-written loop clarifies the underlying mechanics, while createReactAgent provides a production-ready, state-machine-driven implementation with streaming support and robust error handling.

Republication Notice

This article has been distilled and summarized from source material, then republished for learning and reference. If you believe it infringes your rights, please contact admin@besthub.dev and we will review it promptly.

JavaScript · LLM · Prompt Engineering · ReAct · LangChain · Tool Calling
Written by James' Growth Diary

I am James, focusing on AI Agent learning and growth. I continuously update two series: “AI Agent Mastery Path,” which systematically outlines core theories and practices of agents, and “Claude Code Design Philosophy,” which deeply analyzes the design thinking behind top AI tools. Helping you build a solid foundation in the AI era.
