Master ReAct Agents: From Observation to Action with Real Code Examples
This article introduces the ReAct agent paradigm—combining reasoning and acting—explains its observation‑think‑act loop, showcases a step‑by‑step weather‑and‑clothing example, outlines essential components, provides pseudo‑code for the execution flow, and links to the Lynxe Func‑Agent framework on GitHub.
What Is a ReAct Agent?
ReAct stands for Reasoning + Acting. A ReAct-based agent repeatedly observes the current environment, thinks (LLM reasoning) about the next step, and acts by invoking a tool. The cycle continues until the user goal is satisfied.
Observation‑Think‑Act Loop
Observe: Capture the current inputs (user query, API responses, system state, etc.).
Think: The LLM reasons over the accumulated history and the new observation to decide the next action.
Act: Execute a concrete tool (e.g., call an API, write a file).
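In code, this loop is simply a while-loop alternating the three phases. A minimal Python sketch, where `think` and `act` are hypothetical placeholders for the LLM call and the tool executor (not from any specific framework):

```python
def react_loop(goal, think, act, max_rounds=10):
    """Run observe-think-act until the task is done or rounds run out.

    `think(history, observation)` and `act(action)` are caller-supplied
    stand-ins for the LLM reasoning step and the tool executor.
    """
    history = []
    observation = goal  # round 1: the only observation is the user goal
    for _ in range(max_rounds):
        action = think(history, observation)   # Think: LLM picks the next step
        if action is None:                     # LLM signals the goal is met
            break
        observation = act(action)              # Act: run the chosen tool
        history.append((action, observation))  # Observe: record the result
    return history

# Toy run: a "think" that requests one tool call, then stops.
trace = react_loop(
    "weather in Beijing",
    think=lambda h, obs: None if h else ("weatherQuery", obs),
    act=lambda action: "25 °C, sunny",
)
```

The cap on rounds matters in practice: without it, a confused model can loop forever on the same tool call.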
Concrete Example: Weather Query and Clothing Advice
User request: “What’s the weather in Beijing today and what should I wear?” The agent follows ten numbered steps:
Step 1 – Observe: Record the user query; history is empty.
Step 2 – Think: Determine that a weather lookup is required.
Step 3 – Act: Call a weatherQuery tool with the argument "Beijing today".
Step 4 – Observe: Receive weather data – temperature 25 °C, sunny, humidity 60 %, wind 3 m/s.
Step 5 – Think: Decide to obtain clothing advice based on the weather.
Step 6 – Act: Call a clothingAdvice knowledge-base tool with the query "25 °C sunny outfit".
Step 7 – Observe: Get a recommendation – short-sleeve T-shirt or thin long-sleeve, optional light jacket, hat, sunscreen.
Step 8 – Think: Plan to persist the combined result.
Step 9 – Act: Use a writeToFile tool to save the text to weather_suggestion.md.
Step 10 – Observe: Receive confirmation that the file was written.
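The ten steps above form an alternating trace of actions and observations. Represented as plain data (the dict layout and payload strings here are illustrative, not a prescribed format):

```python
# The weather example as an action/observation trace.
# Tool names mirror the steps above; payload formats are illustrative.
trace = [
    {"act": ("weatherQuery", "Beijing today"),
     "obs": "25 °C, sunny, humidity 60 %, wind 3 m/s"},
    {"act": ("clothingAdvice", "25 °C sunny outfit"),
     "obs": "short-sleeve T-shirt or thin long-sleeve, light jacket, "
            "hat, sunscreen"},
    {"act": ("writeToFile", "weather_suggestion.md"),
     "obs": "file written"},
]

# Each Think step sees everything before it: by round 3 the LLM's
# context already contains both earlier observations.
context_for_round_3 = [step["obs"] for step in trace[:2]]
```

This is why the clothing query in Step 6 can mention "25 °C sunny": the Step 4 observation is already in the history the LLM reasons over.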
Key Elements of a ReAct Implementation
History: Log of all previous observations, thoughts, and actions, enabling the LLM to reference past decisions.
Current Environment Information: Fresh inputs for the current round (user query, tool outputs, etc.).
LLM Thinking: The reasoning engine that maps history + environment to the next tool call.
Tool/Action (toolcall): Concrete operations such as API calls, database queries, or file writes.
Observation (toolcall result): Output of the executed tool, fed back into the loop.
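These elements map naturally onto a small record type. A sketch of the per-round history entry and agent state (field names are assumptions for illustration, not taken from any framework):

```python
from dataclasses import dataclass, field

@dataclass
class HistoryEntry:
    thought: str      # LLM reasoning summary for this round
    tool: str         # tool name chosen (the toolcall)
    params: dict      # arguments passed to the tool
    observation: str  # toolcall result, fed back into the loop

@dataclass
class AgentState:
    goal: str                                   # the user goal
    history: list = field(default_factory=list) # full observe/think/act log

    def record(self, thought, tool, params, observation):
        self.history.append(HistoryEntry(thought, tool, params, observation))

state = AgentState(goal="weather in Beijing + clothing advice")
state.record("need weather first", "weatherQuery",
             {"query": "Beijing today"}, "25 °C, sunny")
```

Keeping thought, toolcall, and observation in one record per round makes it trivial to serialize the history back into the next prompt.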
Pseudo‑code of the ReAct Loop
function runRound(userGoal, history):
    // 1. Get current environment info
    envInfo = getCurrentEnvironmentInfo()
    // 2. Build the prompt from a template
    promptTemplate = "Known:
        History: ${history}
        Env: ${envInfo}
        Goal: \"${userGoal}\"
        Decide the next step. You must use at least one tool."
    fullPrompt = replacePlaceholders(promptTemplate, {history, envInfo, userGoal})
    // 3. Call the LLM (reasoning hidden; returns a toolcall)
    toolCallResult = callLLM(fullPrompt, history)
    // 4. Parse tool name and parameters
    toolName = parseToolName(toolCallResult)
    toolParams = parseToolParams(toolCallResult)
    // 5. Execute the tool
    observation = executeTool(toolName, toolParams)
    // 6. Update history
    newHistory = appendToHistory(history, {action: toolCallResult, observation})
    return {observation, newHistory}

function runReAct(userGoal):
    history = ""
    round = 1
    maxRounds = 10
    while round <= maxRounds:
        result = runRound(userGoal, history)
        history = result.newHistory
        if isTaskDone(result.observation):
            break
        round += 1
    return history
Reference Implementation
The concepts and code are available in the open‑source Lynxe Func‑Agent framework, which provides a production‑grade ReAct example:
https://github.com/spring-ai-alibaba/Lynxe
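Before adopting a full framework, the pseudo-code above can be exercised end to end in plain Python with stubbed tools and a scripted "LLM". Everything below is a toy stand-in for illustration, not the Lynxe API:

```python
# Toy end-to-end run of the ReAct loop from the pseudo-code section.
# The "LLM" is a scripted decision table; the tools are stubs.
TOOLS = {
    "weatherQuery": lambda arg: "25 °C, sunny",
    "clothingAdvice": lambda arg: "T-shirt, light jacket, sunscreen",
    "writeToFile": lambda arg: f"wrote {arg}",
}

def fake_llm(history):
    """Stand-in for callLLM: choose the next toolcall from progress so far."""
    script = [
        ("weatherQuery", "Beijing today"),
        ("clothingAdvice", "25 °C sunny outfit"),
        ("writeToFile", "weather_suggestion.md"),
    ]
    # Returning None plays the role of isTaskDone in the pseudo-code.
    return script[len(history)] if len(history) < len(script) else None

def run_react(goal, max_rounds=10):
    history = []
    for _ in range(max_rounds):
        tool_call = fake_llm(history)       # Think
        if tool_call is None:               # task done
            break
        name, params = tool_call
        observation = TOOLS[name](params)   # Act (executeTool)
        history.append({"action": tool_call, "observation": observation})
    return history

history = run_react("Beijing weather + clothing advice")
```

Swapping `fake_llm` for a real model call and `TOOLS` for real integrations is essentially what a production framework like Lynxe packages up, along with prompt templating, toolcall parsing, and error handling.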
Alibaba Cloud Developer
Alibaba's official tech channel, featuring all of its technology innovations.
