Boost Agent Efficiency with Planning Architecture: A Hands‑On Comparison to ReAct

This article explains the planning architecture for AI agents, contrasts it with the ReAct approach, provides step‑by‑step Python code using LangChain and LangGraph, evaluates both methods on task completion and process efficiency, and discusses when each architecture is most suitable.


Why AI needs "plan before act"

Complex multi‑step queries—such as comparing the populations of European capitals with the United States—often cause ReAct agents to wander, repeatedly thinking and acting, which wastes time and compute.

What is planning architecture?

Think of a long road trip: ReAct is like a driver who checks the GPS at every intersection, while planning architecture writes the entire route (A → B → C → destination) before leaving.

How it works

Receive goal : The agent gets a task like “calculate the total population of Paris, Berlin and Rome and compare it with the US.”

Planning phase : A dedicated planner generates an ordered list of tool calls.

web_search('Paris population')
web_search('Berlin population')
web_search('Rome population')
web_search('US population')
calculate and compare

Execution phase : An executor runs the list sequentially, invoking the web‑search tool for each query.

Synthesis phase : A synthesizer aggregates all results and produces the final answer.
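The four phases above can be sketched, independently of any framework, as a plain Python loop. Everything here is illustrative: `fake_search` is a stand-in for a real web-search tool, and the figures are rounded approximations, not live data.

```python
# Minimal sketch of the plan -> execute -> synthesize flow.
def fake_search(query: str) -> str:
    # Stand-in for a real web-search tool (illustrative figures).
    data = {
        "Paris population": "2.1 million",
        "Berlin population": "3.6 million",
        "Rome population": "2.8 million",
        "US population": "331 million",
    }
    return data.get(query, "unknown")

def run_planning_agent(goal: str, plan: list[str]) -> str:
    # Execution phase: run every planned step in order, no re-thinking.
    observations = [f"{step}: {fake_search(step)}" for step in plan]
    # Synthesis phase: aggregate all observations into one answer.
    return f"Answer to '{goal}' based on: " + "; ".join(observations)

plan = ["Paris population", "Berlin population", "Rome population", "US population"]
print(run_planning_agent("compare EU capitals with the US", plan))
```

The key property is visible in the code: the plan is fixed before the loop starts, and the executor never consults the LLM between steps.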

When to use

Multi‑step workflows : e.g., generate a weekly report that requires data collection, processing, and formatting.

Project‑management assistants : break a feature release into design review, development, testing, and deployment.

Educational tutoring : create a stepwise learning plan for calculus (limits → derivatives → integrals).

Advantages and limitations

Advantages

Process transparency – each step is visible for debugging.

Execution efficiency – avoids the repeated think‑act loops of ReAct for clear‑path tasks.

Limitations

Reduced adaptability – if a data source fails after the plan is fixed, the agent cannot re‑plan on the fly, whereas ReAct’s incremental reasoning may handle such changes better.
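A common mitigation, sketched here with hypothetical helper names that are not from the article, is to wrap execution so a failed step triggers a fresh planning call instead of blind continuation:

```python
def execute_with_replan(plan, run_step, replan):
    """Run plan steps in order; on failure, ask the planner for a new tail.

    `run_step` and `replan` are hypothetical callables supplied by the caller.
    """
    results = []
    while plan:
        step, plan = plan[0], plan[1:]
        try:
            results.append(run_step(step))
        except RuntimeError:
            # Re-planning hook: replace the remaining steps.
            plan = replan(step, results)
    return results

# Example: step "b" fails and is replaced by "b2" in a regenerated plan.
def run_step(step):
    if step == "b":
        raise RuntimeError("source unavailable")
    return f"done:{step}"

results = execute_with_replan(["a", "b", "c"], run_step,
                              lambda failed, done: ["b2", "c"])
print(results)  # ['done:a', 'done:b2', 'done:c']
```

This keeps the planning architecture's up-front structure while recovering some of ReAct's adaptability, at the cost of extra planner calls on failure.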

Hands‑on: Build a planning‑type agent

Stage 0 – Setup

Install dependencies and configure API keys. The example uses Nebius as the LLM provider, Tavily for web search, and LangSmith for tracing.

# Install dependencies
# !pip install -q -U langchain-nebius langchain langgraph rich python-dotenv langchain-tavily

import os, re
from typing import List, Annotated, TypedDict, Optional
from dotenv import load_dotenv
from langchain_nebius import ChatNebius
from langchain_core.messages import BaseMessage, ToolMessage, SystemMessage
from pydantic import BaseModel, Field
from langchain_core.tools import tool
from langchain_tavily import TavilySearch
from langgraph.graph import StateGraph, END
from langgraph.graph.message import AnyMessage, add_messages
from langgraph.prebuilt import ToolNode, tools_condition
from rich.console import Console
from rich.markdown import Markdown

load_dotenv()
os.environ["LANGCHAIN_TRACING_V2"] = "true"
os.environ["LANGCHAIN_PROJECT"] = "Agentic Architecture - Planning (Nebius)"

# Check keys
for key in ["NEBIUS_API_KEY", "LANGCHAIN_API_KEY", "TAVILY_API_KEY"]:
    if not os.environ.get(key):
        print(f"⚠️ Missing {key}. Create a .env file and add it.")
    else:
        print(f"✅ {key} loaded")

console = Console()
print("
🚀 Environment ready!")

Stage 1 – Baseline ReAct agent

Define a ReAct agent that follows the classic "think → act → observe → think" loop.

class AgentState(TypedDict):
    messages: Annotated[list[AnyMessage], add_messages]

# Initialize Tavily search tool
tavily_search_tool = TavilySearch(max_results=2)

@tool
def web_search(query: str) -> str:
    """Execute a web search via Tavily and return the result as a string."""
    console.print(f"🔧 Searching: '{query}'...")
    results = tavily_search_tool.invoke(query)
    return results

llm = ChatNebius(model="meta-llama/Meta-Llama-3.1-8B-Instruct", temperature=0)
llm_with_tools = llm.bind_tools([web_search])

def react_agent_node(state: AgentState):
    console.print("🤔 ReAct agent: thinking next step...")
    messages = [SystemMessage(content="You are a research assistant. Call only one tool at a time. After receiving the result, decide the next step.")] + state["messages"]
    response = llm_with_tools.invoke(messages)
    return {"messages": [response]}

tool_node = ToolNode([web_search])
react_graph = StateGraph(AgentState)
react_graph.add_node("agent", react_agent_node)
react_graph.add_node("tools", tool_node)
react_graph.set_entry_point("agent")
react_graph.add_conditional_edges("agent", tools_condition)
react_graph.add_edge("tools", "agent")
react_agent_app = react_graph.compile()
print("✅ ReAct agent compiled.")

Running the same multi‑step query with this agent reveals a roundabout execution: it searches Paris, then Berlin, then Rome, then the US, performing a fresh LLM reasoning step after every tool call.
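Under the simplifying assumption that ReAct spends one LLM reasoning call before each tool call plus one final answer call, while the planning agent needs only a planner call and a synthesizer call, the call counts for N independent searches diverge quickly (this is back-of-the-envelope accounting, not a measurement from the article):

```python
def react_llm_calls(n_searches: int) -> int:
    # One reasoning call before each tool call, plus one final answer call.
    return n_searches + 1

def planning_llm_calls(n_searches: int) -> int:
    # One planner call up front, one synthesizer call at the end;
    # plan execution itself needs no LLM reasoning.
    return 2

for n in (4, 10):
    print(f"{n} searches: ReAct={react_llm_calls(n)}, Planning={planning_llm_calls(n)}")
```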

Stage 2 – Planning‑type agent

Define three modules: planner, executor, and synthesizer.

class Plan(BaseModel):
    """List of tool calls in order."""
    steps: List[str] = Field(description="Ordered tool calls to answer the user query.")

class PlanningState(TypedDict):
    user_request: str
    plan: Optional[List[str]]
    intermediate_steps: List[ToolMessage]
    final_answer: Optional[str]

def planner_node(state: PlanningState):
    console.print("📝 Planner: decomposing task...")
    planner_llm = llm.with_structured_output(Plan)
    prompt = f"You are a professional planner. Break the request into a list of single web_search calls.
Request: {state['user_request']}"
    plan_result = planner_llm.invoke(prompt)
    console.print(f"✅ Planner generated plan: {plan_result.steps}")
    return {"plan": plan_result.steps}

def executor_node(state: PlanningState):
    console.print("⚙️ Executor: running next step...")
    next_step = state["plan"][0]
    match = re.search(r"(\w+)\((?:\"|\')(.*?)(?:\"|\')\)", next_step)
    if not match:
        tool_name, query = "web_search", next_step
    else:
        tool_name, query = match.groups()
    console.print(f"🔧 Calling {tool_name} with query '{query}'...")
    result = tavily_search_tool.invoke(query)
    tool_msg = ToolMessage(content=str(result), name=tool_name, tool_call_id=f"manual-{hash(query)}")
    return {"plan": state["plan"][1:], "intermediate_steps": state["intermediate_steps"] + [tool_msg]}

def synthesizer_node(state: PlanningState):
    console.print("📄 Synthesizer: generating final answer...")
    context = "
".join([f"Tool {msg.name} returned: {msg.content}" for msg in state["intermediate_steps"]])
    prompt = f"You are a synthesizer. Using the request and collected data, produce a comprehensive answer.
Request: {state['user_request']}
Data:
{context}"
    final_answer = llm.invoke(prompt).content
    return {"final_answer": final_answer}

planning_graph = StateGraph(PlanningState)
planning_graph.add_node("plan", planner_node)
planning_graph.add_node("execute", executor_node)
planning_graph.add_node("synthesize", synthesizer_node)
planning_graph.set_entry_point("plan")

def routing(state: PlanningState):
    return "synthesize" if not state["plan"] else "execute"

planning_graph.add_conditional_edges("plan", routing, {"execute": "execute", "synthesize": "synthesize"})
planning_graph.add_conditional_edges("execute", routing, {"execute": "execute", "synthesize": "synthesize"})
planning_graph.add_edge("synthesize", END)
planning_agent_app = planning_graph.compile()
print("✅ Planning agent compiled.")

Stage 3 – Direct competition

Both agents receive the same query about European capitals and the US. The planning agent first produces a full plan, then executes each step without extra LLM reasoning, finally synthesizing the answer.
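A plausible kickoff for this head-to-head run looks like the following; the variable names are assumed rather than taken from the source, and the actual `invoke` calls, which require the API keys from Stage 0, are shown commented out:

```python
plan_centric_query = (
    "Calculate the total population of Paris, Berlin and Rome "
    "and compare it with the US population."
)

# Initial state matching the PlanningState schema from Stage 2.
initial_planning_state = {
    "user_request": plan_centric_query,
    "plan": None,
    "intermediate_steps": [],
    "final_answer": None,
}

# With keys configured, each compiled graph is invoked roughly like:
# final_react_output = react_agent_app.invoke({"messages": [("user", plan_centric_query)]})
# final_planning_output = planning_agent_app.invoke(initial_planning_state)
print(sorted(initial_planning_state))
```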

Stage 4 – Quantitative evaluation

A separate LLM acts as a judge, scoring each agent on task completion (1‑10) and process efficiency (1‑10), and providing a brief justification.

class ProcessEvaluation(BaseModel):
    task_completion_score: int = Field(description="Score 1‑10 for successful task completion.")
    process_efficiency_score: int = Field(description="Score 1‑10 for logical, direct process.")
    justification: str = Field(description="Brief explanation of the scores.")

judge_llm = llm.with_structured_output(ProcessEvaluation)

def evaluate_agent_process(query: str, final_state: dict):
    if 'messages' in final_state:
        trace = "
".join([f"{m.type}: {str(m.content)[:200]}" for m in final_state['messages']])
    else:
        trace = f"Plan: {final_state.get('plan', [])}
Steps:
" + "
".join([f" - {msg.name}: {str(msg.content)[:200]}" for msg in final_state.get('intermediate_steps', [])])
    prompt = f"You are an expert evaluator. Score the agent on a 1‑10 scale for task completion and process efficiency.
User task: {query}
Trace:
{trace}"
    return judge_llm.invoke(prompt)

# `plan_centric_query`, `final_react_output` and `final_planning_output`
# are produced by the Stage 3 head-to-head run.
react_eval = evaluate_agent_process(plan_centric_query, final_react_output)
planning_eval = evaluate_agent_process(plan_centric_query, final_planning_output)
console.print(react_eval.model_dump())
console.print(planning_eval.model_dump())

The results show that while both agents can finish the task, the planning agent receives higher efficiency scores because it follows a direct, transparent path without unnecessary LLM loops.

Conclusion

Implementing a planning architecture transforms an AI agent from a step‑by‑step explorer into a strategist that defines the whole route first, yielding better transparency, robustness, and speed for deterministic, multi‑step problems. For highly exploratory or volatile environments, the incremental ReAct style may still be preferable.

Republication Notice

This article has been distilled and summarized from source material, then republished for learning and reference. If you believe it infringes your rights, please contact admin@besthub.dev and we will review it promptly.

Tags: Python · AI agents · ReAct · LangChain · LangGraph · Planning architecture
Written by Data STUDIO