Building a Dynamic Agent Workflow with LangGraph: A Step‑by‑Step Guide
This tutorial walks through building a full‑featured LLM agent workflow with LangGraph: defining a goal, decomposing it into tasks, executing each task, updating state, re‑planning on failure, and reporting back to the user. Along the way it contrasts the ReAct and Reflexion approaches and provides complete Python code examples.
Construct a complete Agent workflow with LangGraph by defining a goal, decomposing it into ordered tasks, executing each task, handling failures, and returning a final answer.
Workflow components
Define the workflow objective (e.g., plan a travel itinerary).
Break the objective into a task list (e.g., book hotels, recommend meals, schedule sightseeing).
Execute each task individually, supporting interruption and retry logic.
Update the workflow state with success or failure of each task.
When a task fails, re‑think or re‑plan (e.g., try an alternative booking channel).
Provide feedback to the user and optionally ask whether to try another option.
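The retry‑and‑fallback behavior described above can be sketched framework‑free. The helper below is a minimal illustration; `book_primary`, `book_fallback`, and the task names are hypothetical, not part of LangGraph:

```python
from typing import Callable, List, Optional, Tuple

def run_with_fallback(task: str,
                      channels: List[Callable[[str], Optional[str]]],
                      retries_per_channel: int = 2) -> Tuple[bool, str]:
    """Try each channel in order, retrying a few times before
    falling back to the next one; report success or failure."""
    for channel in channels:
        for _ in range(retries_per_channel):
            result = channel(task)
            if result is not None:
                return True, result
    return False, f"All channels failed for task: {task}"

# Hypothetical channels: the primary always fails, the fallback succeeds.
def book_primary(task: str) -> Optional[str]:
    return None  # simulate a failed booking

def book_fallback(task: str) -> Optional[str]:
    return f"booked via fallback: {task}"

ok, message = run_with_fallback("book hotel", [book_primary, book_fallback])
```

The user‑feedback step then only needs the `(ok, message)` pair to decide whether to ask about trying another option.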
ReAct vs. Reflexion
ReAct interleaves reasoning traces with tool‑using actions at every step, while Reflexion adds a self‑reflection loop that stores lessons from failed attempts for later retries. For the plan‑and‑execute pattern used here, two practical points follow:
Only the planning phase needs a powerful LLM; subsequent tasks can run on smaller models.
Non‑planning tasks may even be handled by lightweight models or plain rule‑based logic.
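One way to realize that split is a small dispatcher that answers trivial steps with rules and reserves the LLM for everything else. This is an illustrative sketch; the `llm_answer` stub and the rule table are assumptions, not LangGraph APIs:

```python
import re
from typing import Callable, List, Tuple

# Rule table: regex -> canned handler. Only unmatched steps go to an LLM.
RULES: List[Tuple[str, Callable[[str], str]]] = [
    (r"^\s*format\b", lambda step: step.strip().upper()),
    (r"^\s*count words\b", lambda step: str(len(step.split()))),
]

def llm_answer(step: str) -> str:
    # Stub standing in for a call to a small model.
    return f"[llm] {step}"

def dispatch(step: str) -> str:
    for pattern, handler in RULES:
        if re.search(pattern, step, re.IGNORECASE):
            return handler(step)
    return llm_answer(step)
```

For example, `dispatch("format this line")` is answered by a rule, while `dispatch("plan a trip")` falls through to the model stub.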
Environment preparation
# Install LangGraph and related packages (asyncio ships with Python and needs no install)
pip install -U langgraph langchain_community langchain langchain_ollama tavily-python
# Set LangSmith tracing environment variables
import os
os.environ["LANGSMITH_TRACING"] = "true"
os.environ["LANGSMITH_ENDPOINT"] = "https://api.smith.langchain.com"
os.environ["LANGSMITH_API_KEY"] = "<LANG_SMITH_KEY>"
os.environ["LANGSMITH_PROJECT"] = "mylangserver"
Design overview
Planner node – generates a step‑by‑step plan from the goal.
Execute node – runs each step and records the outcome.
Re‑plan node – decides whether to continue, re‑plan, or finish.
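The loop these three nodes form can be sketched as plain Python before introducing any framework. The stub functions below are placeholders for the LLM‑backed nodes built later in this tutorial:

```python
from typing import List

def plan(goal: str) -> List[str]:
    return [f"research {goal}", f"answer {goal}"]  # stub planner

def execute(step: str) -> str:
    return f"done: {step}"  # stub executor

def replan(steps: List[str], last_result: str) -> List[str]:
    return steps[1:]  # stub: drop the completed step, keep the rest

def run(goal: str) -> str:
    steps = plan(goal)
    result = ""
    while steps:               # loop until the re-planner empties the plan
        result = execute(steps[0])
        steps = replan(steps, result)
    return result              # the last step's output is the final answer
```

LangGraph's contribution is making this loop stateful, interruptible, and observable; the control flow itself stays this simple.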
Step 1: Planner node
import operator
from typing import Annotated, List, Tuple, TypedDict
from pydantic import BaseModel, Field

class Plan(BaseModel):
    """Planned tasks"""
    steps: List[str] = Field(description="Ordered list of steps to execute")

class PlanExcuteState(TypedDict):
    input: str                                        # User query
    plan: List[str]                                   # Decomposed steps
    past_steps: Annotated[List[Tuple], operator.add]  # Accumulated (task, result) pairs
    response: str                                     # Final answer to the user
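The `operator.add` annotation tells LangGraph to merge `past_steps` updates by list concatenation rather than replacement; the reducer itself is ordinary Python and can be seen in isolation:

```python
import operator

# Simulate two successive node updates to the past_steps channel.
accumulated = []
accumulated = operator.add(accumulated, [("step 1", "booked hotel")])
accumulated = operator.add(accumulated, [("step 2", "picked restaurant")])
```

Because the graph applies this reducer for us, a node should return only its *new* pairs, never the already‑accumulated list.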
from langchain_core.prompts import ChatPromptTemplate
from langchain_ollama import ChatOllama
plan_prompt = ChatPromptTemplate([
    ("system", """For a given goal, produce a concise step‑by‑step plan. Each step must be necessary and the final step should yield the answer. Do not add extra steps."""),
    ("placeholder", "{messages}"),
])
plan_langchain = plan_prompt | ChatOllama(
    base_url="http://localhost:11434",
    model="qwen3:32b",
    temperature=0,
).with_structured_output(Plan)
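Before wiring this chain into the graph, the `Plan` schema can be sanity‑checked on its own with pydantic, no model call needed (the model class is re‑stated here so the snippet stands alone):

```python
from typing import List
from pydantic import BaseModel, Field

class Plan(BaseModel):
    """Planned tasks"""
    steps: List[str] = Field(description="Ordered list of steps to execute")

# Validate the kind of dict a structured-output call would return.
raw = {"steps": ["find the record holder", "report the answer"]}
plan = Plan.model_validate(raw)
```

This is the same validation `with_structured_output(Plan)` performs on the model's JSON before handing you a typed object.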
async def plan_step(state: PlanExcuteState):
    plan = await plan_langchain.ainvoke({"messages": [("user", state["input"])]})
    return {"plan": plan.steps}

Step 2: Execute node
from langgraph.prebuilt import create_react_agent
from langchain_ollama import ChatOllama
from langchain_core.prompts import ChatPromptTemplate
llm = ChatOllama(base_url="http://localhost:11434", model="qwen3:8b", temperature=0)
agent_prompt = ChatPromptTemplate([
    ("system", "You are a helpful assistant that must follow the given plan step."),
    ("placeholder", "{messages}"),
])
# `tools` is a list of tool definitions the agent may call; an empty list
# works for a pure-text agent, or supply e.g. a Tavily search tool here.
tools = []
agent_executor = create_react_agent(llm, tools, prompt=agent_prompt)
async def execute_step(state: PlanExcuteState):
    steps = state["plan"]
    step_str = "\n".join(f"{i + 1}. {step}" for i, step in enumerate(steps))
    task = steps[0]
    task_format = f"For the plan:\n{step_str}\n\nYour task is to execute step 1: {task}."
    agent_response = await agent_executor.ainvoke({"messages": [("user", task_format)]})
    content = agent_response["messages"][-1].content
    # past_steps uses an operator.add reducer, so return only the new pair;
    # returning state["past_steps"] + [...] would duplicate earlier entries.
    return {"past_steps": [(task, content)]}

Step 3: Re‑plan node
from typing import Union
from pydantic import BaseModel, Field
class Response(BaseModel):
    """Result to return to the user"""
    response: str

class Action(BaseModel):
    """Behavior to perform"""
    action: Union[Response, Plan] = Field(
        description="Use Response to answer the user, or Plan to continue planning"
    )
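The `Union` field lets the structured‑output call return either branch. Re‑stated standalone, pydantic accepts both shapes, which is exactly what the re‑plan step relies on:

```python
from typing import List, Union
from pydantic import BaseModel, Field

class Plan(BaseModel):
    steps: List[str] = Field(description="Remaining steps")

class Response(BaseModel):
    response: str

class Action(BaseModel):
    action: Union[Response, Plan]

# Either branch validates; the consumer dispatches on the runtime type.
finish = Action(action=Response(response="The goal is already answered."))
keep_going = Action(action=Plan(steps=["check one more source"]))
```

The re‑plan node only has to `isinstance`‑check `action` to know whether to answer the user or keep executing.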
# Define the re-plan prompt (the original omitted it; any template exposing
# the input, plan, and past_steps state keys works).
replan_prompt = ChatPromptTemplate.from_template(
    "For the goal: {input}\nOriginal plan: {plan}\nCompleted steps and results: {past_steps}\n"
    "Keep only the steps still needed; if the goal is already answered, respond to the user instead."
)
replan_langchain = replan_prompt | ChatOllama(
    base_url="http://localhost:11434",
    model="qwen3:32b",
    temperature=0,
).with_structured_output(Action)
async def replan_step(state: PlanExcuteState):
    output = await replan_langchain.ainvoke(state)
    if isinstance(output.action, Response):
        return {"response": output.action.response}
    if len(output.action.steps) == 0:
        return {"plan": state["plan"]}  # empty re-plan: keep the current plan
    return {"plan": output.action.steps}

Step 4: Assemble the LangGraph workflow
from langgraph.graph import START, StateGraph
workflow = StateGraph(PlanExcuteState)
workflow.add_node("planner", plan_step)
workflow.add_node("execute", execute_step)
workflow.add_node("replan", replan_step)
workflow.add_edge(START, "planner")
workflow.add_edge("planner", "execute")
workflow.add_edge("execute", "replan")
from langgraph.graph import END

def is_end(state: PlanExcuteState):
    if state.get("response"):
        return END          # a final answer is ready
    if not state.get("plan"):
        return "replan"     # empty plan: plan again
    return "execute"
workflow.add_conditional_edges("replan", is_end)
app = workflow.compile()
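The routing function can be exercised on hand‑built states before running the full graph. Below it is re‑stated self‑contained, with a plain string standing in for langgraph's `END` constant:

```python
END = "__end__"  # stand-in for langgraph.graph.END

def is_end(state: dict) -> str:
    if state.get("response"):
        return END
    if not state.get("plan"):
        return "replan"
    return "execute"

# Three representative states and where each one routes.
routes = [
    is_end({"response": "final answer ready"}),  # finished -> END
    is_end({"plan": []}),                        # empty plan -> re-plan
    is_end({"plan": ["verify the record"]}),     # work left -> execute
]
```

Checking the branch order matters here: a non‑empty `response` must win even when the plan list is also empty.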
config = {"recursion_limit": 15}
inputs = {"input": "Who is the current marathon world record holder?"}
import asyncio

async def main():
    async for event in app.astream(inputs, config=config):
        for key, value in event.items():
            if key != "__end__":
                print(f"{key}: {value}")
            else:
                print(value)

asyncio.run(main())

The script prints each node's intermediate state and finally outputs the answer to the user.
Key takeaways
Agents are single‑LLM primitives; LangGraph combines multiple agents and nodes to orchestrate complex, multi‑step reasoning.
Workflow creation steps: install dependencies, enable LangSmith tracing, define typed data models, implement planner, executor, and re‑planner nodes, wire them in a StateGraph, and run with a recursion limit.
Core constructs demonstrated: node definitions, edge connections, conditional termination, loop‑limit control, and asynchronous streaming of results.
Typical use‑cases include multi‑step question answering, dynamic planning, and error‑aware task execution.
References
LangGraph official documentation – https://langchain-ai.github.io/langgraph/
ReAct paper – "ReAct: Synergizing Reasoning and Acting in Language Models"
Reflexion paper – "Reflexion: Language Agents with Verbal Reinforcement Learning"
DeepSeek‑R1 model used for content generation – https://chat.deepseek.com/