Building and Deploying a Multi‑Agent DeepResearch App with LangGraph

This article walks through constructing a LangGraph graph that encapsulates three agents—task planning, web search, and report generation—into a DeepResearch application, then shows how to package and deploy the backend and frontend so users can interact with the system via a web UI.


LangGraph Multi‑Agent Graph Encapsulation

Three agents—task planner, web searcher, and report writer—are wrapped into a LangGraph StateGraph with three nodes, one per agent, connected from the built-in START entry point through to END. The execution order is planner → search → writer. All messages are passed through the built-in MessagesState type.

import json
from typing import Annotated, TypedDict

from langchain_core.messages import AIMessage, HumanMessage, ToolMessage
from langgraph.graph import StateGraph, MessagesState, START, END
from langgraph.graph.message import add_messages
from pydantic import ValidationError

MessagesState is provided by LangGraph; its definition is essentially:

class MessagesState(TypedDict):
    messages: Annotated[list, add_messages]
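The `add_messages` annotation is a reducer: when a node returns `{'messages': [...]}`, LangGraph appends the new messages to the existing list instead of overwriting it. A minimal pure-Python sketch of that append semantics (the real `add_messages` also deduplicates by message ID):

```python
# Simplified sketch of the reducer semantics behind MessagesState.
# Not the actual langgraph implementation, which also merges by message ID.

def add_messages_sketch(existing: list, new: list) -> list:
    """Combine the channel's current value with a node's return value."""
    return existing + new

# Each node returns {'messages': [...]}; LangGraph feeds the old and new
# lists through the reducer rather than replacing the state outright.
state = {"messages": ["user: hello"]}
update = {"messages": ["ai: hi there"]}
state["messages"] = add_messages_sketch(state["messages"], update["messages"])
```

This is why each node below can return only the messages it produced: the reducer accumulates the full conversation across the graph run.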

Planner node extracts the latest user query, invokes planner_chain to obtain a WebSearchPlan, and handles the case where the result is a raw dictionary.

def planner_node(state: MessagesState):
    # The latest message holds the user's research question.
    user_query = state['messages'][-1].content
    raw = planner_chain.invoke({'query': user_query})
    try:
        plan = WebSearchPlan.model_validate(raw)
    except ValidationError:
        # Fallback: the chain may return a plain dict of (query, reason) pairs.
        if isinstance(raw, dict) and isinstance(raw.get('searches'), list):
            plan = WebSearchPlan(searches=[WebSearchItem(query=q, reason=r) for q, r in raw['searches']])
        else:
            raise
    # Serialize the plan into the message channel so downstream nodes can read it.
    return {'messages': [AIMessage(content=plan.model_dump_json())]}
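The article uses WebSearchPlan and WebSearchItem without showing them. A plausible sketch, assuming these are Pydantic v2 models (the field names `query`, `reason`, and `searches` come from the node code above; everything else is an assumption):

```python
from pydantic import BaseModel

class WebSearchItem(BaseModel):
    query: str   # the search-engine query to run
    reason: str  # why this query helps answer the user's question

class WebSearchPlan(BaseModel):
    searches: list[WebSearchItem]

# Round-trip through JSON, as the planner and search nodes do between them.
plan = WebSearchPlan(searches=[WebSearchItem(query="AI ethics frameworks",
                                             reason="survey existing guidelines")])
restored = WebSearchPlan.model_validate_json(plan.model_dump_json())
```

Because the plan travels through the messages channel as a JSON string, any model with these fields and Pydantic's `model_dump_json`/`model_validate_json` pair will work.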

Search node validates the plan, iterates over each WebSearchItem, calls the search agent, extracts the most recent readable message, and builds markdown‑style summaries.

def search_node(state: MessagesState):
    plan_json = state["messages"][-1].content
    plan = WebSearchPlan.model_validate_json(plan_json)
    summaries = []
    for item in plan.searches:
        run = search_agent.invoke({"messages": [HumanMessage(content=item.query)]})
        msgs = run['messages']
        # Take the most recent AI or tool message as the readable result.
        readable = next((m for m in reversed(msgs) if isinstance(m, (ToolMessage, AIMessage))), msgs[-1])
        summaries.append(f"## {item.query}\n\n{readable.content}")
    combined = "\n\n".join(summaries)
    return {'messages': [AIMessage(content=combined)]}
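The markdown layout the search node produces is easy to verify in isolation. A small sketch of the same formatting, with the agent call replaced by precomputed (query, content) pairs:

```python
def build_summaries(results: list[tuple[str, str]]) -> str:
    """Format (query, content) pairs the way search_node does:
    one '## <query>' heading per search, sections separated by blank lines."""
    sections = [f"## {query}\n\n{content}" for query, content in results]
    return "\n\n".join(sections)

combined = build_summaries([
    ("AI ethics frameworks", "Summary of major guidelines..."),
    ("AI regulation 2024", "Summary of recent legislation..."),
])
```

The writer node downstream receives this single markdown string as the last message, so one heading per search keeps the report sources traceable.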

Writer node combines the original question with the aggregated summaries and invokes writer_chain to produce a structured report.

def writer_node(state: MessagesState):
    original_query = state['messages'][0].content
    combined_summary = state['messages'][-1].content
    writer_input = (
        f"Original question: {original_query}\n\n"
        f"Search summaries:\n{combined_summary}"
    )
    report: ReportData = writer_chain.invoke({'content': writer_input})
    return {'messages': [AIMessage(content=json.dumps(report.model_dump(), ensure_ascii=False, indent=2))]}
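The final report is serialized with `ensure_ascii=False` so non-ASCII text survives intact. A minimal sketch of that serialization step, assuming a hypothetical ReportData shape (the source never defines its fields):

```python
import json
from pydantic import BaseModel

class ReportData(BaseModel):
    # Assumed fields for illustration; the article does not specify them.
    title: str
    summary: str
    sections: list[str]

report = ReportData(title="AI Ethics", summary="...", sections=["Background"])
# model_dump() yields a plain dict; ensure_ascii=False keeps non-ASCII readable.
payload = json.dumps(report.model_dump(), ensure_ascii=False, indent=2)
```

Note that Pydantic v2 exposes `model_dump()` as a method; passing the unbound attribute (e.g. `report.dict`) to `json.dumps` would fail, since a method object is not JSON-serializable.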

The graph is built, edges are added, and the graph is compiled.

# Build the graph; node names must match the names used in add_edge
builder = StateGraph(MessagesState)
builder.add_node("planner_node", planner_node)
builder.add_node("search_node", search_node)
builder.add_node("writer_node", writer_node)

builder.add_edge(START, "planner_node")
builder.add_edge("planner_node", "search_node")
builder.add_edge("search_node", "writer_node")
builder.add_edge("writer_node", END)

graph = builder.compile()

Testing with a sample query demonstrates end‑to‑end generation of a formatted research report.

initial_state = {'messages': [HumanMessage(content='Please generate a research report on AI ethics')]}
final_state = graph.invoke(initial_state)
print(final_state['messages'][-1].content)

Backend Service Deployment

Create a project folder langgraph_chatbot.

Add a requirements.txt containing the runtime dependencies:

pydantic
python-dotenv
langgraph
langchain-core
langchain-deepseek
langchain-tavily
langsmith
langchain-openai
uv

Create a .env file with the required API keys and optional LangSmith tracing flags, e.g.:

DEEPSEEK_API_KEY=
LANGSMITH_TRACING=true
LANGSMITH_API_KEY=
LANGSMITH_PROJECT=langgraph_studio_chatbot
TAVILY_API_KEY=

Place the agent code (the three node functions and graph assembly) into graph.py and ensure the graph is compiled as shown above.

Create a langgraph.json configuration file that declares the project dependencies and the entry graph:

{
  "dependencies": ["./"],
  "graphs": {"chatbot": "./graph.py:graph"},
  "env": ".env"
}

From the project root, start the development server:

langgraph dev

The command starts the service and prints three URLs: the service endpoint, the LangSmith monitoring page, and the OpenAPI documentation.

Frontend Interface Deployment

Clone the official Agent Chat UI repository:

git clone https://github.com/langchain-ai/agent-chat-ui

Enter the cloned directory and install the Node.js dependencies:

npm install

Start the UI development server:

npm run dev

The UI becomes available at http://localhost:3000.

Configure the UI to reference the graph name declared in the backend's langgraph.json (chatbot) so requests are routed to the correct graph.

In the browser UI, submit a test query such as "Please create a report on AI applications in education". The frontend forwards the request to the LangGraph service, which runs the planner, search, and writer nodes, and returns a complete research report.

Summary

The example shows how LangGraph can encapsulate three distinct agents into a single directed graph, enabling seamless message passing and end‑to‑end automation from a user question to a structured research report. The compiled graph can be packaged as a backend service and paired with a React‑based frontend UI for interactive use.

Written by Fun with Large Models

Master's graduate from Beijing Institute of Technology, published four top‑journal papers, previously worked as a developer at ByteDance and Alibaba. Currently researching large models at a major state‑owned enterprise. Committed to sharing concise, practical AI large‑model development experience, believing that AI large models will become as essential as PCs in the future. Let's start experimenting now!
