DeepResearch Unpacked: OpenDeepResearch Source Code Walkthrough and Local Deployment

This article provides a detailed analysis of LangChain's OpenDeepResearch project, comparing its Graph workflow and Multi‑Agent architectures, explaining core node implementations, execution flows, and offering step‑by‑step instructions for configuring and deploying the system locally.

OpenDeepResearch project

OpenDeepResearch (https://github.com/langchain-ai/open_deep_research) is a LangChain template that bundles multiple DeepResearch agents. It accepts a research topic, automatically generates search queries, performs web retrieval, iteratively refines content, and outputs a structured Markdown report.

Graph workflow architecture

The Graph mode implements a linear planning‑execution pipeline that divides a research task into fixed stages:

Generate a report plan (async generate_report_plan(state: ReportState, config: RunnableConfig))

Human feedback to confirm or modify the plan (def human_feedback(state: ReportState, config: RunnableConfig))

Parallel research and writing for each section via the build_section_with_web_research sub‑graph

Write static sections such as conclusions (async write_final_sections(state: SectionState, config: RunnableConfig))

Gather completed sections and compile the final report (def gather_completed_sections(state: ReportState), def compile_final_report(state: ReportState, config: RunnableConfig))
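For intuition, the staged pipeline above can be emulated as a framework-free sequence of functions. The function and state names mirror the article, but the bodies are illustrative stand-ins: the real nodes call an LLM and a web-search tool, and are wired together as LangGraph nodes rather than a plain loop.

```python
# Framework-free emulation of the Graph-mode pipeline (illustrative only;
# the real project wires these stages as LangGraph nodes).
from typing import TypedDict

class ReportState(TypedDict, total=False):
    topic: str
    plan: list          # section titles proposed by the planner
    sections: list      # drafted section texts
    final_report: str

def generate_report_plan(state):
    # Real node: an LLM drafts an outline and search queries.
    return {**state, "plan": [f"Introduction to {state['topic']}",
                              f"Conclusion on {state['topic']}"]}

def research_and_write_sections(state):
    # Real node: per-section search-summarize-reflect loops, run in parallel.
    return {**state, "sections": [f"## {title}\n(draft)" for title in state["plan"]]}

def compile_final_report(state):
    # Real node: gathers completed sections into one Markdown document.
    return {**state, "final_report": "\n\n".join(state["sections"])}

def run_pipeline(topic):
    state = {"topic": topic}
    for stage in (generate_report_plan, research_and_write_sections, compile_final_report):
        state = stage(state)
    return state["final_report"]
```

The human-feedback stage is omitted here; in the real graph it sits between planning and section research.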

Key node functions:

generate_report_plan: creates an outline, generates search queries, and structures chapters; supports re‑planning based on user feedback.

human_feedback: an interactive node that uses interrupt to pause for user input, allowing the plan to be approved or modified.

generate_queries, search_web, write_section: generate chapter‑specific queries, perform web search, and draft sections with quality checks.
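The pause-and-resume behavior of human_feedback (implemented with LangGraph's interrupt in the project) can be emulated with a plain Python generator. This is a conceptual sketch of the control flow, not the actual LangGraph API:

```python
# Emulates an interrupt-style node: execution pauses at yield until the
# caller resumes it with human feedback (approve, or supply a revised plan).
def human_feedback_node(plan):
    feedback = yield plan          # pause: surface the proposed plan to the user
    if feedback == "approve":
        return plan                # plan accepted as-is
    return feedback                # user supplied a revised plan instead

def run_with_feedback(plan, user_input):
    node = human_feedback_node(plan)
    proposed = next(node)          # the graph pauses here, showing `proposed`
    try:
        node.send(user_input)      # resume execution with the user's decision
    except StopIteration as stop:
        return stop.value          # the node's final plan
```

In the real project, the resume value arrives through LangGraph Studio's JSON feedback prompt rather than a direct send() call.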

Execution flow:

Generate outline from the topic.

Obtain human confirmation.

Iteratively perform search‑summarize‑reflect loops for each chapter.

Write static sections.

Compile all sections into the final report.

Compared with the mini version, the Graph mode adds intent‑recognition, automated task decomposition, and multi‑round search optimization to improve accuracy and depth.

Multi‑Agent architecture

The Multi‑Agent mode replaces the linear flow with a supervisor‑researcher collaboration:

Supervisor (async supervisor(state: ReportState, config: RunnableConfig)): plans the overall structure, allocates chapters to research agents, and aggregates results.

Research Agent (async research_agent(state: SectionState, config: RunnableConfig)): receives a specific chapter, generates queries, performs web search, and writes the section.

Key functions:

supervisor_tools: loads tools for planning, clarification, and chapter coordination.

supervisor_should_continue: decides whether to continue the loop or finish, based on tool calls such as FinishReport.

research_agent_tools: executes tool calls for searching and writing.

research_agent_should_continue: determines continuation based on FinishResearch.
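A minimal sketch of the should_continue pattern: inspect the latest message for a terminating tool call (FinishReport here) and route the graph accordingly. The dict-based message shape is a simplification of LangChain's tool-call format, used only for illustration:

```python
# Routing helper in the supervisor_should_continue style: the supervisor
# loop ends when the model emits a FinishReport tool call; otherwise it
# keeps planning and delegating chapters.
def supervisor_should_continue(state):
    last_message = state["messages"][-1]
    tool_calls = last_message.get("tool_calls", [])
    if any(call["name"] == "FinishReport" for call in tool_calls):
        return "end"          # report finished: exit the supervisor loop
    return "continue"         # keep looping: more tool work remains
```

research_agent_should_continue follows the same shape, keyed on FinishResearch instead.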

The workflow uses StateGraph to build a two‑level graph where the supervisor orchestrates multiple parallel research agents. The Send API dynamically creates a branch for each entry in sections_list, enabling concurrent processing.
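The Send-based fan-out can be pictured as mapping each entry of sections_list to an independent branch and gathering the results. Here the concurrency is emulated with a thread pool rather than LangGraph's Send API, and research_agent is a placeholder for the real query-search-write agent:

```python
# Emulates Send-style fan-out: one independent research branch per section,
# executed concurrently, with results gathered back for the supervisor.
from concurrent.futures import ThreadPoolExecutor

def research_agent(section):
    # Placeholder: the real agent generates queries, searches the web,
    # and writes the chapter text.
    return f"[{section}] drafted"

def fan_out_sections(sections_list):
    with ThreadPoolExecutor() as pool:
        # map() preserves input order even though branches run in parallel
        return list(pool.map(research_agent, sections_list))
```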

Advantages include clear role separation, high parallelism, and fine‑grained control over tool usage, making it suitable for multi‑chapter, multi‑dimensional analyses.

Local deployment guide

Environment preparation

After cloning the repository, create a Conda environment and install dependencies:

# Create virtual environment
conda create -n open_deep_research python=3.12
conda activate open_deep_research
# Install project in editable mode
pip install -e .
# Install LangGraph CLI
pip install -U "langgraph-cli[inmem]"

Configure a .env file with required API keys:

Large‑model API (DeepSeek)

Web‑search tool (Tavily)

LangSmith for execution tracing
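A plausible .env layout for the three services above; the variable names are assumptions, so check the project's README for the exact names it reads:

```
# Large-model provider key (DeepSeek, per this setup)
DEEPSEEK_API_KEY=...
# Tavily web search
TAVILY_API_KEY=...
# LangSmith execution tracing
LANGSMITH_API_KEY=...
LANGSMITH_TRACING=true
```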

Starting the service

Run the development server:

langgraph dev

This command launches the LangGraph Studio UI, where users can input research topics, observe agent actions, and provide feedback via JSON input when prompted.

Workflow comparison

Graph: sequential with feedback loops; best for deep, iterative research on a single thread.

Multi‑Agent: parallel chapter processing with a supervisory coordinator; ideal for large‑scale, multi‑topic reports.

Key source files

Graph mode implementation: src/open_deep_research/graph.py

Multi‑Agent mode implementation: src/open_deep_research/multi-agent.py
Republication Notice

This article has been distilled and summarized from source material, then republished for learning and reference. If you believe it infringes your rights, please contact admin@besthub.dev and we will review it promptly.

Tags: AI agents, LangChain, Multi-Agent, LangGraph, OpenDeepResearch
Written by

Fun with Large Models

A master's graduate of Beijing Institute of Technology with four papers in top journals, formerly a developer at ByteDance and Alibaba, now researching large models at a major state‑owned enterprise. Committed to sharing concise, practical experience in large‑model development, in the belief that large AI models will become as essential as the PC. Let's start experimenting now!
