LangChain vs LangGraph: Choosing a Toolkit or an Orchestrator
The article compares LangChain and LangGraph by implementing the same three‑stage code‑review pipeline with identical agents and Gemini 2.5 Flash calls, showing when a linear toolkit suffices and when a state‑machine orchestrator becomes necessary.
Both LangChain and LangGraph are used for building LLM‑driven workflows, but they occupy different abstraction layers. LangChain is a modular toolkit that provides prompt templates, document loaders, retrievers, output parsers, and memory abstractions, which are linked linearly (A → B → C) like a conveyor belt. LangGraph adds an orchestration layer by modeling workflows as state machines where nodes are functions and edges represent conditional transitions, enabling loops, branches, retries, and human‑in‑the‑loop pauses.
Experiment: Same Pipeline with Two Frameworks
The test case is a three‑stage code‑review pipeline consisting of:
Context agent – fetches PR diff and repository history.
Analysis agent – locates issues in the changed code.
Review agent – produces structured feedback with severity scores.
In the real system, the analysis agent sometimes needs to re‑fetch context when confidence is low, which forces a decision point.
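That decision point can be sketched as plain control flow before bringing in either framework. The helpers, the 0.75 threshold, and the confidence schedule below are illustrative stand-ins, not part of the actual BugLens system:

```python
# Framework-agnostic sketch of the confidence-based retry loop.
# fetch_context, run_analysis, and the threshold are illustrative stubs.

def fetch_context(diff: str, attempt: int) -> str:
    # Stand-in for fetching the PR diff and repository history.
    return f"context(attempt={attempt}) for {diff}"

def run_analysis(context: str) -> dict:
    # Stand-in for the analysis agent; confidence rises with richer context.
    attempt = int(context.split("attempt=")[1].split(")")[0])
    return {"content": f"issues in {context}", "confidence": 0.5 + 0.2 * attempt}

def analyze_with_retries(diff: str, threshold: float = 0.75, max_iters: int = 3) -> dict:
    result = {}
    for attempt in range(1, max_iters + 1):
        context = fetch_context(diff, attempt)
        result = run_analysis(context)
        if result["confidence"] >= threshold:
            break  # confident enough: move on to the review stage
    return result
```

Both implementations below have to express exactly this loop; the question is whether the framework represents it natively or leaves it to hand-written glue.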
LangChain Implementation
from langchain_core.prompts import ChatPromptTemplate
from langchain_google_genai import ChatGoogleGenerativeAI
from langchain_core.output_parsers import StrOutputParser
llm = ChatGoogleGenerativeAI(model="gemini-2.5-flash")
context_chain = (
    ChatPromptTemplate.from_template("Analyze this PR diff: {diff}")
    | llm
    | StrOutputParser()
)

analysis_chain = (
    ChatPromptTemplate.from_template("Find issues in: {context}")
    | llm
    | StrOutputParser()
)

review_chain = (
    ChatPromptTemplate.from_template("Write review for: {analysis}")
    | llm
    | StrOutputParser()
)

# Linear execution
def run_pipeline(diff: str):
    context = context_chain.invoke({"diff": diff})
    analysis = analysis_chain.invoke({"context": context})
    review = review_chain.invoke({"analysis": analysis})
    return review

The code is concise and readable, but if the analysis step returns low confidence, the chain has no way to loop back: recovering requires manual if/else logic outside the chain to re-invoke steps and stitch the state back together.
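That manual stitching might look roughly like the sketch below. The chains are stubbed as plain callables so the control flow is visible, and score_confidence is a hypothetical helper, not a LangChain API:

```python
# Sketch of the retry glue a purely linear chain forces you to write.
# The chains are stubbed as plain callables; score_confidence is a
# hypothetical helper, not part of LangChain.

def context_chain(diff: str) -> str:
    return f"context for {diff}"

def analysis_chain(context: str) -> str:
    return f"analysis of {context}"

def score_confidence(analysis: str) -> float:
    # Stand-in for an LLM-based confidence estimate.
    return 0.9 if "retry" in analysis else 0.6

def review_chain(analysis: str) -> str:
    return f"review of {analysis}"

def run_pipeline_with_retry(diff: str, max_iters: int = 3) -> str:
    analysis = ""
    for i in range(max_iters):
        # State has to be threaded by hand on every retry.
        context = context_chain(diff if i == 0 else diff + " retry")
        analysis = analysis_chain(context)
        if score_confidence(analysis) >= 0.75:
            break
    return review_chain(analysis)
```

The loop, the threshold, and the state threading all live outside the chain abstraction, which is exactly the gap the LangGraph version closes.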
LangGraph Implementation
from langgraph.graph import StateGraph, END
from typing import TypedDict
class ReviewState(TypedDict):
    diff: str
    context: str
    analysis: str
    review: str
    confidence: float
    iterations: int

def context_node(state: ReviewState) -> ReviewState:
    context = fetch_context(state["diff"])
    return {**state, "context": context}

def analysis_node(state: ReviewState) -> ReviewState:
    result = run_analysis(state["context"])
    return {
        **state,
        "analysis": result["content"],
        "confidence": result["confidence"],
        "iterations": state.get("iterations", 0) + 1,
    }

def review_node(state: ReviewState) -> ReviewState:
    review = write_review(state["analysis"])
    return {**state, "review": review}

def should_loop(state: ReviewState) -> str:
    if state["confidence"] < 0.75 and state["iterations"] < 3:
        return "fetch_more_context"
    return "write_review"

graph = StateGraph(ReviewState)
graph.add_node("get_context", context_node)
graph.add_node("analyze", analysis_node)
graph.add_node("review", review_node)
graph.set_entry_point("get_context")
graph.add_edge("get_context", "analyze")
graph.add_conditional_edges("analyze", should_loop, {
    "fetch_more_context": "get_context",
    "write_review": "review",
})
graph.add_edge("review", END)
pipeline = graph.compile()

Although the LangGraph version adds boilerplate, it handles the confidence-based loop automatically, passes state between nodes for you, and supports native human-in-the-loop pauses via interrupt(). Adding new branches or retries requires only graph modifications, not restructured code.
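Conceptually, the compiled graph runs a dispatch loop over nodes and edges. The sketch below is a minimal pure-Python emulation of that idea to make the execution model concrete; it is not LangGraph's actual internals, and the node lambdas and confidence numbers are illustrative:

```python
# Minimal emulation of what a compiled state graph executes: a dispatch
# loop over nodes, with conditional edges choosing the next node.
# Illustration only, not LangGraph internals.

END = "__end__"

def run_graph(nodes, edges, entry, state):
    current = entry
    while current != END:
        state = nodes[current](state)           # run the current node
        edge = edges[current]                   # look up its outgoing edge
        current = edge(state) if callable(edge) else edge
    return state

nodes = {
    "get_context": lambda s: {**s, "context": "ctx"},
    "analyze": lambda s: {**s,
                          "confidence": s.get("confidence", 0.2) + 0.3,
                          "iterations": s.get("iterations", 0) + 1},
    "review": lambda s: {**s, "review": "done"},
}
edges = {
    "get_context": "analyze",
    # Conditional edge: loop back while confidence is low.
    "analyze": lambda s: ("get_context"
                          if s["confidence"] < 0.75 and s["iterations"] < 3
                          else "review"),
    "review": END,
}
```

Running it from "get_context" with an empty-ish state loops back once (confidence 0.5 on the first pass, 0.8 on the second) before reaching the review node, which is the behavior the conditional edge in the real graph encodes.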
When LangChain Still Excels
LangChain’s pipe syntax (chain1 | chain2 | chain3) is elegant for purely linear flows. In the BugLens system, stages that do not need branching—such as extracting a diff, summarizing a file, and formatting the final output—are kept in LangChain for simplicity.
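The pipe operator itself is ordinary operator overloading over composable steps. A minimal pure-Python sketch of the composition idea (mimicking the shape of LangChain's Runnable interface, not its real implementation):

```python
# Sketch of how `|` can compose steps into a linear chain.
# This mimics the idea behind LangChain's Runnable composition;
# it is not the real implementation.

class Step:
    def __init__(self, fn):
        self.fn = fn

    def __or__(self, other: "Step") -> "Step":
        # step1 | step2 yields a new step that runs them in order
        return Step(lambda x: other.fn(self.fn(x)))

    def invoke(self, x):
        return self.fn(x)

extract = Step(lambda pr: f"diff of {pr}")
summarize = Step(lambda diff: f"summary of {diff}")
fmt = Step(lambda s: s.upper())

pipeline = extract | summarize | fmt
```

The elegance comes at a price: composition like this can only ever express a straight line, which is why loops and branches need a different abstraction.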
Additionally, LangChain boasts a large ecosystem with over 600 ready‑to‑use integrations (vector stores, PDF loaders, external APIs), which currently has no direct counterpart in LangGraph.
Why LangChain v1.0 Already Uses LangGraph
Since LangChain v1.0 (2025), the internal Agent abstraction is built on top of LangGraph. The old AgentExecutor has been deprecated, and calls like create_react_agent() instantiate a LangGraph state machine under the hood.
The practical decision is not “LangChain vs LangGraph” but whether you need the higher‑level convenience of LangChain or the fine‑grained control of a state‑machine graph. A common strategy is to start with LangChain’s high‑level API and, when encountering loops, retries, or conditional branches, migrate the relevant part to LangGraph. Many production systems run both layers side by side.
Decision Framework
For simple RAG pipelines or single‑step LLM workflows, LangChain is faster to develop and cleaner. When multiple agents, conditional logic, retry behavior, or persistent state are required, LangGraph becomes the only viable option; trying to emulate that control flow in pure LangChain results in the same stitching that LangGraph was designed to eliminate.
by Satyabrata Mohanty
This article has been distilled and summarized from source material, then republished for learning and reference. If you believe it infringes your rights, please contact us and we will review it promptly.
DeepHub IMBA
