LangChain vs LangGraph: Choosing Between a Toolkit and an Orchestration Layer
This article compares LangChain and LangGraph by implementing the same three‑stage code‑review pipeline in both frameworks. LangChain offers a simple linear flow, while LangGraph provides state‑machine orchestration for loops, conditional branches, and retries. The article closes with guidance on when each approach is preferable.
What the Two Frameworks Do
LangChain is a modular toolkit that supplies prompt templates, document loaders, retrievers, output parsers, and memory abstractions, which are chained together in a linear A → B → C fashion, like a conveyor belt where data moves forward step by step.
LangGraph sits on top of LangChain as an orchestration layer. It models a workflow as a state machine: nodes are functions, edges are transitions that can be conditional. The graph can loop, branch, retry, or pause for human input, essentially forming a flowchart.
Since LangChain v1.0 (2025) builds its Agent abstraction on top of LangGraph, the two are not competing products but different abstraction layers of the same system.
Experiment: The Same Pipeline in Both Frameworks
The test case is a three‑stage code‑review pipeline:
Context agent – fetch PR diff and repository history.
Analysis agent – locate issues in the changed code.
Review agent – output structured feedback with severity scores.
In the real system (BugLens) the analysis agent sometimes needs to re‑fetch context when confidence is low, which forces a decision point.
LangChain Implementation
```python
from langchain_core.prompts import ChatPromptTemplate
from langchain_google_genai import ChatGoogleGenerativeAI
from langchain_core.output_parsers import StrOutputParser

llm = ChatGoogleGenerativeAI(model="gemini-2.5-flash")

context_chain = (
    ChatPromptTemplate.from_template("Analyze this PR diff: {diff}")
    | llm
    | StrOutputParser()
)

analysis_chain = (
    ChatPromptTemplate.from_template("Find issues in: {context}")
    | llm
    | StrOutputParser()
)

review_chain = (
    ChatPromptTemplate.from_template("Write review for: {analysis}")
    | llm
    | StrOutputParser()
)

# Linear execution: each chain's output feeds the next
def run_pipeline(diff: str):
    context = context_chain.invoke({"diff": diff})
    analysis = analysis_chain.invoke({"context": context})
    review = review_chain.invoke({"analysis": analysis})
    return review
```

The code is clean and can be written in about five minutes. However, if the analysis step returns low confidence, there is no built‑in way to loop back: the pipeline must be manually wrapped in if/else logic that re‑invokes the analysis and stitches the intermediate state back together.
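What that manual workaround looks like in practice is worth seeing. The sketch below is framework‑free: `run_analysis_with_confidence` and `fetch_more_context` are hypothetical stand‑ins for the real chains (they are not LangChain APIs), and the confidence numbers are fabricated for illustration.

```python
# Hypothetical stand-ins for the analysis chain and context fetcher;
# here confidence artificially grows as more context is appended.
def run_analysis_with_confidence(context: str) -> dict:
    return {
        "content": f"analysis of {context}",
        "confidence": min(0.3 + 0.25 * context.count("+"), 1.0),
    }

def fetch_more_context(context: str) -> str:
    return context + "+more"

def analyze_with_retries(context: str, threshold: float = 0.75, max_iters: int = 3) -> dict:
    # All the state LangGraph would carry for us (context, result,
    # iteration count) must be threaded through by hand.
    result = run_analysis_with_confidence(context)
    iterations = 1
    while result["confidence"] < threshold and iterations < max_iters:
        context = fetch_more_context(context)
        result = run_analysis_with_confidence(context)
        iterations += 1
    return {
        "analysis": result["content"],
        "confidence": result["confidence"],
        "iterations": iterations,
    }
```

This works, but the loop condition, iteration cap, and state plumbing all live in ad‑hoc application code, which is exactly the stitching the article refers to.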
LangGraph Implementation
```python
from langgraph.graph import StateGraph, END
from typing import TypedDict

class ReviewState(TypedDict):
    diff: str
    context: str
    analysis: str
    review: str
    confidence: float
    iterations: int

# fetch_context, run_analysis, and write_review are the application's
# own helpers (BugLens internals), not part of LangGraph.
def context_node(state: ReviewState) -> ReviewState:
    context = fetch_context(state["diff"])
    return {**state, "context": context}

def analysis_node(state: ReviewState) -> ReviewState:
    result = run_analysis(state["context"])
    return {
        **state,
        "analysis": result["content"],
        "confidence": result["confidence"],
        "iterations": state.get("iterations", 0) + 1,
    }

def review_node(state: ReviewState) -> ReviewState:
    review = write_review(state["analysis"])
    return {**state, "review": review}

def should_loop(state: ReviewState) -> str:
    # Route back for more context while confidence is low, up to 3 passes
    if state["confidence"] < 0.75 and state["iterations"] < 3:
        return "fetch_more_context"
    return "write_review"

graph = StateGraph(ReviewState)
graph.add_node("get_context", context_node)
graph.add_node("analyze", analysis_node)
graph.add_node("review", review_node)
graph.set_entry_point("get_context")
graph.add_edge("get_context", "analyze")
graph.add_conditional_edges("analyze", should_loop, {
    "fetch_more_context": "get_context",
    "write_review": "review",
})
graph.add_edge("review", END)
pipeline = graph.compile()
```

State flows automatically between nodes, and the confidence threshold drives a conditional edge that loops back to fetch more context. Adding a human‑in‑the‑loop step is as simple as inserting interrupt(); LangGraph supports it natively.
When LangChain Still Wins
For straightforward linear pipelines (e.g., a simple RAG flow or a single‑step LLM task), LangChain's pipe syntax `chain1 | chain2 | chain3` is concise and clean. Its ecosystem offers 600+ ready‑made integrations (vector stores, PDF loaders, external APIs) that no competing framework currently matches.
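The appeal of the pipe syntax is that it is essentially function composition. As a framework‑free illustration of the idea (this `Step` class is a toy, not LangChain's actual Runnable implementation), assuming each stage is a plain function:

```python
class Step:
    """Toy stand-in for a Runnable: wraps a function, overloads | for chaining."""
    def __init__(self, fn):
        self.fn = fn

    def __or__(self, other: "Step") -> "Step":
        # a | b produces a new Step that runs a, then feeds its output to b
        return Step(lambda x: other.fn(self.fn(x)))

    def invoke(self, x):
        return self.fn(x)

# Three toy stages, analogous to prompt | llm | parser
lowercase = Step(str.lower)
strip = Step(str.strip)
exclaim = Step(lambda s: s + "!")

chain = lowercase | strip | exclaim
# chain.invoke("  Hello World  ") → "hello world!"
```

The real LCEL runnables add batching, streaming, and async support on top, but the left‑to‑right data flow reads exactly like this.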
Why LangGraph Is the Only Option for Complex Workflows
When a workflow involves multiple agents, conditional logic, retries, or persistent state, LangGraph becomes essential. Trying to emulate this control flow inside pure LangChain results in the same stitched‑together state management that LangGraph was designed to replace.
Decision Framework
Use LangChain’s high‑level API first. If you hit a wall—need loops, retries, conditional branches, or persistence—drop down to LangGraph. Many production systems combine both: outer orchestration (GitHub webhook routing, retry management) with LangGraph, inner data processing (diff formatting, file summarization) with LangChain’s LCEL.
Conclusion
Build simple RAG pipelines or single‑step LLM flows with LangChain for speed and readability. As soon as the problem requires multi‑agent coordination, conditional branching, or state persistence, LangGraph is not just "better" but the only viable option: attempting to replicate its capabilities in pure LangChain leads back to reimplementing the same graph‑based design LangGraph already provides.
This article has been distilled and summarized from source material, then republished for learning and reference. If you believe it infringes your rights, please contact us and we will review it promptly.
Data Party THU
Official platform of Tsinghua Big Data Research Center, sharing the team's latest research, teaching updates, and big data news.
