Fast-Track LangChain 1.0: Core Upgrades and the New create_agent API
This guide walks through LangChain 1.0's three major upgrades: the new create_agent API that replaces the legacy agent builders, standardized content_blocks for unified model output, and a streamlined package structure. Along the way it shows how middleware hooks, built-in and custom middleware, and improved structured output simplify production-grade AI agent development.
Release
LangChain and LangGraph 1.0 were released on 2025‑10‑23. The official documentation site is https://docs.langchain.com/oss/python/langchain/overview.
Framework positioning
LangChain 1.0 no longer serves as the base for LangGraph; LangChain agents run on top of LangGraph, gaining native support for persistence, streaming responses, human‑in‑the‑loop control, and state storage.
Recommendation: use LangChain for quick agent or application creation, and use LangGraph when deterministic workflows, deep customization, or precise latency control are required.
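Because agents now run on the LangGraph runtime, persistence is a one-argument change. A minimal sketch, assuming create_agent forwards a checkpointer to the underlying graph (as langgraph.prebuilt.create_react_agent did) and that search_web is a placeholder tool defined elsewhere:
from langgraph.checkpoint.memory import InMemorySaver
from langchain.agents import create_agent

agent = create_agent(
    model="claude-sonnet-4-5-20250929",
    tools=[search_web],  # placeholder @tool from your own code
    checkpointer=InMemorySaver(),  # persistence supplied by LangGraph
)
config = {"configurable": {"thread_id": "session-1"}}
agent.invoke({"messages": [{"role": "user", "content": "Hi, I'm Alice."}]}, config)
# A second turn on the same thread_id resumes the stored conversation state.
agent.invoke({"messages": [{"role": "user", "content": "What's my name?"}]}, config)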
Core upgrade 1 – create_agent
create_agent replaces langgraph.prebuilt.create_react_agent and other quick-agent helpers. It implements a ReAct-style loop: the model receives a system prompt and a list of tools, decides which tool(s) to invoke (serially or in parallel), and terminates once it has gathered enough information to answer.
from langchain.agents import create_agent
agent = create_agent(
model="claude-sonnet-4-5-20250929",
tools=[search_web, analyze_data, send_email],
system_prompt="You are a helpful research assistant."
)
result = agent.invoke({"messages": [{"role": "user", "content": "Research AI safety trends"}]})
Middleware mechanism
create_agent exposes six hook points for inserting custom logic during execution (a minimal custom-hook sketch follows this list):
before_agent: before the agent runs – load memory, validate input.
before_model: before each model call – update the prompt, trim history.
wrap_model_call: around each model call – modify the request or response.
wrap_tool_call: around each tool call – alter or intercept execution.
after_model: after each model response – validate output, apply safety guards.
after_agent: after the agent finishes – save results, clean up.
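The sketch below subclasses AgentMiddleware and overrides two of these hooks. It assumes that the node-style hooks receive the agent state plus a LangGraph Runtime and may return an optional state update; the exact signatures and the search_web tool are assumptions for illustration, not verbatim from this article.
from typing import Any
from langchain.agents import create_agent, AgentState
from langchain.agents.middleware import AgentMiddleware
from langgraph.runtime import Runtime

class SafetyLoggingMiddleware(AgentMiddleware):
    """Illustrative only: log before each model call, run a crude guard after it."""

    def before_model(self, state: AgentState, runtime: Runtime) -> dict[str, Any] | None:
        print(f"Calling model with {len(state['messages'])} messages")
        return None  # no state update

    def after_model(self, state: AgentState, runtime: Runtime) -> dict[str, Any] | None:
        reply = state["messages"][-1]
        if "confidential" in str(reply.content).lower():
            print("Safety guard: reply mentions confidential material")
        return None

agent = create_agent(
    model="claude-sonnet-4-5-20250929",
    tools=[search_web],  # placeholder @tool from your own code
    middleware=[SafetyLoggingMiddleware()],
)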
Pre‑built middleware classes:
PIIMiddleware: automatically redacts or blocks sensitive information before it is sent to the model.
SummarizationMiddleware: condenses long conversation histories once they grow past a token threshold.
HumanInTheLoopMiddleware: requires human approval for designated tool calls.
from langchain.agents import create_agent
from langchain.agents.middleware import (
PIIMiddleware,
SummarizationMiddleware,
HumanInTheLoopMiddleware,
)
agent = create_agent(
model="claude-sonnet-4-5-20250929",
tools=[read_email, send_email],
middleware=[
PIIMiddleware("email", strategy="redact", apply_to_input=True),
PIIMiddleware(
"phone_number",
detector=r"(?:\+?\d{1,3}[\s.-]?)?(?:\(?\d{2,4}\)?[\s.-]?)?\d{3,4}[\s.-]?\d{4}",
strategy="block",
),
SummarizationMiddleware(model="claude-sonnet-4-5-20250929", max_tokens_before_summary=500),
HumanInTheLoopMiddleware(interrupt_on={"send_email": {"allowed_decisions": ["approve", "edit", "reject"]}}),
],
)
Custom middleware can be created by subclassing AgentMiddleware and overriding hook methods. The example below switches the model and tool set based on a Context.user_expertise field.
from dataclasses import dataclass
from typing import Callable
from langchain_openai import ChatOpenAI
from langchain.agents.middleware import AgentMiddleware, ModelRequest
from langchain.agents.middleware.types import ModelResponse
@dataclass
class Context:
    user_expertise: str = "beginner"

class ExpertiseBasedToolMiddleware(AgentMiddleware):
    def wrap_model_call(self, request: ModelRequest, handler: Callable[[ModelRequest], ModelResponse]) -> ModelResponse:
        if request.runtime.context.user_expertise == "expert":
            request.model = ChatOpenAI(model="gpt-5")
            request.tools = [advanced_search, data_analysis]
        else:
            request.model = ChatOpenAI(model="gpt-5-nano")
            request.tools = [simple_search, basic_calculator]
        return handler(request)
agent = create_agent(
model="claude-sonnet-4-5-20250929",
tools=[simple_search, advanced_search, basic_calculator, data_analysis],
middleware=[ExpertiseBasedToolMiddleware()],
context_schema=Context,
)
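At call time the runtime context is supplied with the input so the middleware can read request.runtime.context. A minimal usage sketch, assuming the compiled agent accepts LangGraph's context keyword on invoke:
result = agent.invoke(
    {"messages": [{"role": "user", "content": "Analyze this quarter's sales data"}]},
    context=Context(user_expertise="expert"),  # routes to gpt-5 and the advanced tools
)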
Core upgrade 2 – standardized content blocks
Responses now expose a content_blocks list. Each block contains a type field such as "reasoning", "text", or "tool_call", unifying provider‑specific tags like think or reason.
from langchain_anthropic import ChatAnthropic
model = ChatAnthropic(model="claude-sonnet-4-5-20250929")
response = model.invoke("What's the capital of France?")
for block in response.content_blocks:
    if block["type"] == "reasoning":
        print(f"Model reasoning: {block['reasoning']}")
    elif block["type"] == "text":
        print(f"Response: {block['text']}")
    elif block["type"] == "tool_call":
        print(f"Tool call: {block['name']}({block['args']})")
Input messages use the same unified scheme: an image is passed as a type: "image" content block instead of a provider‑specific payload.
from langchain_core.messages import HumanMessage
online_image_url = "https://dashscope.oss-cn-beijing.aliyuncs.com/images/dog_and_girl.jpeg"
message = HumanMessage(content=[
    {"type": "text", "text": "Please analyze this online image"},
    {"type": "image", "url": online_image_url},
])
response = model.invoke([message])
print("模型回复:", response.content)Core upgrade 3 – package structure simplification
Core upgrade 3 – package structure simplification
The new layout concentrates on core agent‑building modules; legacy functionality moves to langchain-classic. Key namespaces (see the import sketch below):
langchain.agents: create_agent, AgentState
langchain.messages: message types, content_blocks, trim_messages
langchain.tools: @tool, BaseTool
langchain.chat_models: init_chat_model, BaseChatModel
langchain.embeddings: Embeddings, init_embeddings
Migration guide: https://docs.langchain.com/oss/python/migrate/langchain-v1
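As a quick orientation, the same names can be imported straight from those namespaces; the snippet below is a sketch using only the symbols listed above, plus init_chat_model's ability to infer the provider from the model name:
from langchain.agents import create_agent, AgentState
from langchain.messages import trim_messages
from langchain.tools import tool, BaseTool
from langchain.chat_models import init_chat_model, BaseChatModel
from langchain.embeddings import init_embeddings

model = init_chat_model("claude-sonnet-4-5-20250929")  # provider inferred from the model name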
Installation
pip install -U langchain
Fun with Large Models
Master's graduate of Beijing Institute of Technology with four papers in top journals; previously a developer at ByteDance and Alibaba, now researching large models at a major state-owned enterprise. Committed to sharing concise, practical experience in large-model development, in the belief that large AI models will become as essential as the PC. Let's start experimenting now!
