Unlocking LangChain 1.0 create_agent: Advanced MCP, Structured Output, Memory & Middleware

This guide dives into the four advanced capabilities of LangChain 1.0's create_agent API—MCP tool integration, structured output, memory management, and middleware—showcasing practical examples such as an Amap MCP planner, Pydantic‑based response formatting, InMemorySaver chat history, and custom middleware for dynamic model selection.


1. Tools with MCP

The create_agent API supports the Model Context Protocol (MCP), an open standard that lets developers expose functions as reusable, plug-and-play tool services. LangChain fetches the tool specifications from the MCP server and forwards each call to it, so the tools execute inside the server process rather than in your application code.

1.1 Practical Example: Amap Map Planner

Install the MCP adapter into your environment (the Amap server also requires a Node.js runtime, since it is launched via npx):

conda activate langchainenvnew
pip install langchain-mcp-adapters

Connect to the Amap MCP service:

import asyncio
from langchain.chat_models import init_chat_model
from langchain_mcp_adapters.client import MultiServerMCPClient
from langchain.agents import create_agent

model = init_chat_model(
    model="deepseek-chat",
    base_url="https://api.deepseek.com",
    api_key="YOUR_DEEPSEEK_API_KEY"
)

mcp_client = MultiServerMCPClient({
    "amap-maps": {
        "command": "cmd",
        "args": ["/c", "npx", "-y", "@amap/amap-maps-mcp-server"],
        "env": {"AMAP_MAPS_API_KEY": "YOUR_AMAP_API_KEY"},
        "transport": "stdio"
    }
})

async def get_server_tools():
    tools = await mcp_client.get_tools()
    print(f"Loaded {len(tools)} tools: {[t.name for t in tools]}")

asyncio.run(get_server_tools())

After confirming the MCP connection, build the agent:

async def run_agent():
    mcp_tools = await mcp_client.get_tools()
    print(f"Loaded {len(mcp_tools)} tools: {[t.name for t in mcp_tools]}")
    agent_with_mcp = create_agent(
        model,
        tools=mcp_tools,
        system_prompt="You are an Amap route-planning assistant that can plan itineraries and look up basic map information."
    )
    result = await agent_with_mcp.ainvoke({
        "messages": [{"role": "user", "content": "Please tell me the distance from 圆明园 (the Old Summer Palace) to 西北旺地铁站 (Xibeiwang Subway Station) in Beijing"}]
    })
    for msg in result['messages']:
        msg.pretty_print()

asyncio.run(run_agent())

The agent first calls maps_geo to obtain coordinates, then maps_distance to compute the distance, finally returning a natural‑language answer.
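Under the hood this is a simple two-step tool chain: geocode each address, then compute a distance between the coordinates. A self-contained sketch with stand-in tools (the coordinates and the haversine formula here are purely illustrative; the real Amap MCP tools return authoritative values):

```python
import math

# Toy stand-ins for the two MCP tools. The names mirror the Amap server's
# maps_geo / maps_distance; the coordinates are illustrative, not authoritative.
GEO_DB = {
    "圆明园": (40.0080, 116.2980),
    "西北旺地铁站": (40.0580, 116.2780),
}

def maps_geo(address: str) -> tuple[float, float]:
    """Resolve an address to (lat, lon), as the real maps_geo tool would."""
    return GEO_DB[address]

def maps_distance(a: tuple[float, float], b: tuple[float, float]) -> float:
    """Great-circle (haversine) distance in kilometres between two points."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * math.asin(math.sqrt(h))

# The agent's tool chain, spelled out by hand:
p1 = maps_geo("圆明园")
p2 = maps_geo("西北旺地铁站")
distance_km = maps_distance(p1, p2)
print(f"{distance_km:.1f} km")
```

The agent performs exactly this sequencing on its own: the model decides to call the geocoding tool twice, feeds both results into the distance tool, and then verbalizes the number.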

2. Structured Output

LangChain’s create_agent supports structured output, allowing agents to return JSON‑compatible data. Using the same Amap example, the desired format is:

{
    "loc1": "address1",
    "loc2": "address2",
    "distance": "distance_value"
}

2.1 Implementation

Define a Pydantic model for the response:

from langchain.agents.structured_output import AutoStrategy
from pydantic import BaseModel

class Result(BaseModel):
    loc1: str
    loc2: str
    distance: float

Pass the model to create_agent via response_format:

agent_with_mcp = create_agent(
    model,
    tools=mcp_tools,
    system_prompt="You are an Amap route-planning assistant that can plan itineraries and look up basic map information.",
    response_format=AutoStrategy(Result)
)

Running the agent now yields a structured Result instance.
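To see what the typed result buys you, here is a minimal sketch that constructs the same Result by hand (the field values are made up for illustration; in practice create_agent fills them from the tool calls, typically returning the parsed object under the result's structured_response key):

```python
from pydantic import BaseModel

class Result(BaseModel):
    loc1: str
    loc2: str
    distance: float

# Stand-in values for what the Amap tools would return; illustrative only.
structured = Result(loc1="圆明园", loc2="西北旺地铁站", distance=5800.0)

# Fields are typed attributes, not strings to be re-parsed from model output.
print(f"{structured.loc1} -> {structured.loc2}: {structured.distance} m")
```

Because Pydantic validates on construction, a model response whose distance is not numeric fails fast with a ValidationError instead of silently producing malformed JSON downstream.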

2.2 Strategy Choices

ToolStrategy: produces structured output through a synthetic tool call whose arguments the model must fill in; works with any model that supports tool calling.

ProviderStrategy: uses the provider's native structured-output support (e.g., OpenAI's structured outputs).

AutoStrategy: prefers ProviderStrategy when the provider supports it, otherwise falls back to ToolStrategy.

For most projects, AutoStrategy offers the best compatibility with minimal configuration.

3. Memory Management

By default, agents do not retain conversation history across calls. To enable multi‑turn memory, configure create_agent with a checkpointer such as InMemorySaver and pass a thread_id via the configurable parameter.
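Conceptually, a checkpointer keyed by thread_id is just a per-thread store of message history. A toy stdlib sketch of the mechanism (not LangGraph's actual InMemorySaver, which also snapshots full graph state):

```python
from collections import defaultdict

class ToyMemorySaver:
    """Minimal stand-in for a checkpointer: one message list per thread_id."""

    def __init__(self):
        self._threads = defaultdict(list)

    def load(self, thread_id: str) -> list:
        # Each turn starts by loading everything said so far on this thread.
        return list(self._threads[thread_id])

    def save(self, thread_id: str, message: str) -> None:
        # Each turn ends by appending the new messages to the thread.
        self._threads[thread_id].append(message)

saver = ToyMemorySaver()
saver.save("1", "user: my name is Alice")
saver.save("1", "assistant: hi Alice")

# A second turn on thread "1" sees the earlier messages; thread "2" is empty.
history = saver.load("1")
print(len(history), len(saver.load("2")))
```

This is why the thread_id in the configurable dict matters: it selects which history the agent resumes from, and a new thread_id starts a fresh conversation.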

from langgraph.checkpoint.memory import InMemorySaver

agent = create_agent(
    model=model,
    checkpointer=InMemorySaver()
)

# First turn
result = agent.invoke({"messages": [{"role": "user", "content": "Hello, my name is 苍进空"}]}, {"configurable": {"thread_id": "1"}})
for msg in result['messages']:
    msg.pretty_print()

# Second turn – memory is preserved
result = agent.invoke({"messages": [{"role": "user", "content": "What is my name?"}]}, {"configurable": {"thread_id": "1"}})
for msg in result['messages']:
    msg.pretty_print()

The output shows that the agent now remembers the name "苍进空" across turns.

LangChain’s memory layer is built on LangGraph’s checkpoint system (langgraph.checkpoint.memory), separating the core agent logic from state handling.

4. Middleware Mechanism

Middleware hooks into the agent lifecycle, allowing developers to intervene at key points: before the agent runs, before/after each model call, and after tool execution. Built-in middlewares include:

PIIMiddleware: masks personal data before it is sent to the model.

SummarizationMiddleware: compresses long conversation histories.

HumanInTheLoopMiddleware: pauses execution for manual approval of sensitive tool calls.
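The wrap-style hooks compose like ordinary function decorators: each middleware receives the request plus a handler for the rest of the chain, and may rewrite the request, short-circuit, or post-process the result. A stdlib toy of the mechanism (illustrative only; the real hook signatures live in langchain.agents.middleware):

```python
from typing import Callable

def base_model_call(request: dict) -> str:
    """Stands in for the actual model invocation at the end of the chain."""
    return f"model saw: {request['prompt']}"

def pii_masking_middleware(handler: Callable[[dict], str]) -> Callable[[dict], str]:
    """Toy analogue of PIIMiddleware: scrub the prompt before the model sees it."""
    def wrapped(request: dict) -> str:
        # Rewrite the request, then delegate to the next layer in the chain.
        masked = {**request, "prompt": request["prompt"].replace("555-0100", "[PHONE]")}
        return handler(masked)
    return wrapped

call = pii_masking_middleware(base_model_call)
print(call({"prompt": "call me at 555-0100"}))
```

Stacking several middlewares is just repeated wrapping, which is why ordering in the middleware list matters: the first entry sees the request first and the response last.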

Example of HumanInTheLoopMiddleware with a weather tool:

from langchain.agents import create_agent
from langchain_core.tools import tool
from langchain.agents.middleware import HumanInTheLoopMiddleware
from langchain.chat_models import init_chat_model
from langgraph.checkpoint.memory import InMemorySaver
from langgraph.types import Command

@tool
def get_weather(loc: str) -> str:
    """Return weather for the given location."""
    return f"The weather in {loc} is sunny, 23°C!"

SYSTEM_PROMPT = "You are a weather assistant that can call the get_weather function to look up the weather for a given location."

model = init_chat_model(
    model="deepseek-chat",
    base_url="https://api.deepseek.com",
    api_key="YOUR_DEEPSEEK_API_KEY"
)

agent = create_agent(
    model=model,
    tools=[get_weather],
    system_prompt=SYSTEM_PROMPT,
    middleware=[HumanInTheLoopMiddleware(interrupt_on={"get_weather": {"allowed_decisions": ["approve", "reject"]}})],
    checkpointer=InMemorySaver()
)

config = {"configurable": {"thread_id": "1"}}
result = agent.invoke({"messages": [{"role": "user", "content": "What's the weather in Beijing today?"}]}, config)
if "__interrupt__" in result:
    result = agent.invoke(Command(resume={"decisions": [{"type": "approve"}]}), config)
for msg in result['messages']:
    msg.pretty_print()

The middleware intercepts the tool call, presents an approval prompt, and resumes execution based on the user’s decision.
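That control flow can be sketched in plain Python (stdlib only; the real mechanism is LangGraph's __interrupt__ result key plus Command(resume=...), as in the snippet above):

```python
def run_with_approval(tool_call, decision: str) -> str:
    """Toy interrupt/resume loop: pause before the tool, act on the decision.

    `decision` stands in for the human's resume payload
    ({"type": "approve"} or {"type": "reject"}); illustrative only.
    """
    if decision == "approve":
        return tool_call()          # resume: execute the intercepted tool call
    return "tool call rejected"     # reject: the tool never runs

def weather_tool() -> str:
    # Stand-in for the intercepted get_weather("北京") call.
    return "Beijing: sunny, 23°C"

print(run_with_approval(weather_tool, "approve"))
print(run_with_approval(weather_tool, "reject"))
```

The key property the sketch preserves: on reject, the tool body is never executed, so side effects (payments, deletions, API writes) cannot happen without approval.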

4.2 Custom Middleware – Dynamic Model Selection

By inspecting a user’s expertise level stored in a context object, a custom middleware can switch between a powerful model (DeepSeek) and a cheaper one (Qwen‑3‑8B):

from dataclasses import dataclass
from typing import Callable
from langchain.agents import create_agent
from langchain.agents.middleware import AgentMiddleware, ModelRequest, ModelResponse
from langchain.chat_models import init_chat_model

@dataclass
class Context:
    user_level: str = "expert"

deepseek_model = init_chat_model(
    model="deepseek-reasoner",
    base_url="https://api.deepseek.com",
    api_key="YOUR_DEEPSEEK_API_KEY"
)
qwen_model = init_chat_model(
    model="Qwen/Qwen3-8B",
    model_provider="openai",
    base_url="https://api.siliconflow.cn/v1/",
    api_key="YOUR_SILICONFLOW_API_KEY"
)

class ExpertiseBasedToolMiddleware(AgentMiddleware):
    def wrap_model_call(self, request: ModelRequest, handler: Callable[[ModelRequest], ModelResponse]) -> ModelResponse:
        user_level = request.runtime.context.user_level
        if user_level == "expert":
            request.model = deepseek_model
        else:
            request.model = qwen_model
        return handler(request)

agent = create_agent(
    model=qwen_model,
    tools=[],
    middleware=[ExpertiseBasedToolMiddleware()],
    context_schema=Context
)

question = "Hello, who are you?"
for step in agent.stream({"messages": [{"role": "user", "content": question}]}, context=Context(user_level="expert"), stream_mode="values"):
    step['messages'][-1].pretty_print()

When user_level="expert", the agent uses DeepSeek; otherwise it falls back to Qwen‑3‑8B, demonstrating flexible resource allocation.

5. Summary

This article presented a deep dive into LangChain 1.0's create_agent API, covering four advanced capabilities: MCP tool integration, structured output via Pydantic models and strategy selection, persistent multi-turn memory with InMemorySaver, and a flexible middleware system for safety, summarization, and dynamic behavior. The code snippets above illustrate how to combine these pieces into robust, controllable AI agents ready for production use.

Tags: AI agents, MCP, LangChain, structured output, create_agent
Written by

Fun with Large Models

Master's graduate from Beijing Institute of Technology, published four top‑journal papers, previously worked as a developer at ByteDance and Alibaba. Currently researching large models at a major state‑owned enterprise. Committed to sharing concise, practical AI large‑model development experience, believing that AI large models will become as essential as PCs in the future. Let's start experimenting now!
