Mastering LangChain 1.0’s create_agent API: Basics, Message Types, and Stream Modes

This tutorial walks through setting up a Python environment, explains the three essential components of LangChain 1.0’s create_agent API, details the built‑in message types, and demonstrates four streaming output modes using a weather‑assistant example to help developers quickly adopt the new agent framework.


The article introduces LangChain 1.0’s new create_agent API as a standard way to build AI agents.

1. Environment Setup

Use Anaconda to create a virtual environment named langchainenvnew with Python 3.12, then install the required packages.

conda create -n langchainenvnew python=3.12
conda activate langchainenvnew
pip install -U langchain langchain-community langchain-deepseek
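After installing, a quick sanity check (standard library only; the package names match the pip command above) confirms everything is importable:

```python
# Print the installed version of each package from the install step above.
import importlib.metadata as md

for pkg in ("langchain", "langchain-community", "langchain-deepseek"):
    try:
        print(pkg, md.version(pkg))
    except md.PackageNotFoundError:
        print(pkg, "NOT INSTALLED")
```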

2. The three essentials of create_agent

The API requires a system prompt, a large language model, and one or more tool functions. A weather‑assistant example illustrates the workflow.

from langchain.chat_models import init_chat_model
from langchain.tools import tool
from langchain.agents import create_agent

@tool
def get_weather(loc: str) -> str:
    """Return the weather for the given location."""
    return f"The weather in {loc} is sunny! Temperature 23°C"

SYSTEM_PROMPT = "You are a weather assistant with the ability to call the get_weather function to look up the weather for a specified location."
model = init_chat_model(
    model="deepseek-chat",
    base_url="https://api.deepseek.com",
    api_key="your_deepseek_api_key"
)
agent = create_agent(model=model, tools=[get_weather], system_prompt=SYSTEM_PROMPT)

Run a query and stream the intermediate results:

question = "What's the weather like in Beijing?"
for step in agent.stream({"messages": [{"role": "user", "content": question}]}, stream_mode="values"):
    step["messages"][-1].pretty_print()

3. Model and message mechanisms

LangChain integrates over 100 model providers; the example uses DeepSeek. The framework defines four message classes:

SystemMessage: sets the agent's role and tool capabilities.

HumanMessage: the user's query or follow-up input.

ToolMessage: carries the result of a tool call, linked to the request by its tool_call_id.

AIMessage: the model-generated response, which may also contain tool-call requests.

All messages are stored in the messages history, enabling multi‑turn reasoning and context preservation.

4. Stream output modes

Four streaming options are covered:

invoke (no streaming): agent.invoke(...) returns only the final result in a single step.

values: emits the full message list after each intermediate step (HumanMessage, AIMessage with tool calls, ToolMessage, final AIMessage).

messages: streams model output token by token as (chunk, metadata) tuples.

custom: emits user-defined data written inside tool functions via get_stream_writer().

Code snippets demonstrate each mode, including combining multiple modes in a single call.

5. Summary

The guide walks through environment preparation, the three core components of create_agent, the LangChain message system, and all four streaming modes using a concrete weather‑assistant scenario, equipping developers with the fundamentals needed to build more advanced agents.

Tags: Python, AI agents, LangChain, Tool Calling, message types, create_agent
Written by Fun with Large Models

Master's graduate from Beijing Institute of Technology, published four top‑journal papers, previously worked as a developer at ByteDance and Alibaba. Currently researching large models at a major state‑owned enterprise. Committed to sharing concise, practical AI large‑model development experience, believing that AI large models will become as essential as PCs in the future. Let's start experimenting now!
