Build Your First LangChain Agent: A Hands‑On Framework Tutorial

This article walks through a practical, step‑by‑step construction of a LangChain agent—from basic concepts and a simple weather‑query agent to a more complex market‑research agent, adding memory and RAG capabilities, and finally comparing LangChain with LangGraph.


What Is LangChain?

LangChain is an AI application development framework that connects large models, tools, and knowledge bases, allowing developers to assemble AI applications quickly. Without it, developers must write code for API calls, error handling, context management, and output parsing manually.

Using LangChain is like building a PC from pre‑built modular parts: the framework supplies ready‑made blocks (model, tools, knowledge store) that you assemble according to your own design.

Core Concepts

Model : LangChain supports many large models (OpenAI GPT series, Anthropic Claude, Chinese models such as Wenxin, Tongyi, GLM, and local models like Llama and Qwen). Switching models only requires changing the model name.
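
For example, with the corresponding integration packages installed, swapping providers is essentially a one‑line change. A minimal sketch; the model names below are illustrative:

from langchain_openai import ChatOpenAI
# from langchain_anthropic import ChatAnthropic

llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)
# Switching providers means swapping the constructor and the model name:
# llm = ChatAnthropic(model="claude-3-5-sonnet-latest", temperature=0)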

Prompt : A prompt is the instruction given to a model. Prompt templates contain variables that are filled at runtime, e.g., "You are a {role} assistant, the user asks: {question}".
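
A minimal sketch of that template with LangChain's PromptTemplate:

from langchain_core.prompts import PromptTemplate

prompt = PromptTemplate.from_template(
    "You are a {role} assistant, the user asks: {question}"
)
# Variables are filled in at runtime:
print(prompt.format(role="weather", question="Will it rain tomorrow?"))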

Chain : A Chain links components into a processing pipeline. The simplest chain is Prompt → Model → Output. More complex chains can include parsing, conditional logic, and nested sub‑chains.
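
In current LangChain versions, chains are composed with the pipe operator (LCEL). A minimal sketch, reusing the llm and prompt objects from above:

from langchain_core.output_parsers import StrOutputParser

chain = prompt | llm | StrOutputParser()  # Prompt -> Model -> Output
answer = chain.invoke({"role": "weather", "question": "Will it rain tomorrow?"})
print(answer)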

Tool : Tools are external capabilities (search engines, databases, APIs, code execution, file operations) that an agent can invoke. Defining a tool requires a name, a description, and an implementation function.
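
Besides the @tool decorator used in the examples below, a tool can also be declared explicitly with those three parts. A sketch with a stubbed implementation:

from langchain.tools import Tool

search_tool = Tool(
    name="web_search",  # the name the agent uses to refer to the tool
    description="Search the web for up-to-date information",  # tells the LLM when to pick it
    func=lambda query: f"(stub) results for: {query}",  # the implementation function
)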

Building a Simple Weather Agent

Goal : An agent that can answer weather queries.

Step 1 – Define the tool :

from langchain.tools import tool

@tool
def get_weather(city: str) -> str:
    """Get city weather"""
    # In a real project, call a real weather API here
    return f"{city} today is sunny, 15-25°C"

Step 2 – Create the agent :

from langchain.agents import initialize_agent, AgentType

tools = [get_weather]
agent = initialize_agent(
    tools,
    llm,  # the chat model created earlier
    agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
    verbose=True
)

The ZERO_SHOT_REACT_DESCRIPTION agent follows the ReAct (Reason + Act) pattern: it reasons about whether a tool is needed, calls the tool, observes the result, and repeats until it can produce a final answer.

Step 3 – Run the agent :

result = agent.invoke("What's the weather like in Beijing today?")
print(result["output"])

The execution trace looks like:

> Entering new AgentExecutor chain...
Thought: The user asks about Beijing's weather; I need to call the weather tool.
Action: get_weather
Action Input: Beijing
Observation: Beijing today is sunny, 15-25°C
Thought: I have the weather information, I can answer now.
Final Answer: Beijing is sunny today, 15-25°C.
> Finished chain.

Building a More Complex Market‑Research Agent

Goal : When the user says "Help me research the new‑energy vehicle market", the agent automatically searches industry data, gathers competitor information, and generates a report.

Step 1 – Define tools :

@tool
def search_industry(keyword: str) -> str:
    """Search industry data and reports"""
    return f"{keyword} industry 2024 market size ~5000B, YoY growth 25%."

@tool
def search_competitor(keyword: str) -> str:
    """Search competitor info and market share"""
    return f"{keyword} main competitors: Brand A 35%, Brand B 28%, Brand C 20%."

@tool
def generate_report(data: str) -> str:
    """Generate a research report from collected data"""
    # The zero-shot ReAct agent passes tool input as a single string,
    # so the industry and competitor findings arrive here as one text blob
    return f"# Market Research Report\n\n{data}"

Step 2 – Initialize the agent with these three tools (same ZERO_SHOT_REACT_DESCRIPTION setup as before).

Step 3 – Run the agent :

result = agent.invoke("Help me research the new-energy vehicle market")
print(result["output"])

The agent plans the workflow, calls the industry‑search tool, then the competitor‑search tool, and finally the report‑generation tool.

Adding a Memory System

Without memory, each turn is independent. To retain conversation context, LangChain provides ConversationSummaryMemory, which uses the LLM to store a compressed summary of past dialogue rather than the full transcript.

from langchain.memory import ConversationSummaryMemory

memory = ConversationSummaryMemory(
    llm=llm,
    memory_key="chat_history",
    return_messages=True
)

Creating a conversational agent:

agent = initialize_agent(
    tools,
    llm,
    agent=AgentType.CONVERSATIONAL_REACT_DESCRIPTION,
    memory=memory,
    verbose=True
)

Interaction example:

agent.invoke("我叫张三")
agent.invoke("我叫什么呢?")

Output:

"好的,张三,我记住了。"

"您叫张三。"

Adding RAG (Retrieval‑Augmented Generation)

First, create a vector store (Chroma) and add documents:

from langchain_openai import OpenAIEmbeddings
from langchain_community.vectorstores import Chroma

embeddings = OpenAIEmbeddings()
vectorstore = Chroma(
    persist_directory="./knowledge_base",
    embedding_function=embeddings
)
# `documents` is assumed to be a list of Document objects prepared earlier
# (e.g., loaded and split with a document loader and text splitter)
vectorstore.add_documents(documents)

Then create a retriever that returns the top 3 most similar documents:

retriever = vectorstore.as_retriever(search_kwargs={"k": 3})
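
Before wiring it into the agent, it is worth sanity-checking the retriever on its own (the query string here is illustrative):

docs = retriever.invoke("new-energy vehicle market size")
for doc in docs:
    print(doc.page_content)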

Finally, wrap the retriever as a tool (a bare retriever cannot go directly into the tool list) and register it with the agent; the tool name and description below are illustrative:

from langchain.tools.retriever import create_retriever_tool

retriever_tool = create_retriever_tool(
    retriever, name="knowledge_base",
    description="Search the internal knowledge base"
)
agent = initialize_agent(
    tools + [retriever_tool], llm,
    agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION
)

Full Architecture Overview

The complete LangChain agent pipeline consists of:

User Input Layer : receives the user’s request.

Memory Layer : checks for relevant conversation history.

Tool Layer : decides which tools (e.g., industry search, competitor analysis, retriever) to invoke.

LLM Layer : performs reasoning, selects tools, and generates the final answer.

Output Layer : returns the answer and optionally stores the dialogue in memory.
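
Putting the layers together, a minimal end-to-end assembly might look like this, reusing the tools, memory, and retriever tool defined earlier:

agent = initialize_agent(
    tools + [retriever_tool],  # Tool Layer (search, analysis, retrieval)
    llm,                       # LLM Layer (reasoning and tool selection)
    agent=AgentType.CONVERSATIONAL_REACT_DESCRIPTION,
    memory=memory,             # Memory Layer (conversation summary)
    verbose=True
)

result = agent.invoke("Help me research the new-energy vehicle market")  # User Input Layer
print(result["output"])  # Output Layer; the exchange is also written back to memory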

LangChain vs. LangGraph

LangChain centers on the linear Chain abstraction, ideal for fixed, step‑by‑step workflows such as "question → knowledge retrieval → answer".

LangGraph uses a graph abstraction with nodes and edges, suitable for complex workflows that require branching, loops, retries, or multi‑agent collaboration.
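
To make the contrast concrete, here is a minimal LangGraph sketch (the node logic is illustrative) showing explicit state, a branch, and a retry loop, none of which fit naturally into a linear Chain:

from typing import TypedDict
from langgraph.graph import StateGraph, END

class State(TypedDict):
    question: str
    answer: str
    retries: int

def research(state: State) -> State:
    # Call tools / an LLM here; stubbed out for illustration
    return {**state, "answer": "draft answer", "retries": state["retries"] + 1}

def check(state: State) -> str:
    # Branch: loop back while the answer is empty (up to 3 tries), else finish
    return "retry" if state["answer"] == "" and state["retries"] < 3 else "done"

graph = StateGraph(State)
graph.add_node("research", research)
graph.set_entry_point("research")
graph.add_conditional_edges("research", check, {"retry": "research", "done": END})

app = graph.compile()
result = app.invoke({"question": "new-energy vehicle market", "answer": "", "retries": 0})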

When to choose LangChain : rapid prototyping, simple linear tool calls, single‑agent scenarios.

When to choose LangGraph : multi‑turn conversations with loops, error‑retry flows, multi‑agent coordination, or explicit state management.

Practical advice : start with LangChain to validate ideas; if expressive power becomes insufficient, migrate to LangGraph.

Author’s Viewpoint

The framework is a means, not an end. The real value lies in well‑designed tools, clear prompts, and a high‑quality knowledge base. Even the best framework cannot compensate for poor tool definitions, ambiguous prompts, or incomplete knowledge sources.

Guideline: use LangChain for quick validation, LangGraph for production‑grade complex applications, or bypass frameworks entirely and call APIs directly if you prefer.

Next Issue Preview

The upcoming article will enumerate fifteen typical agent use‑cases—from customer service to code generation—showing what agents can already do and which scenarios are still exploratory.
