Mastering LangChain: Build LLM Apps with Chains, Agents, and Vector Stores
This tutorial walks through the limitations of simple prompt usage, introduces LangChain as a framework for building full‑featured LLM applications, explains its core concepts and components, and provides step‑by‑step code examples for installing, configuring, and running a basic LangChain demo.
Single-prompt calls to large language models (LLMs) suffer from a lack of answer verification, no memory, token-length limits, and inefficient multi-step reasoning. Without a framework, developers must manually handle component integration, context management, and workflow orchestration to build complete LLM applications.
What Is LangChain
LangChain is a framework that abstracts LLM APIs, provides standardized tooling, and simplifies the development of applications powered by large language models.
Core Features
LLM invocation supporting providers such as OpenAI, Hugging Face, and Azure, with optional response caching.
Prompt management with document loaders for PDF, Markdown, etc.
Indexing utilities: document splitters, vectorization, and integration with vector stores such as Chroma, Pinecone, Qdrant.
Chain orchestration including LLMChain and various tool chains.
Key Concepts
LLM Model and Prompt
LangChain unifies the APIs of different LLM providers and offers a template system for managing prompts.
Chain
A Chain represents a single task; multiple Chains can be linked sequentially to form a workflow.
LCEL (LangChain Expression Language)
LCEL lets developers express complex workflow logic—conditionals, loops, and branching—directly in code.
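To make the pipe-composition idea concrete, here is a toy sketch (not the real LangChain classes) of how LCEL-style chaining works: each step wraps a function, and `a | b` builds a new step that feeds `a`'s output into `b`. The `Step` class and the stand-in "model" are illustrative only.

```python
# Toy illustration of LCEL-style composition: the pipe operator
# builds a new step that runs the left step, then the right one.
class Step:
    def __init__(self, fn):
        self.fn = fn

    def __or__(self, other):
        # Compose: output of self becomes input of other.
        return Step(lambda x: other.fn(self.fn(x)))

    def invoke(self, x):
        return self.fn(x)

prompt = Step(lambda d: f"You are an expert. {d['input']}")
llm = Step(lambda p: p.upper())        # stand-in for a real model call
parser = Step(lambda text: text.strip())

chain = prompt | llm | parser
print(chain.invoke({"input": "hello"}))  # YOU ARE AN EXPERT. HELLO
```

The real `prompt | llm | StrOutputParser()` pipeline used later in this tutorial composes in exactly this spirit, with LangChain's `Runnable` interface doing the plumbing.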
RAG (Retrieval‑Augmented Generation)
RAG enriches LLM responses by retrieving relevant information from external knowledge bases and feeding it into the model.
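A minimal sketch of the RAG idea, with illustrative stand-ins: a keyword-overlap "retriever" picks the most relevant document, and the retrieved text is stuffed into the prompt before the (hypothetical) LLM call. Real pipelines use embeddings and a vector store instead of word overlap.

```python
# Minimal RAG sketch: retrieve relevant context, then stuff it
# into the prompt so the model answers from the knowledge base.
docs = [
    "LangChain is a framework for building LLM applications.",
    "Chroma is an open-source vector database.",
]

def retrieve(query, docs):
    # Score each document by how many query words it contains.
    words = set(query.lower().replace("?", "").split())
    return max(docs, key=lambda d: len(words & set(d.lower().split())))

def build_prompt(query):
    context = retrieve(query, docs)
    return f"Answer using this context:\n{context}\n\nQuestion: {query}"

print(build_prompt("What is LangChain?"))
```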
Agents
Agents combine LLM reasoning with tool usage to automatically invoke external systems based on user intent, enabling use cases such as chatbots, knowledge‑base QA, and AI‑assisted writing.
Memory
LangChain provides memory modules that store conversation history or long‑term context, allowing the LLM to reference prior interactions.
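The simplest memory strategy is a buffer: store past turns and prepend them to each new prompt so the model can see prior context. LangChain ships real memory classes for this; the sketch below just shows the idea with a hypothetical `BufferMemory`.

```python
# Illustrative buffer-style memory: conversation turns are stored
# and rendered into the next prompt as context.
class BufferMemory:
    def __init__(self):
        self.history = []

    def add(self, role, text):
        self.history.append(f"{role}: {text}")

    def render(self, new_input):
        # Prepend all prior turns to the new user input.
        return "\n".join(self.history + [f"user: {new_input}"])

memory = BufferMemory()
memory.add("user", "My name is Alice.")
memory.add("assistant", "Nice to meet you, Alice!")
print(memory.render("What is my name?"))
```

Because the history is replayed verbatim, buffer memory eventually hits the token-length limits mentioned earlier; summarizing or windowing strategies trade detail for length.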
Output Parsers
After the LLM returns raw text (often Markdown), output parsers convert it into structured formats like JSON or Python objects.
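A sketch of what such a parser does by hand: model replies often wrap JSON in a Markdown code fence, and the parser extracts and decodes it. LangChain provides parsers for this; the regex-based `parse_json_reply` below is an illustrative stand-in, not the library's implementation.

```python
# Hand-rolled output parser: pull a JSON object out of a
# Markdown-fenced LLM reply and decode it into a Python dict.
import json
import re

raw_reply = 'Here is the result:\n```json\n{"title": "AI", "words": 100}\n```'

def parse_json_reply(text):
    # Look for a fenced JSON block; fall back to parsing the whole text.
    match = re.search(r"```(?:json)?\s*(\{.*?\})\s*```", text, re.DOTALL)
    return json.loads(match.group(1) if match else text)

data = parse_json_reply(raw_reply)
print(data["title"], data["words"])  # AI 100
```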
Vector Stores
Documents are embedded into numeric vectors and stored in a vector database so that similarity search can be performed; LangChain supports many back‑ends (e.g., Chroma, Pinecone, Qdrant).
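To show what a vector store does under the hood, here is a toy similarity search over hand-made vectors: embed documents (here, hypothetical 3-dimensional vectors written by hand), then return the document whose vector is closest to the query vector by cosine similarity.

```python
# Toy vector store: documents mapped to vectors, queried by
# cosine similarity.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Hypothetical embeddings; real ones come from an embedding model.
store = {
    "LangChain builds LLM apps": [0.9, 0.1, 0.0],
    "Chroma stores vectors":     [0.1, 0.9, 0.2],
}

def similarity_search(query_vec, store):
    return max(store, key=lambda doc: cosine(query_vec, store[doc]))

print(similarity_search([0.8, 0.2, 0.1], store))  # LangChain builds LLM apps
```

Production stores such as Chroma, Pinecone, or Qdrant add persistence, indexing structures for fast approximate search, and metadata filtering on top of this core idea.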
Embedding Models
Embedding transforms text into vectors that capture semantic meaning; LangChain offers multiple embedding providers.
Architecture Overview (v0.3)
LangChain‑Core : defines abstract interfaces for chat models, vector stores, tools, etc., with minimal external dependencies.
LangChain : the public entry point that bundles most functionality.
Integration Packages : version‑controlled adapters such as langchain‑openai, langchain‑anthropic, etc.
LangChain‑Community : community‑maintained extensions (e.g., langchain‑ollama, langchain‑duckduckgo, langchain‑google, langchain‑bing).
LangGraph : adds advanced composability for building custom agents and complex pipelines.
LangServe : exposes a Chain as a RESTful service.
LangSmith : a developer platform for debugging, testing, evaluating, and monitoring LLM applications.
Getting Started
Installation and Project Initialization
# Create project directory
mkdir ai-learn04-langchain && cd ai-learn04-langchain
# Set up virtual environment
python -m venv venv
# Activate environment
. venv/bin/activate
Save the following dependencies in requirements.txt:
langchain==0.3.19
langchain-community==0.3.17
langchain-ollama==0.2.3
Install them with:
pip install -r requirements.txt
First LLM Call
Create demo1.py with the code below. It demonstrates how to connect LangChain to an Ollama server running the deepseek‑r1:32b model, build a simple PromptTemplate, chain it with the LLM and an output parser, and invoke the chain.
# Implement LangChain call to Ollama LLM
from langchain_ollama.llms import OllamaLLM
from langchain_core.prompts import PromptTemplate
from langchain_core.output_parsers import StrOutputParser
llm = OllamaLLM(base_url="http://127.0.0.1:11434", model="deepseek-r1:32b")
template = "You are a world-class AI expert, {input}"
prompt = PromptTemplate(input_variables=["input"], template=template)
# Build a simple chain: Prompt -> LLM -> Parser
chain = prompt | llm | StrOutputParser()
response = chain.invoke({"input": "Write a short article about AI, no more than 100 words"})
print(response)
The script prints a concise AI-generated paragraph.
Summary
Import LangChain’s LLM wrapper.
Create a PromptTemplate to structure the user query.
Compose a chain using the pipe operator: PromptTemplate | LLM | StrOutputParser.
Invoke the chain and display the model’s response.
If the configured model cannot be found, the user should verify the model name, ensure the Ollama server is running, and check the base_url configuration.
References
LangChain official documentation: https://python.langchain.com/docs/introduction/
LangChain concept guide: https://python.langchain.com/docs/concepts/#concepts
Embedding models list: https://python.langchain.com/docs/integrations/text_embedding/
Integration package index: https://python.langchain.com/docs/integrations/providers/
Community tutorials and videos (e.g., Bilibili 2025 LangChain full‑stack tutorial).
