How to Integrate Large Models with LangChain: A Step‑by‑Step Tutorial

This tutorial explains LangChain's core modules and three‑layer architecture, shows how to set up a Python environment, and provides concrete code examples for connecting SiliconFlow Qwen3‑8B and DeepSeek models via the init_chat_model API, including result inspection and references to official documentation.

Core Modules and Architecture

LangChain defines six core modules: Model I/O (standardizes model inputs and outputs), Retrieval (loads, splits, and embeds external data), Chains (links modules together to build applications), Memory (stores conversation history and other state), Agents (LLM‑driven agents that decide which actions to take), and Callbacks (hooks for logging, monitoring, and streaming). These modules are organized into a three‑layer architecture.
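As a minimal sketch of how two of these modules compose (Model I/O supplies the prompt and model, Chains links them via the pipe operator; the model name below is a placeholder and assumes an OpenAI key is configured):

from langchain_core.prompts import ChatPromptTemplate
from langchain.chat_models import init_chat_model

# Model I/O: the prompt template standardizes input, the chat model produces output
prompt = ChatPromptTemplate.from_template("Explain {topic} in one sentence.")
model = init_chat_model(model="gpt-4o-mini", model_provider="openai")

# Chains: the | operator links the modules into a runnable pipeline
chain = prompt | model
print(chain.invoke({"topic": "LangChain"}).content)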

Environment Setup

Create and activate a Conda environment named langchainenv and install LangChain:

conda create -n langchainenv python=3.12
conda activate langchainenv
pip install langchain
pip show langchain
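As an optional sanity check, confirm the installed version from Python:

python -c "import langchain; print(langchain.__version__)"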

Large‑Model API Design in LangChain

LangChain supports major LLM providers (OpenAI, Qwen, Gemini, DeepSeek) and locally hosted models (Ollama, vLLM). The integration pattern is to install the provider‑specific dependency (if required) and call the unified function init_chat_model to obtain a model object, which abstracts away provider differences.
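For illustration, init_chat_model also accepts a combined "provider:model" string, so switching providers is a one‑line change (the model names below are placeholders):

from langchain.chat_models import init_chat_model

# Two equivalent ways to select provider and model
model = init_chat_model("openai:gpt-4o-mini")
model = init_chat_model("gpt-4o-mini", model_provider="openai")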

Connecting SiliconFlow Qwen3‑8B

Register on the SiliconFlow website and obtain an API key.

Because SiliconFlow exposes an OpenAI‑compatible endpoint, no SiliconFlow‑specific package is required; the model is initialized through the OpenAI provider.

from langchain.chat_models import init_chat_model

model = init_chat_model(
    model="Qwen/Qwen3-8B",
    model_provider="openai",
    base_url="https://api.siliconflow.cn/v1/",
    api_key="YOUR_API_KEY"
)

question = "Hello, who are you?"
result = model.invoke(question)
print(result)

The call returns an AIMessage object. LangChain defines three message types: SystemMessage, HumanMessage, and AIMessage, representing system prompts, user inputs, and model outputs respectively.
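For example, a system prompt and a user input can be sent together as a message list (a minimal sketch reusing the model object created above); the model again returns an AIMessage:

from langchain_core.messages import SystemMessage, HumanMessage

messages = [
    SystemMessage(content="You are a concise assistant."),
    HumanMessage(content="Hello, who are you?"),
]
result = model.invoke(messages)  # returns an AIMessage
print(result.content)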

Connecting DeepSeek

Install the provider package:

pip install langchain-deepseek

Then initialize the chat model:

from langchain.chat_models import init_chat_model

model = init_chat_model(
    model='deepseek-chat',
    model_provider='deepseek',
    api_key="YOUR_DEEPSEEK_API_KEY"
)

question = "Hello, please introduce yourself"
result = model.invoke(question)
print(result)

Switch to the reasoning model by changing the model argument to 'deepseek-reasoner':

model = init_chat_model(
    model='deepseek-reasoner',
    model_provider='deepseek',
    api_key="YOUR_DEEPSEEK_API_KEY"
)

With the reasoning model, the returned AIMessage additionally carries the chain‑of‑thought in additional_kwargs and token‑usage and other statistics in response_metadata.
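A sketch of inspecting those fields (key names such as reasoning_content and token_usage follow langchain-deepseek's conventions and may vary by version):

result = model.invoke("Hello, please introduce yourself")

print(result.content)                                     # final answer
print(result.additional_kwargs.get("reasoning_content"))  # chain-of-thought, if present
print(result.response_metadata.get("token_usage"))        # token usage statistics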

Other Model Integrations

For any supported model the pattern is the same: obtain the model's API key, install the corresponding third‑party package (e.g., the one for Tongyi Qwen), then initialize with init_chat_model and call invoke. The full list of supported models and their packages is documented at https://python.langchain.com/docs/integrations/chat/.
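For example, a locally hosted Ollama model follows the same pattern (a sketch assuming an Ollama server is running locally with the model already pulled; the model name is a placeholder):

# pip install langchain-ollama
from langchain.chat_models import init_chat_model

model = init_chat_model(
    model="llama3",
    model_provider="ollama"
)
print(model.invoke("Hello, who are you?").content)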

Tags: Python, LangChain, large language models, DeepSeek, Model Integration, SiliconFlow
Written by Fun with Large Models

A master's graduate of Beijing Institute of Technology with four top‑journal papers, formerly a developer at ByteDance and Alibaba, and currently researching large models at a major state‑owned enterprise. Committed to sharing concise, practical experience in AI large‑model development, in the belief that large models will become as essential as the PC. Let's start experimenting now!
