How to Integrate LangChain with MCP: A Step‑by‑Step Implementation Guide

This tutorial walks through the complete process of connecting LangChain to a Model Context Protocol (MCP) server: environment setup, configuration files, client code, and a Playwright‑based browser‑automation example that demonstrates building and running an AI agent with LangChain.

Model Context Protocol (MCP) Overview

MCP (Model Context Protocol) was announced by Anthropic in November 2024. It standardises the way large‑language‑model agents invoke external tools by wrapping the existing Function Calling mechanism in a server‑client architecture and providing a unified development toolkit. The protocol allows developers to implement an MCP server that exposes tool functions, which can then be consumed by any compatible agent without writing bespoke integration code for each tool.

Playwright MCP Server Configuration

Create a servers_config.json file in the project directory with the following content (the entry is taken from the MCP catalogue for the Playwright server):

{
  "mcpServers": {
    "playwright": {
      "command": "npx",
      "args": ["@playwright/mcp@latest"],
      "transport": "stdio"
    }
  }
}

The command field uses npx to download and launch the Playwright MCP server locally; transport set to stdio lets the LangChain client communicate with the server over standard input/output.
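MultiServerMCPClient can multiplex several servers from the same file. As an illustrative, hypothetical extension (the second entry and its endpoint below are placeholders, not part of the original setup), a server reached over SSE is configured with a url instead of a command:

```json
{
  "mcpServers": {
    "playwright": {
      "command": "npx",
      "args": ["@playwright/mcp@latest"],
      "transport": "stdio"
    },
    "my_sse_server": {
      "url": "http://localhost:8000/sse",
      "transport": "sse"
    }
  }
}
```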

Environment Setup

Python Dependencies

In an Anaconda virtual environment (e.g., langchainenv), install the MCP adapter package:

pip install langchain-mcp-adapters

Playwright also requires a local Node.js installation, since the MCP server is fetched and run with npx.

Configuration Class

Define a helper class that loads the LLM API key from a .env file, stores the model identifier, and reads the server configuration JSON:

import json
import os
from dotenv import load_dotenv

class Configuration:
    def __init__(self) -> None:
        load_dotenv()  # read variables from the .env file
        # Expects a line like DEEPSEEK_API_KEY=<your DeepSeek API key> in .env
        self.api_key = os.getenv("DEEPSEEK_API_KEY")
        self.model = "deepseek-chat"

    @staticmethod
    def load_servers(file_path: str = "servers_config.json") -> dict:
        with open(file_path, "r", encoding="utf-8") as f:
            return json.load(f).get("mcpServers", {})
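Before wiring the config into the client, the loader logic can be sanity-checked in isolation. The snippet below inlines the same logic as Configuration.load_servers against a throwaway copy of servers_config.json (the temporary path and inline dict are purely illustrative):

```python
import json
import tempfile
from pathlib import Path

def load_servers(file_path: str) -> dict:
    # Same logic as Configuration.load_servers above.
    with open(file_path, "r", encoding="utf-8") as f:
        return json.load(f).get("mcpServers", {})

# A throwaway config mirroring servers_config.json.
cfg = {
    "mcpServers": {
        "playwright": {
            "command": "npx",
            "args": ["@playwright/mcp@latest"],
            "transport": "stdio",
        }
    }
}

with tempfile.TemporaryDirectory() as tmp:
    path = Path(tmp) / "servers_config.json"
    path.write_text(json.dumps(cfg), encoding="utf-8")
    servers = load_servers(str(path))

print(sorted(servers))                      # ['playwright']
print(servers["playwright"]["transport"])   # stdio
```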

Building the MCP Client in LangChain

Import the required LangChain modules and the MCP client, then create an asynchronous chat loop that connects to the MCP server, loads the tools, initialises the LLM, builds the agent, and runs a CLI chat interface.

import asyncio
import logging

from langchain import hub
from langchain.agents import AgentExecutor, create_openai_tools_agent
from langchain.chat_models import init_chat_model
from langchain_mcp_adapters.client import MultiServerMCPClient

async def run_chat_loop():
    cfg = Configuration()
    servers_cfg = Configuration.load_servers()

    # 1️⃣ Connect to one or more MCP servers
    mcp_client = MultiServerMCPClient(servers_cfg)
    tools = await mcp_client.get_tools()  # List[Tool]
    logging.info(f"✅ Loaded {len(tools)} MCP tools: {[t.name for t in tools]}")

    # 2️⃣ Initialise the LLM (DeepSeek or any OpenAI‑compatible model)
    llm = init_chat_model(model=cfg.model, model_provider="deepseek", api_key=cfg.api_key)

    # 3️⃣ Build a generic LangChain agent
    prompt = hub.pull("hwchase17/openai-tools-agent")
    agent = create_openai_tools_agent(llm, tools, prompt)
    agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)

    # 4️⃣ Simple CLI chat
    print("\n🤖 MCP Agent started – type 'quit' to exit")
    while True:
        user_input = input("\nYou: ").strip()
        if user_input.lower() == "quit":
            break
        try:
            result = await agent_executor.ainvoke({"input": user_input})
            print(f"\nAI: {result['output']}")
        except Exception as exc:
            print(f"\n⚠️ Error: {exc}")

if __name__ == "__main__":
    logging.basicConfig(level=logging.INFO, format="%(asctime)s - %(levelname)s - %(message)s")
    asyncio.run(run_chat_loop())

Example: Crawling a Microsoft Copilot Blog Page

Using the same CLI, provide the following prompt (or any similar instruction) to the agent:

Visit https://www.microsoft.com/en-us/microsoft-365/blog/2025/01/16/copilot-is-now-included-in-microsoft-365-personal-and-family/?culture=zh-cn&country=cn and summarise the page content.

The agent launches a headless browser via the Playwright MCP server, navigates to the URL, extracts the visible text, and returns a concise summary. The screenshots in the original article illustrate the browser window opening, page loading, and the final textual summary returned by the agent.

Technical Details of the Conversion Process

The call mcp_client.get_tools() internally invokes load_mcp_tools(), which transforms each function exposed by the MCP server into a standard LangChain Tool object. This conversion supplies the three essential components for a LangChain agent:

Model – instantiated via init_chat_model

Prompt – obtained from the Hub (e.g., hwchase17/openai-tools-agent)

Tool functions – the MCP‑derived Tool objects

With these elements, create_openai_tools_agent and AgentExecutor can orchestrate tool selection and execution automatically.
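To make the conversion concrete, here is a dependency-free sketch of what load_mcp_tools does conceptually. The Tool dataclass, call_mcp_tool stub, and the browser_navigate tool name below are simplified stand-ins for LangChain's tool class and the real MCP session round trip, not the library's actual API:

```python
import asyncio
from dataclasses import dataclass
from typing import Any, Awaitable, Callable

@dataclass
class Tool:
    # Stand-in for LangChain's tool object: a name, a description
    # for the LLM, and an async callable that does the work.
    name: str
    description: str
    coroutine: Callable[..., Awaitable[Any]]

async def call_mcp_tool(tool_name: str, **kwargs: Any) -> Any:
    # Placeholder for the real round trip: session.call_tool(...)
    # sent to the MCP server over the configured transport.
    return f"{tool_name} called with {kwargs}"

def convert(mcp_tool_name: str, description: str) -> Tool:
    # Each function the MCP server exposes becomes one Tool whose
    # coroutine forwards its arguments to the server.
    async def _run(**kwargs: Any) -> Any:
        return await call_mcp_tool(mcp_tool_name, **kwargs)
    return Tool(name=mcp_tool_name, description=description, coroutine=_run)

tool = convert("browser_navigate", "Navigate the browser to a URL")
result = asyncio.run(tool.coroutine(url="https://example.com"))
print(result)  # browser_navigate called with {'url': 'https://example.com'}
```

The agent never sees the transport details: it only picks a Tool by name and description, and the coroutine hides the MCP round trip.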

Summary of the Workflow

The end‑to‑end process consists of:

Installing langchain-mcp-adapters and ensuring Node.js for Playwright.

Creating servers_config.json that points to the Playwright MCP server.

Loading the configuration and instantiating MultiServerMCPClient.

Calling get_tools() to obtain LangChain‑compatible tools.

Initialising the LLM (DeepSeek in the example).

Building the agent with a generic prompt.

Running a CLI loop that forwards user queries to the agent and displays the model’s output.

This demonstrates how LangChain can seamlessly integrate with an MCP server to perform browser automation tasks without writing custom tool wrappers.

Tags: Python · MCP · LangChain · AI Agent · Playwright
Written by

Fun with Large Models

Master's graduate from Beijing Institute of Technology, published four top‑journal papers, previously worked as a developer at ByteDance and Alibaba. Currently researching large models at a major state‑owned enterprise. Committed to sharing concise, practical AI large‑model development experience, believing that AI large models will become as essential as PCs in the future. Let's start experimenting now!
