
Step‑by‑Step MCP Demo: Build Server and Claude/DeepSeek Clients

This guide walks developers through creating a complete MCP application, covering the workflow, server setup with Python, debugging tools, and client implementation using both Claude and DeepSeek models, complete with code snippets, environment configuration, and testing procedures to demonstrate end‑to‑end LLM tool integration.


The previous article "MCP Basic Concepts and Core Principles" introduced MCP fundamentals; this tutorial shows a hands‑on demo that walks through the entire MCP workflow, server development, debugging, and client implementation using Claude and DeepSeek models.

MCP Workflow

The core execution flow:

1. The client obtains the tool list from the server.
2. The client sends the user query, along with the tool descriptions, to the LLM service (Claude or DeepSeek).
3. The model decides which tools to invoke.
4. The client executes the tool calls via the server.
5. The tool results are returned to the model.
6. The model delivers a natural-language response to the user.
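The steps above can be made concrete with a self-contained simulation. Everything here is a stand-in (the function names and the fake weather data are hypothetical, not the MCP SDK); the point is only to show the round trip from query to tool call to final answer:

```python
def list_tools():
    """Step 1: the client asks the server for its tool list."""
    return [{"name": "get_weather", "description": "Get weather for a city"}]

def call_llm(query, tools):
    """Steps 2-3: the model sees the query plus tool descriptions and may
    request a tool call instead of answering directly (stubbed here)."""
    if "weather" in query.lower():
        return {"tool_call": {"name": "get_weather", "args": {"city": "Beijing"}}}
    return {"answer": query}

def execute_tool(name, args):
    """Step 4: the client executes the call via the server (stubbed)."""
    return f"Sunny, 25C in {args['city']}"

def run_query(query):
    tools = list_tools()
    reply = call_llm(query, tools)
    if "tool_call" in reply:
        call = reply["tool_call"]
        result = execute_tool(call["name"], call["args"])
        # Steps 5-6: the result goes back to the model, which phrases the
        # final natural-language response.
        return f"The weather is: {result}"
    return reply["answer"]

print(run_query("What's the weather in Beijing?"))
```

In the real system, `call_llm` is an Anthropic or DeepSeek API call and `execute_tool` goes through the MCP session; the control flow is the same.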

MCP Server Development

System Requirements

Python >= 3.10

Python MCP SDK >= 1.2.0

Environment Preparation

Install uv and set up a Python project:

Windows (PowerShell):

<code>powershell -ExecutionPolicy ByPass -c "irm https://astral.sh/uv/install.ps1 | iex"</code>

macOS/Linux:

<code>curl -LsSf https://astral.sh/uv/install.sh | sh</code>

Create the project (Windows)

<code># Create a new directory for the project
uv init weather
cd weather
# Create and activate a virtual environment
uv venv
.venv\Scripts\activate
# Install dependencies
uv add mcp[cli] httpx</code>

Create the project (macOS/Linux)

<code># Create a new directory for the project
uv init weather
cd weather
# Create and activate a virtual environment
uv venv
source .venv/bin/activate
# Install dependencies
uv add "mcp[cli]" httpx</code>

The full server code is available on GitHub; the tool execution handler contains the core logic.
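The linked server wraps the public api.weather.gov API; its core pattern is fetching JSON and formatting it into readable text for the model. That formatting step can be sketched as a pure helper (the function name `format_alert` and the field names assume the NWS `properties` payload; this is an illustrative sketch, not the full tool handler):

```python
def format_alert(feature: dict) -> str:
    """Format a single NWS alert feature into readable text.
    The field names follow the public api.weather.gov alert schema."""
    props = feature.get("properties", {})
    return "\n".join([
        f"Event: {props.get('event', 'Unknown')}",
        f"Area: {props.get('areaDesc', 'Unknown')}",
        f"Severity: {props.get('severity', 'Unknown')}",
        f"Description: {props.get('description', 'No description available')}",
    ])

# Inside the MCP server, a tool handler decorated with @mcp.tool() would fetch
# the alert JSON over httpx and pass each feature through this helper.
example = {"properties": {"event": "Flood Warning", "areaDesc": "King County",
                          "severity": "Severe", "description": "Heavy rain expected"}}
print(format_alert(example))
```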

Run the server:

<code>uv run weather.py</code>

Debugging the Server

You can debug using either MCP Inspector or Claude Desktop.

MCP Inspector (run in the venv):

<code>mcp dev weather.py</code>

Claude Desktop configuration (add to claude_desktop_config.json):

<code>{
  "mcpServers": {
    "weather": {
      "command": "uv",
      "args": ["--directory", "C:\\ABSOLUTE\\PATH\\TO\\PARENT\\FOLDER\\weather", "run", "weather.py"]
    }
  }
}</code>

After saving, restart Claude Desktop. The tool list will appear in the UI; if the server is not detected, check the log directory C:\Users\xxx\AppData\Roaming\Claude\logs\mcp-*.log for debugging information.

MCP Client Development

The client connects to the MCP server and drives a chat interaction using either the official Claude model or the DeepSeek model.

Claude Service

Create the client project

<code># Create project directory
uv init mcp-client
cd mcp-client
# Create virtual environment
uv venv
# Activate (Windows)
.venv\Scripts\activate
# Activate (Unix/MacOS)
source .venv/bin/activate
# Install required packages
uv add mcp anthropic python-dotenv</code>

The official client code is on GitHub.

Run the client (specify the weather server path):

<code>uv run client.py path/to/weather.py</code>

Client workflow:

Connect to the specified server

List available tools

Start an interactive chat session where the user can input queries, view tool execution, and receive the final Claude response
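The interactive part of that workflow can be sketched without the SDK. Here the session is a stub (the `StubSession` class and its canned reply are hypothetical); the loop mirrors the real client's shape — read a query, await `process_query`, print the result, exit on "quit":

```python
import asyncio

class StubSession:
    """Stand-in for the real client's MCP session; echoes a canned answer."""
    async def process_query(self, query: str) -> str:
        return f"[answer to: {query}]"

async def chat_loop(session, inputs):
    """Drive the interactive loop over a scripted list of user inputs.
    The real client reads from stdin instead of a list."""
    replies = []
    for query in inputs:
        if query.strip().lower() == "quit":
            break
        replies.append(await session.process_query(query))
    return replies

replies = asyncio.run(chat_loop(StubSession(), ["What's the weather in SF?", "quit", "ignored"]))
print(replies)
```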

DeepSeek Service

Core async function handling queries and tool calls:

<code># requires: import json
async def process_query(self, query: str) -> str:
    """Use the DeepSeek model to process the query and invoke available MCP tools (Function Calling)."""
    messages = [{"role": "user", "content": query}]
    response = await self.session.list_tools()
    # OpenAI-compatible APIs (which DeepSeek follows) expect each tool's JSON
    # schema under "parameters", not "input_schema".
    available_tools = [{
        "type": "function",
        "function": {
            "name": tool.name,
            "description": tool.description,
            "parameters": tool.inputSchema
        }
    } for tool in response.tools]
    response = self.client.chat.completions.create(
        model=self.model,
        messages=messages,
        tools=available_tools
    )
    choice = response.choices[0]
    if choice.finish_reason == "tool_calls":
        # Execute the first requested tool call via the MCP session.
        tool_call = choice.message.tool_calls[0]
        tool_name = tool_call.function.name
        tool_args = json.loads(tool_call.function.arguments)
        result = await self.session.call_tool(tool_name, tool_args)
        print(f"\n[Calling tool {tool_name} with args {tool_args}]\n")
        # Feed the assistant's tool-call message and the tool result back
        # to the model for the final natural-language answer.
        messages.append(choice.message.model_dump())
        messages.append({"role": "tool", "content": result.content[0].text, "tool_call_id": tool_call.id})
        response = self.client.chat.completions.create(model=self.model, messages=messages)
        return response.choices[0].message.content
    return choice.message.content</code>

Run and verify the DeepSeek client similarly to the Claude client.
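The subtle part of this flow is the shape of the second request's message list: the assistant's tool-call message, followed by a role "tool" message that carries the result and echoes the matching tool_call_id. That structure (the OpenAI-compatible chat format DeepSeek uses) can be built and inspected without any network call; the helper name and example values below are illustrative:

```python
import json

def build_followup_messages(query, assistant_message, tool_result_text):
    """Assemble the second-round message list after a tool call.
    assistant_message is the model's message dict containing 'tool_calls';
    the role 'tool' message must echo the same tool_call_id."""
    tool_call = assistant_message["tool_calls"][0]
    return [
        {"role": "user", "content": query},
        assistant_message,
        {"role": "tool", "content": tool_result_text, "tool_call_id": tool_call["id"]},
    ]

assistant = {
    "role": "assistant",
    "content": None,
    "tool_calls": [{"id": "call_1", "type": "function",
                    "function": {"name": "get_forecast",
                                 "arguments": json.dumps({"city": "Beijing"})}}],
}
msgs = build_followup_messages("Weather in Beijing?", assistant, "Sunny, 25C")
print([m["role"] for m in msgs])
```

If the tool message's `tool_call_id` does not match the id in the assistant message, OpenAI-compatible endpoints reject the request, which is why `process_query` threads `tool_call.id` through explicitly.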

Conclusion

This tutorial provides a complete, from‑scratch path for building MCP applications, demonstrates compatibility with multiple LLM services (Claude and DeepSeek), and highlights MCP’s extensibility. Future work can explore more complex tool chains, performance optimization, or integration with other models such as GPT or local LLMs to broaden intelligent application scenarios.

Tags: Python, LLM, MCP, Tool Integration, DeepSeek, Server Development, Claude
Written by

Data Thinking Notes

Sharing insights on data architecture, governance, and middle platforms, exploring AI in data, and linking data with business scenarios.
