Build an MCP Client & Server: Step‑by‑Step Guide to Anthropic’s Model Context Protocol

This tutorial explains how to use Anthropic's open‑source Model Context Protocol (MCP) to connect large language models with external data sources by building both MCP servers and clients in Node.js/TypeScript, covering architecture, transport options, SDK usage, and practical code examples.


Background

Large language models need a reliable way to interact with external systems. Earlier approaches to this problem include plugins (e.g., ChatGPT Plugins), function calling, and agent-tool frameworks such as LangChain; MCP standardizes the interaction behind an open protocol.

Architecture

MCP defines three roles: MCP Hosts (e.g., Claude Desktop, Cursor), applications that access data through MCP; MCP Clients, which each maintain a 1:1 connection with a single server; and MCP Servers, lightweight programs that expose resources, tools, and prompts through the standardized protocol.
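Concretely, each client-server connection begins with an initialize handshake before any tools or resources are used. Below is a minimal sketch of the client's opening request, assuming the 2024-11-05 protocol revision; the field shapes follow the MCP specification and clientInfo is illustrative:

```typescript
// Sketch: the opening `initialize` request a client sends on connect.
// Shape per the MCP specification; the version string names the
// 2024-11-05 protocol revision.
const initializeRequest = {
  jsonrpc: "2.0" as const,
  id: 1,
  method: "initialize",
  params: {
    protocolVersion: "2024-11-05",
    // What the client itself supports (e.g., filesystem roots, sampling)
    capabilities: { roots: {}, sampling: {} },
    clientInfo: { name: "mcp-client", version: "1.0.0" }
  }
};

// The server replies with its own capabilities; the client then sends a
// `notifications/initialized` notification and normal traffic begins.
console.log(initializeRequest.method); // "initialize"
```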

Two transport layers are commonly used:

Stdio: the client launches the server as a child process and exchanges messages over stdin/stdout, which suits local servers.

HTTP SSE: the server runs as a web service and streams messages to the client via Server-Sent Events, which suits remote servers.
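For the stdio transport, messages are JSON-RPC 2.0 objects serialized one per line over the child process's stdin/stdout. A minimal framing sketch (the `ping` method exists in MCP; the `frame` helper is illustrative, not part of the SDK):

```typescript
// Sketch: stdio framing. Each JSON-RPC message is serialized onto a single
// line and terminated with a newline; the reader splits stdin on newlines
// and parses each line back into a message object.
const frame = (msg: object): string => JSON.stringify(msg) + "\n";

const line = frame({ jsonrpc: "2.0", id: 1, method: "ping" });
const parsed = JSON.parse(line.trim());
console.log(parsed.method); // "ping"
```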

Implementing an MCP Server

A minimal server can be created with the SDK:

import { McpServer, ResourceTemplate } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

// Create an MCP server
const server = new McpServer({ name: "Demo", version: "1.0.0" });

// Add a simple addition tool
server.tool("add", "Add two numbers", { a: z.number(), b: z.number() }, async ({ a, b }) => ({
  content: [{ type: "text", text: String(a + b) }]
}));

async function main() {
  const transport = new StdioServerTransport();
  await server.connect(transport);
}

main().catch((err) => {
  console.error("Server failed to start:", err);
  process.exit(1);
});

The SDK also provides a scaffolding command:

npx @modelcontextprotocol/create-server my-server
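Once built, a stdio server can be registered with an MCP host. For Claude Desktop, this is done in its claude_desktop_config.json file; a sketch, with a placeholder path:

```json
{
  "mcpServers": {
    "demo": {
      "command": "node",
      "args": ["/absolute/path/to/build/demo-stdio.js"]
    }
  }
}
```

The host then spawns the server process itself and speaks the stdio transport to it.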

MCP servers can expose three main capability types:

Resources: file-like data that clients can read.

Tools: functions the LLM can invoke (with user approval).

Prompts: pre-written templates that guide the model.
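On the wire, each capability type maps to its own JSON-RPC method family. A sketch of the requests a client would send for each (method names per the MCP specification; the URI, tool name, and prompt name below are illustrative):

```typescript
// Sketch: one request per capability type, built with a small helper.
type JsonRpcRequest = { jsonrpc: "2.0"; id: number; method: string; params?: any };

const request = (id: number, method: string, params?: any): JsonRpcRequest =>
  ({ jsonrpc: "2.0", id, method, ...(params !== undefined ? { params } : {}) });

// Resources: read file-like data by URI
const readResource = request(1, "resources/read", { uri: "file:///demo/notes.txt" });

// Tools: invoke a function (hosts are expected to get user approval first)
const callTool = request(2, "tools/call", { name: "add", arguments: { a: 1, b: 2 } });

// Prompts: fetch a pre-written template, filling in its arguments
const getPrompt = request(3, "prompts/get", { name: "review-code", arguments: { code: "x = 1" } });

console.log(callTool.method); // "tools/call"
```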

Implementing an MCP Client

The client reads a configuration file that lists available MCP servers and their transport type (command or SSE):

const config = [
  { name: 'demo-stdio', type: 'command', command: 'node ~/code-open/cursor-toolkits/mcp/build/demo-stdio.js', isOpen: true },
  { name: 'weather-stdio', type: 'command', command: 'node ~/code-open/cursor-toolkits/mcp/build/weather-stdio.js', isOpen: true },
  { name: 'demo-sse', type: 'sse', url: 'http://localhost:3001/sse', isOpen: false }
];
export default config;
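Because the config file drives transport selection, a small validation pass can catch malformed entries before connecting. The helper below is an illustrative addition, not part of the SDK:

```typescript
// Sketch: validate that each config entry names the fields its transport needs.
type ServerConfig =
  | { name: string; type: "command"; command: string; isOpen: boolean }
  | { name: string; type: "sse"; url: string; isOpen: boolean };

function validateConfig(entries: any[]): string[] {
  const errors: string[] = [];
  for (const e of entries) {
    if (!e.name) errors.push("entry missing name");
    else if (e.type === "command" && !e.command) errors.push(`${e.name}: missing command`);
    else if (e.type === "sse" && !e.url) errors.push(`${e.name}: missing url`);
    else if (e.type !== "command" && e.type !== "sse") errors.push(`${e.name}: unknown type '${e.type}'`);
  }
  return errors;
}

console.log(validateConfig([{ name: "demo-sse", type: "sse", isOpen: false }]));
// ["demo-sse: missing url"]
```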

Using the SDK, a full‑featured client can be built to discover tools from all connected servers, send user queries to an LLM (e.g., OpenAI gpt‑4‑turbo‑preview), invoke the appropriate tool, and feed the result back to the model:

import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport, StdioServerParameters } from "@modelcontextprotocol/sdk/client/stdio.js";
import { SSEClientTransport } from "@modelcontextprotocol/sdk/client/sse.js";
import OpenAI from "openai";
import { Tool } from "@modelcontextprotocol/sdk/types.js";
import { createInterface } from "readline";
import { homedir } from "os";
import config from "./mcp-server-config.js";

class MCPClient {
  static getOpenServers() { return config.filter(c => c.isOpen).map(c => c.name); }
  constructor() { this.openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY }); this.sessions = new Map(); this.transports = new Map(); }
  async connectToServer(name) {
    const cfg = config.find(c => c.name === name);
    if (!cfg) throw new Error(`Server configuration not found: ${name}`);
    let transport;
    if (cfg.type === 'command') transport = await this.createCommandTransport(cfg.command);
    else if (cfg.type === 'sse') transport = await this.createSSETransport(cfg.url);
    else throw new Error(`Invalid server configuration: ${name}`);
    const client = new Client({ name: "mcp-client", version: "1.0.0" }, { capabilities: { prompts: {}, resources: {}, tools: {} } });
    await client.connect(transport);
    this.sessions.set(name, client);
    this.transports.set(name, transport);
    const resp = await client.listTools();
    console.log(`\nConnected to server '${name}' with tools:`, resp.tools.map(t => t.name));
  }
  async createCommandTransport(shell) {
    const [command, ...args] = shell.split(' ');
    const resolvedArgs = args.map(a => a.startsWith('~/') ? a.replace('~', homedir()) : a);
    const params = { command, args: resolvedArgs, env: process.env };
    return new StdioClientTransport(params);
  }
  async createSSETransport(url) { return new SSEClientTransport(new URL(url)); }
  async processQuery(query) {
    if (!this.sessions.size) throw new Error("Not connected to any server");
    const messages = [{ role: "user", content: query }];
    const availableTools = [];
    for (const [srv, sess] of this.sessions) {
      const { tools } = await sess.listTools();
      availableTools.push(...tools.map(t => ({ type: "function", function: { name: `${srv}__${t.name}`, description: `[${srv}] ${t.description}`, parameters: t.inputSchema } })));
    }
    const completion = await this.openai.chat.completions.create({ model: "gpt-4-turbo-preview", messages, tools: availableTools, tool_choice: "auto" });
    const final = [];
    for (const choice of completion.choices) {
      const msg = choice.message;
      if (msg.content) final.push(msg.content);
      if (msg.tool_calls) {
        for (const call of msg.tool_calls) {
          const [srv, toolName] = call.function.name.split('__');
          const sess = this.sessions.get(srv);
          if (!sess) { final.push(`[Error: Server ${srv} not found]`); continue; }
          const args = JSON.parse(call.function.arguments);
          const result = await sess.callTool({ name: toolName, arguments: args });
          // Tool results carry an array of content parts; serialize it so it
          // can be shown to the user and passed back to the model as a string.
          const resultText = JSON.stringify(result.content);
          final.push(`[Calling tool ${toolName} on server ${srv} with args ${JSON.stringify(args)}]`);
          final.push(resultText);
          messages.push({ role: "assistant", content: "", tool_calls: [call] });
          messages.push({ role: "tool", tool_call_id: call.id, content: resultText });
          const next = await this.openai.chat.completions.create({ model: "gpt-4-turbo-preview", messages, tools: availableTools, tool_choice: "auto" });
          if (next.choices[0].message.content) final.push(next.choices[0].message.content);
        }
      }
    }
    return final.join("\n");
  }
  async chatLoop() {
    console.log("\nMCP Client Started!\nType your queries or 'quit' to exit.");
    const rl = createInterface({ input: process.stdin, output: process.stdout });
    const ask = () => new Promise(res => rl.question("\nQuery: ", res));
    try {
      while (true) {
        const q = (await ask()).trim();
        if (q.toLowerCase() === 'quit') break;
        try { console.log("\n" + await this.processQuery(q)); }
        catch (e) { console.error("\nError:", e); }
      }
    } finally { rl.close(); }
  }
  async cleanup() { for (const t of this.transports.values()) await t.close(); this.transports.clear(); this.sessions.clear(); }
  hasActiveSessions() { return this.sessions.size > 0; }
}

(async () => {
  const open = MCPClient.getOpenServers();
  console.log("Connecting to servers:", open.join(", "));
  const client = new MCPClient();
  for (const name of open) {
    try { await client.connectToServer(name); } catch (e) { console.error(`Failed to connect to server '${name}':`, e); }
  }
  if (!client.hasActiveSessions()) throw new Error("Failed to connect to any server");
  await client.chatLoop();
  await client.cleanup();
})();

The client can be started with:

NODE_TLS_REJECT_UNAUTHORIZED=0 node build/client.js

Note that NODE_TLS_REJECT_UNAUTHORIZED=0 disables TLS certificate verification; it is only appropriate for local development.

Conclusion

The MCP ecosystem solves client‑server data exchange for LLMs but still faces challenges: limited language support and tooling maturity, inconsistent server quality, and reliance on Node.js/Python environments for local servers. As adoption grows, a richer ecosystem and standardized packaging are expected.

Tags: TypeScript, MCP, Node.js, Function Calling, LLM integration, Model Context Protocol
Written by

Alibaba Cloud Developer

Alibaba's official tech channel, featuring all of its technology innovations.
