Unlock AI Power with Model Context Protocol (MCP): Build LLM‑Enabled Servers in Minutes
This article introduces the Model Context Protocol (MCP) and Large Language Models (LLMs), explains their core concepts, transmission mechanisms, lifecycle, and essential modules, and provides step‑by‑step code examples for creating an MCP server, adding tools, resources, and prompts, and debugging workflows to accelerate AI‑driven development.
Introduction
Before diving in, we need to understand what MCP and LLM are, and whether an LLM can compute simple mathematical expressions such as 3+4.
An LLM (Large Language Model) is a language model pre‑trained on massive amounts of data; it lacks real‑time knowledge and strict logical rigour. Consequently, many supplementary tools and techniques, such as Function Calling, MCP, and AI Agents, have emerged.
The official definition of MCP (Model Context Protocol) is an open protocol that standardises how applications provide context to LLMs. Think of MCP as the USB‑C interface for AI models, offering plug‑and‑play connectivity to various data sources and tools.
Core Concepts
MCP uses a client/server (C/S) architecture, communicating between Client and Server processes. An MCP server typically runs on the local machine, though remote servers reachable over HTTP/SSE are also possible.
Technical Terms
MCP Hosts: programs such as Claude Desktop, IDEs, and Cursor that want to access data through MCP.
MCP Clients: protocol clients that maintain a one‑to‑one connection with the server.
MCP Servers: lightweight programs that expose specific capabilities through the Model Context Protocol.
Local Data Sources: files, databases, and services that an MCP server can safely access.
Remote Services: external systems reachable via APIs.
MCP Protocol: messages are exchanged using JSON‑RPC 2.0.
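To make the JSON‑RPC 2.0 framing concrete, here is a minimal sketch of the envelope a client sends when invoking a server tool. The method name follows the MCP specification; the tool name and arguments are hypothetical placeholders:

```typescript
// A JSON-RPC 2.0 request invoking an MCP tool (tool name and
// arguments are hypothetical; the method name follows the MCP spec).
const toolCallRequest = {
  jsonrpc: "2.0",
  id: 1,
  method: "tools/call",
  params: {
    name: "get_article_by_link",
    arguments: { link: "https://example.com/post" },
  },
};
// On the wire the message is serialised as a single JSON object.
const payload = JSON.stringify(toolCallRequest);
```

Every request carries an `id` so the response can be matched back to it; one‑way notifications omit the `id`.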
Transmission Mechanisms
Stdio transmission – uses standard input/output, suitable for local processes.
HTTP – standard HTTP requests.
SSE transmission – server‑sent events for server‑to‑client messaging and HTTP POST for client‑to‑server messaging.
Lifecycle
Client sends an initialize request containing protocol version and capabilities.
Server responds with its version and capabilities.
Client sends an initialized notification as confirmation.
Normal message exchange begins, supporting request‑response and one‑way notifications.
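The handshake steps above can be sketched as message objects. The shapes below follow the lifecycle just described; the version string, capabilities, and client name are illustrative assumptions:

```typescript
// Step 1: the client's initialize request (values are hypothetical).
const initializeRequest = {
  jsonrpc: "2.0",
  id: 0,
  method: "initialize",
  params: {
    protocolVersion: "2024-11-05",
    capabilities: { tools: {} },
    clientInfo: { name: "example-client", version: "1.0.0" },
  },
};

// Step 3: after the server responds with its own version and capabilities,
// the client confirms with a one-way notification (note: no "id" field).
const initializedNotification = {
  jsonrpc: "2.0",
  method: "notifications/initialized",
};
```

Only after the `initialized` notification does normal request‑response traffic begin.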
MCP Core Modules
Resources – any type of data a server offers to clients, identified by URIs of the form protocol://host/path.
Prompts – reusable prompt templates and workflows that servers define and clients can present to users and LLMs.
Tools – servers expose executable functions; LLMs can invoke them to interact with external systems or perform calculations.
Sampling – servers request LLM completions to enable complex agentic behaviour while preserving safety and privacy.
Roots – URIs (e.g. filesystem paths or HTTP URLs) suggested by the client that define the boundaries a server should operate within.
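Sampling runs in the opposite direction from tools: the server asks the client's LLM for a completion. A minimal sketch of such a request, assuming the `sampling/createMessage` method from the MCP specification (the message content and token limit are hypothetical):

```typescript
// A server-initiated sampling request asking the client's LLM for a
// completion (method name per the MCP spec; content is hypothetical).
const samplingRequest = {
  jsonrpc: "2.0",
  id: 2,
  method: "sampling/createMessage",
  params: {
    messages: [
      { role: "user", content: { type: "text", text: "Summarise this page." } },
    ],
    maxTokens: 200,
  },
};
```

Because the client mediates the request, it can review or reject it, which is how sampling preserves safety and privacy.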
Example Code
Create MCP Server
Use FastMCP to quickly create an MCP server:
import axios from "axios";
import * as cheerio from "cheerio";
import { FastMCP } from "fastmcp";
import { z } from "zod";
const mcp = new FastMCP({
name: "Example MCP Server",
version: "1.0.0",
logger: {
debug: (...args) => console.error("[DEBUG]", ...args),
info: (...args) => console.error("[INFO]", ...args),
warn: (...args) => console.error("[WARN]", ...args),
error: (...args) => console.error("[ERROR]", ...args),
log: (...args) => console.error("[LOG]", ...args)
}
});
Tools
Add a tool that fetches web content and feeds it to an LLM:
const get_article = async (link: string) => {
try {
console.error(`Fetching web content: ${link}`); // log to stderr; stdout carries the MCP protocol over stdio
const response = await axios.get(link, {
timeout: 10000,
headers: { "User-Agent": "Mozilla/5.0 ..." }
});
const $ = cheerio.load(response.data);
$("script, style, nav, header, footer, aside, .advertisement, .ads").remove();
let content = "";
const selectors = ["article", ".article-content", ".post-content", ".entry-content", ".content", "main", ".main-content", "[role=\"main\"]"];
for (const selector of selectors) {
const el = $(selector);
if (el.length > 0) {
content = el.text().trim();
if (content.length > 100) break;
}
}
if (!content || content.length < 100) content = $("body").text().trim();
content = content.replace(/\s+/g, " ").replace(/\n\s*\n/g, "\n").trim();
if (content.length > 5000) content = content.substring(0, 5000) + "...";
console.error(`Successfully fetched content, length: ${content.length} characters`);
return content;
} catch (error) {
console.error(`Failed to fetch web content: ${error instanceof Error ? error.message : String(error)}`);
return `Failed to fetch web content: ${error instanceof Error ? error.message : String(error)}`;
}
};
mcp.addTool({
name: "get_article_by_link",
description: "Fetch web page content",
parameters: z.object({ link: z.string().describe("URL of the web page to fetch") }),
execute: async ({ link }) => {
try {
const result = await get_article(link);
return { content: [{ type: "text", text: `Web content from ${link}:\n${result}` }] };
} catch (error) {
return { content: [{ type: "text", text: `Error fetching web content: ${error instanceof Error ? error.message : "Unknown error"}` }], isError: true };
}
}
});
Resource
Add a resource that provides system information to the LLM:
mcp.addResource({
uri: "system://info",
name: "System Info",
description: "Information about the current system",
mimeType: "application/json",
load: async () => {
const os = await import('os');
return {
uri: "system://info",
mimeType: "application/json",
text: JSON.stringify({
platform: os.platform(),
arch: os.arch(),
nodeVersion: process.version,
uptime: os.uptime(),
totalMemory: os.totalmem(),
freeMemory: os.freemem(),
cpus: os.cpus().length
}, null, 2)
};
}
});
Prompt
Define a prompt that asks the LLM to analyse supplied code:
mcp.addPrompt({
name: "analyze-code",
description: "Analyse code for potential improvements",
arguments: [
{ name: "language", description: "Programming language", required: true },
{ name: "code", description: "The code to analyse", required: true }
],
load: async ({ language, code }) => {
return {
messages: [{
role: "user",
content: {
type: "text",
text: `Please analyse the following ${language} code and provide improvement suggestions:
\`\`\`${language}
${code}
\`\`\`
Please analyse the following aspects:
1. Code quality and readability
2. Performance optimisation suggestions
3. Best-practice recommendations
4. Potential security issues
5. Code structure improvements`
}
}]
};
}
});
Debugging
The MCP Inspector is an interactive developer tool for testing and debugging MCP servers:
npx @modelcontextprotocol/inspector <command> <arg1> <arg2>
Running the inspector opens a local web UI at http://localhost:6274/.
Configure the server in development tools such as Claude Desktop, IDE, or Cursor using a configuration object:
const mcpConfig = new MCPConfiguration({
servers: {
deepWiki: { type: "sse", url: "https://mcp.deepwiki.com/sse", timeout: 25000 },
"mcp-test": { type: "stdio", command: "yarn", args: ["--cwd", "D:\\code\\个人项目\\mcp\\gpt-demo", "mcp"] }
}
});
Switching to the Tools tab reveals the custom tool functions for debugging.
Conclusion
This article provides a quick overview of MCP concepts and fundamentals, and demonstrates how to develop an MCP Server within five minutes. The protocol simplifies tool integration, optimises testing workflows, enhances task planning and collaboration, and enables cross‑ecosystem service composition.
Simplify tool integration: MCP defines a unified interface, allowing AI models to perceive environments and operate tools without bespoke adapters; developers can spin up a custom MCP agent in minutes, dramatically boosting productivity.
Optimise testing processes: In software testing, MCP lets AI agents automatically create, execute, and maintain test cases, adapt strategies for complex distributed systems, and auto‑repair broken locators or scripts, reducing maintenance costs and accelerating CI/CD pipelines.
Boost task planning and collaboration: Enterprise‑grade MCP servers enable AI assistants to participate in requirement breakdown, PR creation and review, cutting manual communication overhead and improving team efficiency.
Facilitate cross‑ecosystem integration: MCP can chain multiple services—e.g., map, text generation, and Notion storage—to automatically generate travel itineraries, with dynamic routing ensuring precise task execution.
Goodme Frontend Team
Regularly sharing the team's insights and expertise in the frontend field