Securing the Model Context Protocol (MCP): Volcanic Engine’s End‑to‑End Approach
This article explains how Volcanic Engine safeguards the Model Context Protocol (MCP) throughout its lifecycle, detailing MCP fundamentals, core components, a step‑by‑step interaction example, seven major security risks, official design principles, and a comprehensive security architecture covering admission control, native design, and runtime protection.
Introduction
This article outlines Volcanic Engine’s security practices for the Model Context Protocol (MCP) across its entire lifecycle. It first introduces MCP’s core concepts, technical principles, and ecosystem status, then presents a detailed interaction case, analyzes seven primary security risks, and finally proposes a security architecture covering admission control, native design, and runtime protection.
MCP Core Concepts and Technical Principles
Basic Definition
MCP is an open protocol that standardizes how applications provide context to large language models (LLMs). It can be likened to a "USB‑C" port for AI, offering a uniform interface for connecting diverse data sources and tools.
Features and Advantages
Standardization: Uses JSON‑RPC 2.0 as a common interface, defining standard input and output formats for efficient, seamless model‑tool integration.
Decentralized Design: Unlike traditional agent frameworks such as LangChain, MCP does not require a separate plugin for each tool; it supports both local and cloud deployments, giving users flexibility across scenarios.
Security: The protocol defines an OAuth‑based authorization scheme for clients accessing remote servers.
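To make the standardization point concrete, an MCP tool‑discovery call is an ordinary JSON‑RPC 2.0 exchange. The sketch below builds such a request and parses a hypothetical response; the tool name, description, and schema are illustrative, not taken from a real server:

```python
import json

def make_jsonrpc_request(method, request_id, params=None):
    """Serialize a JSON-RPC 2.0 request, the framing MCP messages use."""
    msg = {"jsonrpc": "2.0", "id": request_id, "method": method}
    if params is not None:
        msg["params"] = params
    return json.dumps(msg)

# Ask an MCP Server for its tool list.
request = make_jsonrpc_request("tools/list", request_id=1)

# A hypothetical response: each tool advertises a name, a description,
# and a JSON Schema describing its inputs.
response = json.loads("""{
  "jsonrpc": "2.0", "id": 1,
  "result": {"tools": [{"name": "list_regions",
                        "description": "List available ECS regions",
                        "inputSchema": {"type": "object", "properties": {}}}]}
}""")
tool_names = [t["name"] for t in response["result"]["tools"]]
```

Because both sides speak the same envelope, any client that can emit this request can talk to any conforming server, which is the interoperability win the "USB‑C" analogy points at.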
Core Components
Large Language Model (LLM): The central AI processing unit, which can be a single model or a platform integrating multiple models (e.g., Volcanic Ark).
MCP Server: Supplies context, tool capabilities, and prompts to the LLM. It acts as the executor for external resource interactions.
MCP Client: The communication module embedded in the host that connects to the MCP Server, sends requests, and handles responses.
MCP Host: The application or agent that receives user input, delegates tasks to the LLM, and communicates with the MCP Server via the client.
MCP Server Hub: A centralized marketplace for MCP Servers, enabling clients to discover and invoke services.
MCP Server Gateway: A unified entry point for MCP Clients; behind it may sit a single MCP Server or a cluster of servers.
Data Sources: External resources (files, databases, Web APIs) that MCP Servers can access to provide real‑time or domain‑specific data.
Interaction Sequence Example
The following case demonstrates a full MCP workflow using Volcanic Engine’s ECS service as the MCP Server.
Key Steps
Step 1: MCP Client queries the MCP Server for the list of available tools.
Step 2: The client incorporates the tool list into a prompt and sends it to the LLM.
Step 3: The LLM decides which tool to invoke based on the user’s request.
Step 4: The client calls the selected tool on the MCP Server and receives results via SSE.
Step 5: The client forwards the tool output back to the LLM for analysis and final response generation.
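The five steps above can be sketched as one client‑side turn. The `llm` and `server` objects here are hypothetical stand‑ins for an LLM API and an MCP Server connection; the method names are illustrative, not a real SDK:

```python
def run_turn(user_input, llm, server):
    """One MCP turn: discover tools, let the LLM pick one, execute, summarize.

    `llm` and `server` are stand-in interfaces for illustration only.
    """
    tools = server.list_tools()                       # Step 1: query tool list
    decision = llm.complete(user_input, tools=tools)  # Steps 2-3: prompt includes
                                                      # tools; LLM picks one
    if decision.get("tool") is None:                  # No tool needed: answer directly
        return decision["text"]
    result = server.call_tool(decision["tool"],       # Step 4: invoke the tool
                              decision.get("args", {}))
    return llm.complete(f"Tool result: {result}")["text"]  # Step 5: summarize
```

Note the trust boundary this loop crosses twice: tool descriptions flow into the prompt (Step 2) and tool output flows back into the model (Step 5), which is exactly where several of the risks below enter.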
{
  "role": "user",
  "content": [
    {
      "type": "text",
      "text": "<task>\nWhich regions are available for the Volcanic Engine ECS product?\n</task>"
    },
    {
      "type": "text",
      "text": "<environment_details>..."
    }
  ]
}
Official MCP Security Design Principles (Anthropic)
User Consent and Control: Users must explicitly agree to data access and be able to control shared data and operations.
Data Privacy: Clients must obtain user authorization before transmitting data and protect it with proper access controls.
Tool Security: Tools may execute arbitrary code; hosts must obtain explicit user consent before invoking any tool.
LLM Sampling Control: All sampling requests require user approval, and users should control prompt visibility and server‑side results.
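The consent principles above boil down to a gate on the host: no tool runs until the user has explicitly approved it. A minimal sketch, with the approval callback and cache as assumptions for illustration:

```python
class ConsentGate:
    """Host-side gate: a tool may run only after explicit user approval."""

    def __init__(self, ask_user):
        self.ask_user = ask_user   # callback: (tool_name, args) -> bool
        self.approved = set()      # tools the user has already approved

    def invoke(self, tool_name, args, execute):
        """Run `execute(tool_name, args)` only if the user consents."""
        if tool_name not in self.approved:
            if not self.ask_user(tool_name, args):
                raise PermissionError(f"user denied tool '{tool_name}'")
            self.approved.add(tool_name)
        return execute(tool_name, args)
```

A real host would also scope approvals per server and per argument pattern rather than per tool name, but the principle is the same: the decision point lives in the host, not in the model.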
Security Risk Analysis and Threat Modeling
Risk 1 – Traditional Web Service Risks
Since MCP Servers and Data Sources are essentially web services, they inherit classic web vulnerabilities such as command injection, SSRF, container escape, privilege escalation, and missing authentication.
Risk 2 – Tool Description Poisoning
Attackers may tamper with open‑source MCP code or CDN assets to alter tool descriptions, misleading the LLM into executing malicious operations.
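One mitigation for description poisoning is to pin each tool's description to a known‑good digest at review time and re‑verify it on every load, rejecting tools whose text has drifted. A sketch, assuming a locally stored digest map:

```python
import hashlib

def digest(description):
    """SHA-256 digest of a tool description, pinned at review time."""
    return hashlib.sha256(description.encode("utf-8")).hexdigest()

def verify_tools(tools, pinned_digests):
    """Keep only tools whose description still matches its pinned digest."""
    safe = []
    for tool in tools:
        expected = pinned_digests.get(tool["name"])
        if expected is not None and digest(tool["description"]) == expected:
            safe.append(tool)
    return safe
```

Unknown tools and tampered descriptions are both dropped, so a poisoned CDN asset or modified open‑source build fails closed rather than reaching the LLM's prompt.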
Risk 3 – Indirect Prompt Injection via External Data Sources
External data (web pages, documents) may contain crafted prompts that, when processed by the LLM, trigger unintended tool calls.
Risk 4 – Tool Conflict and Priority Hijacking
When multiple MCP Servers offer similar tools, an attacker can publish a malicious server with a higher‑priority description, causing the LLM to select the malicious tool.
Risk 5 – Rug Pull
Version upgrades may introduce malicious behavior in previously trusted MCP Servers due to lack of version locking.
Risk 6 – Enterprise Data Leakage
When MCP Servers handle sensitive internal data and forward results to public LLM APIs, there is a risk of data exposure to third‑party providers.
Risk 7 – Agent‑to‑Agent (A2A) Scenario Risks
Complex multi‑agent workflows increase the attack surface for prompt injection, leakage, and model jailbreaks.
Volcanic Engine MCP Security Solution
Core Challenges
Ensuring the safety of all MCP Servers listed in the Hub.
Providing strict isolation for multi‑tenant experience environments.
Offering secure, convenient deployment for private‑cloud customers.
Security Architecture Overview
The architecture covers the entire MCP lifecycle with three layers:
Admission Control: All MCP Servers undergo automated security scanning covering the seven identified risk categories before being allowed on the Hub.
Native Security Design: Two deployment modes are defined:
Experience (multi‑tenant): OAuth tokens (valid 48 hours) are exchanged for temporary STS credentials, enforcing tenant isolation, network segmentation, and prohibiting high‑risk operations.
Deployment (single‑tenant): Long‑lived API keys and IP‑based allow‑list/deny‑list controls are used for private VPC deployments.
Runtime Protection: Dedicated defenses for models (prompt‑injection firewall) and agents (Jeddak AgentArmor) detect and block dangerous inputs during execution.
Admission Control Details
Automated pipelines scan every server for the seven risk types; only servers passing the scan are published to the MCP Server Hub.
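The pipeline's shape can be pictured as one check per risk category, with publication gated on all of them passing. The check functions below are placeholders, not Volcanic Engine's actual scanners:

```python
# Placeholder checks, one per risk category; a real pipeline runs
# SAST/DAST, description diffing, registry collision checks,
# version audits, egress analysis, and multi-agent review here.
RISK_CHECKS = {
    "web_vulns": lambda server: True,
    "description_poisoning": lambda server: True,
    "indirect_prompt_injection": lambda server: True,
    "tool_conflict": lambda server: True,
    "rug_pull": lambda server: True,
    "data_leakage": lambda server: True,
    "a2a_risks": lambda server: True,
}

def admit(server, checks=RISK_CHECKS):
    """Return (admitted, failed_checks) for a candidate MCP Server."""
    failed = [name for name, check in checks.items() if not check(server)]
    return (len(failed) == 0, failed)
```

The key property is fail‑closed admission: a server reaches the Hub only when every category comes back clean, and a single failed category names the reason for rejection.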
Native Security Design Details
In the multi‑tenant scenario, OAuth tokens are limited to 48 hours and are converted to temporary credentials, ensuring strict identity and permission boundaries. Network isolation is achieved via VPC point‑to‑point links, and MCP Gateways never store tenant data.
In the single‑tenant scenario, customers can use long‑lived API keys, configure IP allow/deny lists, and seamlessly convert a local MCP Server to a remote one.
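The multi‑tenant credential flow — a 48‑hour OAuth token exchanged for short‑lived, tenant‑scoped temporary credentials — can be sketched as follows. All names and the STS lifetime are hypothetical; real issuance goes through the cloud provider's STS API:

```python
import time

TOKEN_TTL = 48 * 3600   # OAuth token lifetime: 48 hours (per the text)
STS_TTL = 15 * 60       # temporary credential lifetime (illustrative)

def exchange_for_sts(oauth_token, now=None):
    """Exchange a still-valid OAuth token for temporary tenant-scoped credentials."""
    now = time.time() if now is None else now
    if now >= oauth_token["issued_at"] + TOKEN_TTL:
        raise PermissionError("OAuth token expired")
    return {
        "tenant": oauth_token["tenant"],   # credentials stay tenant-scoped
        "expires_at": now + STS_TTL,       # far shorter than the OAuth token
    }
```

Because the gateway only ever holds short‑lived, tenant‑bound credentials, a leaked credential expires quickly and cannot cross the tenant boundary.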
Runtime Protection Details
The model firewall blocks prompt‑injection and sensitive‑information leakage. The upcoming AgentArmor system will protect agent behavior and user data during execution.
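At its simplest, a prompt‑injection firewall is a screen over external content (tool outputs, fetched pages) before it reaches the model. The fixed patterns below are a deliberately naive sketch; a production firewall like the one described here relies on model‑based detection rather than regexes:

```python
import re

# Illustrative patterns only; real firewalls use trained classifiers.
INJECTION_PATTERNS = [
    re.compile(r"ignore (all )?previous instructions", re.IGNORECASE),
    re.compile(r"you are now", re.IGNORECASE),
    re.compile(r"<\s*system\s*>", re.IGNORECASE),
]

def screen(text):
    """Return (allowed, matched_pattern) for a piece of external content."""
    for pattern in INJECTION_PATTERNS:
        if pattern.search(text):
            return False, pattern.pattern
    return True, None
```

The point of the sketch is the placement, not the patterns: the filter sits between external data and the model, so a poisoned web page is blocked before it can steer a tool call.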
Conclusion and Outlook
This article systematically examines MCP's design, identifies its critical security risks, and presents a comprehensive protection scheme that is already deployed in Volcanic Engine's large‑model ecosystem. Future work will focus on the emerging Agent‑to‑Agent collaboration model, which will further increase protocol complexity and security challenges.