How MCP Simplifies AI Tool Integration with JSON‑RPC and Spring AI

This article explains the MCP framework’s architecture, execution flow, JSON‑RPC communication, and lifecycle, showing how it standardizes AI function calling and tool integration using Spring AI, with code examples and comparisons of communication methods.

Zhuanzhuan Tech

1 Introduction

With the rapid development of AI, technologies such as Retrieval‑Augmented Generation (RAG) and function calling have greatly enhanced large language model (LLM) conversational capabilities. However, implementing function calling is complex because each system and model requires a custom adapter. MCP was created to standardize and simplify reliable external tool invocation for AI applications.

2 Execution Flow

When a user asks a question (e.g., “What’s the weather in Beijing?”), the application packages the user query and a pre‑identified list of tools into a prompt sent to the LLM. The LLM, using its function‑calling ability, selects the appropriate tool and returns a structured call. The application then invokes the tool, feeds the result back to the LLM, and finally returns the LLM’s answer to the user.
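The round trip above can be sketched in plain Java. Note that `callLlm` and `invokeTool` here are hypothetical stand‑ins for the model call and the MCP tool invocation, not real Spring AI APIs:

```java
import java.util.List;
import java.util.Map;

public class FlowSketch {
    record ToolCall(String name, Map<String, String> args) {}

    // Stand-in for the LLM: given the query and the tool list,
    // it returns a structured function call.
    static ToolCall callLlm(String query, List<String> tools) {
        return new ToolCall("getWeather", Map.of("city", "Beijing"));
    }

    // Stand-in for executing the chosen tool on the MCP server.
    static String invokeTool(ToolCall call) {
        return call.args().get("city") + " weather is sunny 25℃";
    }

    public static void main(String[] args) {
        List<String> tools = List.of("getWeather");                       // 1. pre-identified tools
        ToolCall call = callLlm("What's the weather in Beijing?", tools); // 2. LLM selects a tool
        String result = invokeTool(call);                                 // 3. app invokes the tool
        System.out.println("Answer based on tool result: " + result);     // 4. result returned to user
    }
}
```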

3 MCP Architecture

3.1 Architecture Design

MCP follows a classic client‑server (C/S) model.

Host: Receives user requests, interacts with the LLM, and calls tools; essentially an AI agent.

Client: Implements MCP rules and communicates with the MCP server.

Server: Implements the tool logic and returns execution results to the client.

3.2 Basic Functions

MCP standardizes four core primitives of LLM interaction:

Resource: Data such as files or database records that the client can read.

Tool: Functions that the LLM can invoke.

Prompt: Pre‑written templates that guide the LLM to complete specific tasks.

Sampling: Lets the server request an LLM completion through the client, reversing the usual call direction.

4 MCP Communication Principles

4.1 JSON‑RPC

MCP uses JSON‑RPC 2.0 as its message format. JSON‑RPC is a lightweight, JSON‑based remote procedure call protocol; because it is transport‑agnostic, the same messages can travel over STDIO or HTTP, and it is more concise than designing a bespoke REST‑style API for each integration.

Request structure
{
  "jsonrpc": "2.0",
  "id": number | string,
  "method": string,
  "params": object?
}
Response structure
{
  "jsonrpc": "2.0",
  "id": number | string,
  "result": object?,
  "error": {
    "code": number,
    "message": string,
    "data": unknown?
  }?
}
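As a concrete illustration (the id and payload values here are made up), a tools/list request and its response might look like this. Per the JSON‑RPC 2.0 specification, a response carries either result or error, never both:

```json
{ "jsonrpc": "2.0", "id": 1, "method": "tools/list", "params": {} }

{
  "jsonrpc": "2.0",
  "id": 1,
  "result": {
    "tools": [
      { "name": "getWeather", "description": "Get weather by city" }
    ]
  }
}
```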

4.2 Communication Methods

STDIO: The server runs as a child process of the client, communicating via stdin/stdout. This method is fast, has no external dependencies, and works offline, but it is synchronous, offers limited concurrency, and is confined to a single local client–server pair, so it cannot reach servers on other machines.

SSE (Server‑Sent Events): A streaming, HTTP‑based transport where the client first connects to /sse, receives a session‑specific message endpoint, and then posts JSON‑RPC methods such as tools/list and tools/call to that endpoint. SSE provides asynchronous, non‑blocking communication but requires a persistent connection that may drop.

More recently, MCP introduced a Streamable HTTP transport that supersedes SSE, offering similar streaming benefits without depending on a single long‑lived connection.

5 Lifecycle

After the MCP client and server establish a connection, the client immediately requests the list of available tools, demonstrating MCP’s dynamic plug‑in capability. The following sections illustrate how to set up the environment and invoke a tool using Spring AI.

5.1 Environment Setup

Expose a tool on the server side, for example a weather‑retrieval function:

/**
 * Build a tool that gets weather by city.
 * @param city City name
 * @return Weather information
 */
@Tool(name = "getWeather", description = "Get weather by city")
public String getWeather(String city) {
    // Demo stub; a real implementation would query a weather service.
    return city + " weather is sunny 25℃";
}

Spring AI exposes methods annotated with @Tool through a ToolCallbackProvider bean. The client must configure the MCP server address:

spring:
  ai:
    mcp:
      client:
        sse:
          connections:
            server1:
              # SSE server
              url: http://127.0.0.1:8080

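For completeness, a matching server‑side configuration might look like the following sketch; the property names follow the Spring AI MCP server starter and should be verified against the version in use:

```yaml
spring:
  ai:
    mcp:
      server:
        name: weather-server
        version: 1.0.0
server:
  port: 8080
```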
5.2 Establish Connection and Get Tool List

When the application starts, Spring injects McpClient and ToolCallbackProvider, which then request the tool list from the server via the tools/list JSON‑RPC method.

public Mono<McpSchema.ListToolsResult> listTools(String cursor) {
    return this.withInitializationCheck("listing tools", initializedResult -> {
        if (this.serverCapabilities.tools() == null) {
            return Mono.error(new McpError("Server does not provide tools capability"));
        }
        return this.mcpSession.sendRequest(McpSchema.METHOD_TOOLS_LIST,
                new McpSchema.PaginatedRequest(cursor), LIST_TOOLS_RESULT_TYPE_REF);
    });
}

5.3 Call Tool

When the user asks “What’s the weather in Beijing?”, the client sends the user input and the tool list to the LLM. The LLM returns a function call payload, which the client executes via the tools/call JSON‑RPC method.

public Mono<McpSchema.CallToolResult> callTool(McpSchema.CallToolRequest request) {
    return this.withInitializationCheck("calling tools", initializedResult -> {
        if (this.serverCapabilities.tools() == null) {
            return Mono.error(new McpError("Server does not provide tools capability"));
        }
        return this.mcpSession.sendRequest(McpSchema.METHOD_TOOLS_CALL, request, CALL_TOOL_RESULT_TYPE_REF);
    });
}
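On the wire, this resolves to a plain JSON‑RPC request/response pair for tools/call (the id and text values here are illustrative):

```json
{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "tools/call",
  "params": {
    "name": "getWeather",
    "arguments": { "city": "北京" }
  }
}

{
  "jsonrpc": "2.0",
  "id": 2,
  "result": {
    "content": [{ "type": "text", "text": "北京 weather is sunny 25℃" }],
    "isError": false
  }
}
```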

The LLM’s response includes a tool call such as:

[{
  "assistantMessage": {
    "toolCalls": [{
      "id": "call_b4a9cb0f04a3495d941b71",
      "type": "function",
      "name": "spring_ai_mcp_client_server1_getWeather",
      "arguments": "{\"city\": \"北京\"}"
    }],
    "chatGenerationMetadata": {
      "finishReason": "TOOL_CALLS"
    }
  }
}]

After the tool execution, the client sends the result back to the LLM, which may request further processing until it decides the conversation is complete. The final answer is then returned to the user.

6 Summary

MCP provides a standardized interaction protocol for AI models, reducing integration effort and offering a clear path for tool‑driven AI applications. While it improves developer productivity, it also introduces higher token consumption due to frequent LLM interactions and presents security challenges such as prompt injection that still need robust solutions.
