Six AI Frameworks Supporting Model Context Protocol (MCP)

This guide explains the Model Context Protocol (MCP) and compares six Python and TypeScript AI frameworks that implement it: OpenAI Agents SDK, Praison AI, LangChain, Chainlit, Agno, and Upsonic. It walks through their architectures, registries, and code integrations, and discusses the benefits, challenges, and future standardization of MCP in AI agent development.


The AI Agent Toolkit provides diverse APIs for building AI solutions, but integrating these tools into AI applications often becomes chaotic. This article introduces the Model Context Protocol (MCP) as an industry‑standard practice for supplying context to large language models (LLMs) and agents.

LLM Context Specification

Without proper context, LLMs cannot retrieve real‑time information, execute code, or call external tools. MCP addresses this limitation by giving hosts such as Cursor, Claude Desktop, and Windsurf a standard way to connect AI agents to external tools and data through MCP servers.

What Is MCP?

MCP marks a third stage in how LLM applications obtain context: the first generation relied solely on training data, the second bolted on ad‑hoc tool integrations, and the third pairs LLMs with a standard architecture that integrates external applications cleanly while keeping systems maintainable.

Anthropic open‑sourced MCP to help enterprises link cloud‑hosted data with AI systems.

Advantages of MCP

Architecture: A clear, flexible design for tool and API interaction.

Improved External Tool Access: Standardized interfaces bridge LLMs and third‑party systems.

Eliminates Custom Implementations: Reduces weeks‑long development cycles for each new tool.

Community‑Driven: Numerous open‑source servers and registries foster ecosystem growth.

Authentication: Built‑in auth and permission controls (e.g., Google Sheets or Gmail verification).

Tool Discovery: Simplifies finding and using external tools compared to manual installation.

Scalability: Handles hundreds of tools and large user bases.

Industry Standard: Provides a uniform way for agents to obtain required context.

Different MCP Server Types

Server‑Sent Events (SSE): The client connects to a remote server over HTTP and receives responses as a stream of server‑sent events.

Standard Input/Output (STDIO): The client launches the server as a local subprocess and exchanges messages over stdin/stdout.
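The two transports can be sketched with the official `mcp` Python SDK. This is a minimal illustration, not production code: the `@modelcontextprotocol/server-everything` package is the demo server from the community servers repo, and the SSE URL is a placeholder.

```python
# Sketch: describing and connecting to the two MCP transport types.
import asyncio

# Plain-data descriptions of the two transport styles:
STDIO_CONFIG = {"command": "npx", "args": ["-y", "@modelcontextprotocol/server-everything"]}
SSE_CONFIG = {"url": "http://localhost:8000/sse"}  # an SSE server exposes an HTTP endpoint

async def list_stdio_tools() -> list[str]:
    # Framework imports live inside the function so the module stays
    # importable without the `mcp` package installed.
    from mcp import ClientSession, StdioServerParameters
    from mcp.client.stdio import stdio_client

    params = StdioServerParameters(command=STDIO_CONFIG["command"], args=STDIO_CONFIG["args"])
    async with stdio_client(params) as (read, write):      # spawn the local subprocess
        async with ClientSession(read, write) as session:  # speak MCP over stdin/stdout
            await session.initialize()
            tools = await session.list_tools()
            return [t.name for t in tools.tools]

if __name__ == "__main__":
    print(asyncio.run(list_stdio_tools()))
```

An SSE client works the same way, except the session is built from `mcp.client.sse.sse_client(SSE_CONFIG["url"])` instead of a subprocess.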

Popular MCP Tool Registries

GitHub MCP Servers: Community‑maintained collection (https://github.com/modelcontextprotocol/servers).

Glama Registry: Production‑ready open‑source MCP servers (https://glama.ai/mcp/servers).

Smithery Registry: Access to over 2,000 MCP servers.

OpenTools: Generative API offering hundreds of ready‑made MCP tools (https://opentools.com/).

PulseMCP Registry: Curated MCP applications (https://www.pulsemcp.com/).

mcp.run: Commercial MCP applications.

Composio Registry: SSE‑based MCP servers for easy framework integration.

guMCP: Free, open‑source, fully hosted MCP server from Gumloop.

Building MCP Agents with Six Frameworks

This article surveys six leading MCP‑capable frameworks built on Python and TypeScript, with a concrete example for each.

1. OpenAI Agents SDK

Using the SDK’s MCPServerStdio and MCPServerSse classes, the example creates a Streamlit UI that queries a local Git repository via an MCP server. The code installs required packages, exports the OpenAI API key, and runs the agent, producing interactive results.

import asyncio
import shutil
import streamlit as st
from agents import Agent, Runner, trace
from agents.mcp import MCPServer, MCPServerStdio

async def query_git_repo(mcp_server: MCPServer, directory_path: str, query: str):
    agent = Agent(
        name="Assistant",
        instructions=f"Answer questions about the local git repository at {directory_path}, use that for repo_path",
        mcp_servers=[mcp_server],
    )
    with st.spinner(f"Running query: {query}"):
        result = await Runner.run(starting_agent=agent, input=query)
        return result.final_output

# Streamlit UI omitted for brevity
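The omitted UI code ultimately has to construct the MCP server and pass it to `query_git_repo`. A hedged sketch of that wiring, assuming the `openai-agents` package and `uvx` are installed (`mcp-server-git` is the community Git server; the repo path and question are placeholders):

```python
# Sketch: launching the Git MCP server over STDIO and querying it
# with the OpenAI Agents SDK.
import asyncio

GIT_SERVER_PARAMS = {"command": "uvx", "args": ["mcp-server-git"]}

async def main(repo_path: str, question: str) -> None:
    from agents import Agent, Runner
    from agents.mcp import MCPServerStdio  # openai-agents SDK

    async with MCPServerStdio(params=GIT_SERVER_PARAMS) as server:
        agent = Agent(
            name="Assistant",
            instructions=f"Answer questions about the local git repository at {repo_path}, use that for repo_path",
            mcp_servers=[server],
        )
        result = await Runner.run(starting_agent=agent, input=question)
        print(result.final_output)

if __name__ == "__main__":
    asyncio.run(main(".", "Who are the most frequent contributors?"))
```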

2. Praison AI

Praison AI (Python) adds MCP support with a single line of code. The example integrates the Airbnb MCP server into a Streamlit app that collects booking details and queries the agent.

import streamlit as st
from praisonaiagents import Agent, MCP

@st.cache_resource
def get_agent():
    return Agent(
        instructions="You help book apartments on Airbnb.",
        llm="gpt-4o-mini",
        tools=MCP("npx -y @openbnb/mcp-server-airbnb --ignore-robots-txt"),
    )
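The Streamlit form then only needs to turn the booking details into a prompt and hand it to the agent. A sketch, assuming `Agent.start()` accepts a plain query string (the field names and prompt wording are illustrative):

```python
# Sketch: driving the Praison AI Airbnb agent with form input.
def build_query(destination: str, check_in: str, check_out: str) -> str:
    """Turn booking form fields into a natural-language prompt."""
    return f"Find an apartment in {destination} from {check_in} to {check_out}."

def run_search(destination: str, check_in: str, check_out: str) -> str:
    from praisonaiagents import Agent, MCP  # praisonaiagents package

    agent = Agent(
        instructions="You help book apartments on Airbnb.",
        llm="gpt-4o-mini",
        tools=MCP("npx -y @openbnb/mcp-server-airbnb --ignore-robots-txt"),
    )
    return agent.start(build_query(destination, check_in, check_out))
```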

3. LangChain

LangChain supports tool‑calling with MCP. The sample creates an async workflow that connects to a file‑system MCP server via StdioServerParameters and uses MCPToolkit to invoke tools.

from langchain_core.messages import HumanMessage
from langchain_core.output_parsers import StrOutputParser
from langchain_core.tools import BaseTool
from langchain_groq import ChatGroq

async def run(tools: list[BaseTool], prompt: str) -> str:
    model = ChatGroq(model_name="llama-3.1-8b-instant")
    tools_map = {tool.name: tool for tool in tools}
    tools_model = model.bind_tools(tools)
    messages = [HumanMessage(prompt)]
    ai_message = await tools_model.ainvoke(messages)
    messages.append(ai_message)
    # Execute each tool call the model requested and feed the results back.
    for tool_call in ai_message.tool_calls:
        selected_tool = tools_map[tool_call["name"].lower()]
        tool_msg = await selected_tool.ainvoke(tool_call)
        messages.append(tool_msg)
    return await (tools_model | StrOutputParser()).ainvoke(messages)
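The tools themselves come from the MCP server. One way to wire the `run` coroutine above to a file‑system server, sketched with the `mcp` SDK and the `langchain-mcp` `MCPToolkit` (API names are assumptions based on the package at the time of writing):

```python
# Sketch: connecting a file-system MCP server and handing its tools
# to the LangChain workflow above.
import asyncio

SERVER = {"command": "npx", "args": ["-y", "@modelcontextprotocol/server-filesystem", "."]}

async def main() -> None:
    from mcp import ClientSession, StdioServerParameters
    from mcp.client.stdio import stdio_client
    from langchain_mcp import MCPToolkit

    params = StdioServerParameters(command=SERVER["command"], args=SERVER["args"])
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            toolkit = MCPToolkit(session=session)
            await toolkit.initialize()
            # run() is the coroutine defined earlier in this section.
            answer = await run(toolkit.get_tools(), "List the files in the current directory.")
            print(answer)

if __name__ == "__main__":
    asyncio.run(main())
```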

4. Chainlit

Chainlit (Python) includes built‑in MCP support. Handlers @cl.on_mcp_connect and @cl.on_mcp_disconnect manage the connection lifecycle, and the configuration specifies the server type (SSE or STDIO) and command line (e.g., npx -y linear-mcp-server --tools=all --api-key=YOUR_KEY).

import chainlit as cl
from mcp import ClientSession

@cl.on_mcp_connect
async def on_mcp_connect(connection, session: ClientSession):
    """Discover the server's tools when a connection is established."""
    result = await session.list_tools()
    cl.user_session.set("mcp_tools", {t.name: t for t in result.tools})

@cl.on_mcp_disconnect
async def on_mcp_disconnect(name: str, session: ClientSession):
    """Clean up state for the disconnected server."""
    cl.user_session.set("mcp_tools", {})

5. Agno AI

Agno (Python) enables multi‑agent collaboration. The example builds a team of four MCP agents (Airbnb, Google Maps, web search, weather) using StdioServerParameters and AsyncExitStack to manage multiple servers.

import contextlib
from mcp import StdioServerParameters
from mcp.client.stdio import stdio_client

airbnb_server_params = StdioServerParameters(
    command="npx",
    args=["-y", "@openbnb/mcp-server-airbnb", "--ignore-robots-txt"],
)
# similar params for maps, web search, and weather servers

async with contextlib.AsyncExitStack() as stack:
    # The exit stack keeps every server subprocess alive until the block exits.
    airbnb_client, _ = await stack.enter_async_context(stdio_client(airbnb_server_params))
    # create agents with these clients
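Agno also ships its own MCP wrapper, which removes the need to manage raw clients. A hedged sketch of the multi‑server pattern using `agno.tools.mcp.MCPTools` (class and parameter names assumed from the Agno docs; only two of the four servers are shown):

```python
# Sketch: one Agno agent per MCP server, all kept alive by one exit stack.
SERVERS = {
    "airbnb": {"command": "npx", "args": ["-y", "@openbnb/mcp-server-airbnb", "--ignore-robots-txt"]},
    "maps": {"command": "npx", "args": ["-y", "@modelcontextprotocol/server-google-maps"]},
}

async def build_team() -> list:
    import contextlib
    from mcp import StdioServerParameters
    from agno.agent import Agent
    from agno.tools.mcp import MCPTools

    agents = []
    async with contextlib.AsyncExitStack() as stack:
        for name, cfg in SERVERS.items():
            params = StdioServerParameters(command=cfg["command"], args=cfg["args"])
            mcp_tools = await stack.enter_async_context(MCPTools(server_params=params))
            agents.append(Agent(name=f"{name} agent", tools=[mcp_tools]))
        # ...run the team here, while the exit stack keeps every server alive
    return agents
```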

6. Upsonic

Upsonic (Python) offers a task‑oriented framework. The sample defines a HackerNewsMCP tool using the uvx command and combines it with a fallback Search tool to analyze top HackerNews stories.

from upsonic import Agent, Task  # Search is Upsonic's built-in web-search tool

class HackerNewsMCP:
    # An MCP tool is declared by its launch command; `uvx mcp-hn` runs
    # the Hacker News MCP server.
    command = "uvx"
    args = ["mcp-hn"]

task = Task(
    "Analyze the top 5 HackerNews stories for today...",
    tools=[HackerNewsMCP, Search],
)
agent = Agent("Tech News Analyst", company_url="https://news.ycombinator.com/")
agent.print_do(task)

Challenges and Outlook

While MCP simplifies tool integration, developers face difficulties evaluating tool quality, discovering suitable tools, and dealing with inconsistent server configurations across providers. The ecosystem is actively discussing standardization, and future work may deliver a unified installation experience similar to pip for MCP‑based applications.

Conclusion

This tutorial introduced MCP, explained its growing popularity, and demonstrated concrete implementations across six Python/TypeScript frameworks for building LLM‑driven agents and AI assistants. By understanding MCP’s architecture, registries, and integration patterns, developers can create more maintainable and extensible AI systems.

Tags: TypeScript, Python, AI agents, MCP, LangChain, OpenAI, Model Context Protocol
Written by

AI Algorithm Path

A public account focused on deep learning, computer vision, and autonomous driving perception algorithms, covering visual CV, neural networks, pattern recognition, related hardware and software configurations, and open-source projects.
