LangGraph vs Semantic Kernel: Comparing State‑Graph and Kernel‑Plugin Architectures

This article updates the comparison of LangGraph and Semantic Kernel for Python AI agents, outlines their recent releases, explains their core architectural models, shows side‑by‑side code examples, and provides concrete decision criteria for choosing the appropriate framework.

One‑Sentence Decision Rule

Agent workflows that must be stateful, persistent, and recoverable under explicit control: LangGraph. A protocol‑first, plugin‑composable Agent platform: Semantic Kernel.

Both frameworks have matured significantly in the past six months, making older comparison articles outdated.

LangGraph: Graph Runtime

LangGraph models an Agent system as a typed state graph in which developers explicitly define the state, nodes (Python callables or sub‑graphs), and edges (state transitions). The state is a typed object, typically a TypedDict or Pydantic model, that each node updates incrementally as execution proceeds, as in the sketch below.
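A minimal sketch of this model, with an illustrative one‑node graph:

from typing import TypedDict
from langgraph.graph import StateGraph, START, END

class AgentState(TypedDict):
    question: str
    answer: str

def answer_node(state: AgentState) -> dict:
    # A node is a plain callable that returns a partial state update.
    return {"answer": f"You asked: {state['question']}"}

builder = StateGraph(AgentState)
builder.add_node("answer", answer_node)
builder.add_edge(START, "answer")
builder.add_edge("answer", END)
graph = builder.compile()
print(graph.invoke({"question": "What is LangGraph?"}))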

The v1.0 documentation centers on three concepts: persistent execution, controllability, and human‑in‑the‑loop collaboration. Checkpointing enables crash recovery, manual insertion of review steps, and parallel sub‑Agent branching without workarounds.
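Continuing the sketch above, checkpointing and a human review pause are opt‑in at compile time (the node name is illustrative):

from langgraph.checkpoint.memory import InMemorySaver

checkpointer = InMemorySaver()
# Pause before the "answer" node so a human can inspect or edit the state.
graph = builder.compile(checkpointer=checkpointer, interrupt_before=["answer"])

config = {"configurable": {"thread_id": "review-1"}}
graph.invoke({"question": "Approve this?"}, config)  # runs up to the interrupt and checkpoints
# ...inspect graph.get_state(config), then resume from the saved checkpoint:
graph.invoke(None, config)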

Since v1, LangChain's create_agent runs on the LangGraph runtime, establishing a clear stack: create_agent handles the standard tool‑call loop, while custom workflow topologies fall back to raw LangGraph.
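A sketch of that stack, assuming the LangChain v1 import path and parameter names for create_agent:

from langchain.agents import create_agent

def get_weather(city: str) -> str:
    """Get the current weather for a given city."""
    return f"It's sunny and 28°C in {city}."

agent = create_agent(
    model="openai:gpt-4o-mini",
    tools=[get_weather],
    system_prompt="You are a helpful weather assistant.",
)
result = agent.invoke({"messages": [{"role": "user", "content": "Weather in Mumbai?"}]})
print(result["messages"][-1].content)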

Semantic Kernel: Kernel‑Plugin Middleware

Semantic Kernel introduces a Kernel abstraction that hosts AI services, plugins, and functions. Plugins expose functions to the model and Agents and can be native Python code, prompt templates, or external schemas.
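For example, a prompt‑template function can be registered next to native code (a sketch; the plugin and function names are illustrative):

from semantic_kernel import Kernel

kernel = Kernel()
# The prompt template itself is the function body; {{$input}} is its parameter.
kernel.add_function(
    plugin_name="WriterPlugin",
    function_name="summarize",
    prompt="Summarize the following text in one sentence: {{$input}}",
)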

"Any Plugin available to an Agent is managed within its respective Kernel instance — this enables each Agent to access distinct functionalities based on its specific role."

Agent orchestration emerges from the model choosing which functions to invoke (and, where used, a planner arranging the call order), rather than from a pre‑drawn graph topology.

The Kernel acts as a dependency container; the @kernel_function decorator makes Python methods discoverable by the model; FunctionChoiceBehavior.Auto() directs the model to invoke functions as needed. State is stored in a developer‑maintained ChatHistory object, not persisted by the runtime.

Architectural Differences

Main Abstraction: LangGraph → typed state graph (nodes + edges); Semantic Kernel → Kernel + plugins + Agent.

Workflow Control: LangGraph → developer defines the topology; Semantic Kernel → topology emerges from the Agent's function calls.

State Management: LangGraph → first‑class typed state with checkpoints; Semantic Kernel → externalized, developer‑managed.

Best Mental Model: LangGraph → persistent state machine; Semantic Kernel → composable AI middleware.

Same Agent, Two Implementations

LangGraph – Weather Agent with Checkpoints

pip install -U langgraph "langchain[openai]"
from langgraph.prebuilt import create_react_agent
from langgraph.checkpoint.memory import InMemorySaver
from langchain.chat_models import init_chat_model

# Tool
def get_weather(city: str) -> str:
    """Get the current weather for a given city."""
    return f"It's sunny and 28°C in {city}."

model = init_chat_model("openai:gpt-4o-mini", temperature=0)
checkpointer = InMemorySaver()
agent = create_react_agent(
    model=model,
    tools=[get_weather],
    prompt="You are a helpful weather assistant.",
    checkpointer=checkpointer,
)
config = {"configurable": {"thread_id": "user-session-1"}}
# Turn 1
response = agent.invoke({"messages": [{"role": "user", "content": "What is the weather in Mumbai?"}]}, config=config)
print(response["messages"][-1].content)
# Turn 2 – context is remembered automatically
followup = agent.invoke({"messages": [{"role": "user", "content": "How about Delhi?"}]}, config=config)
print(followup["messages"][-1].content)

The create_react_agent helper compiles a StateGraph that implements the model‑tool loop (in LangChain v1 the same pattern is exposed as create_agent). The checkpointer persists state per thread_id, so a crashed or interrupted run can resume automatically from its last checkpoint.

Semantic Kernel – Weather Agent with Plugin

pip install semantic-kernel
import asyncio
from semantic_kernel import Kernel
from semantic_kernel.agents import ChatCompletionAgent
from semantic_kernel.connectors.ai.open_ai import (
    OpenAIChatCompletion,
    OpenAIChatPromptExecutionSettings,
)
from semantic_kernel.connectors.ai import FunctionChoiceBehavior
from semantic_kernel.functions import KernelArguments, kernel_function
from semantic_kernel.contents import ChatHistory

class WeatherPlugin:
    @kernel_function(name="get_weather", description="Get the weather for a city.")
    def get_weather(self, city: str) -> str:
        return f"It's sunny and 28°C in {city}."

kernel = Kernel()
kernel.add_service(OpenAIChatCompletion(ai_model_id="gpt-4o-mini"))
kernel.add_plugin(WeatherPlugin(), plugin_name="WeatherPlugin")
# Let the model invoke plugin functions automatically.
settings = OpenAIChatPromptExecutionSettings()
settings.function_choice_behavior = FunctionChoiceBehavior.Auto()
agent = ChatCompletionAgent(
    kernel=kernel,
    name="WeatherAssistant",
    instructions="You are a helpful weather assistant.",
    arguments=KernelArguments(settings=settings),  # wire the execution settings into the agent
)
async def run_agent():
    history = ChatHistory()
    # Turn 1
    history.add_user_message("What is the weather in Mumbai?")
    async for msg in agent.invoke(history):
        print(f"Agent: {msg.content}")
        history.add_message(msg)
    # Turn 2
    history.add_user_message("How about Delhi?")
    async for msg in agent.invoke(history):
        print(f"Agent: {msg.content}")
        history.add_message(msg)
asyncio.run(run_agent())

Here the Kernel holds services and plugins; the @kernel_function decorator makes get_weather discoverable. State lives in a manually maintained ChatHistory object.

Six Lines That Highlight the Core Difference

# LangGraph – runtime provides persistence
checkpointer = InMemorySaver()
config = {"configurable": {"thread_id": "session-1"}}
agent.invoke(messages, config)  # auto‑resume from last checkpoint

# Semantic Kernel – developer manages state
history = ChatHistory()
agent.invoke(history)  # explicit state passing and maintenance

LangGraph delegates persistence to the runtime; Semantic Kernel leaves state handling to the developer.

Protocol Support: MCP and A2A

Semantic Kernel recently added first‑class MCP support in Python SDK v1.28.1, allowing the SDK to act as both MCP host and server with multiple transports (stdio, SSE, WebSocket). This is a substantial architectural upgrade for teams needing cross‑service orchestration.
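A sketch of the host side, assuming the MCPStdioPlugin connector from the SK Python SDK (the server command and names are illustrative; check your SDK version's docs):

import asyncio
from semantic_kernel import Kernel
from semantic_kernel.connectors.mcp import MCPStdioPlugin

async def main():
    kernel = Kernel()
    # Connect to an external MCP server over stdio and expose its tools as a plugin.
    async with MCPStdioPlugin(
        name="GitHub",
        command="npx",
        args=["-y", "@modelcontextprotocol/server-github"],
    ) as github_plugin:
        kernel.add_plugin(github_plugin, plugin_name="GitHub")
        # ...the Agent can now call the MCP server's tools like any other plugin...

asyncio.run(main())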

LangGraph’s MCP approach focuses on deployment: after deploying to LangGraph Platform, each Agent is exposed at a /mcp endpoint. Self‑hosted scenarios use the langchain-mcp-adapters package.
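A self‑hosted sketch using that adapter package (the server entry and model are illustrative, not from the package's docs):

import asyncio
from langchain_mcp_adapters.client import MultiServerMCPClient
from langgraph.prebuilt import create_react_agent

async def main():
    # Convert the tools exposed by one or more MCP servers into LangChain tools.
    client = MultiServerMCPClient({
        "weather": {
            "command": "python",
            "args": ["weather_mcp_server.py"],  # hypothetical local MCP server
            "transport": "stdio",
        }
    })
    tools = await client.get_tools()
    agent = create_react_agent("openai:gpt-4o-mini", tools)
    result = await agent.ainvoke(
        {"messages": [{"role": "user", "content": "What is the weather in Mumbai?"}]}
    )
    print(result["messages"][-1].content)

asyncio.run(main())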

Choose SK when you need native MCP/A2A inside a Python process; choose LangGraph when the Agent is a deployed service consumed via MCP.

Stability

LangGraph v1 (Oct 2025) kept core graph API stable; the only migration is deprecating create_react_agent in favor of create_agent. The team promises no breaking changes before v2.0.

Semantic Kernel's 1.0 release involved major restructuring, but subsequent 1.x releases through mid‑2025 have followed an incremental, additive path with no structural breaks.

"The claim that 'LangGraph breaks compatibility every version' is no longer true. Both frameworks now prioritize stability."

When to Choose Which

✅ Choose LangGraph when

Agent logic involves complex branching, retries, human review, or approval steps that benefit from an explicit graph topology.

Workflows must survive crashes, provide checkpoint recovery, and retain an auditable step history.

The team is already deep in the LangChain ecosystem and wants a clear upgrade path via create_agent → LangGraph.

Fine‑grained observability of node‑level execution flow is required.

✅ Choose Semantic Kernel when

Building a platform or SDK where capabilities are composed as plugins and different Agents consume different tool sets.

MCP or A2A interoperability is a core need, with native Python SDK support.

The architecture follows DI / service‑oriented patterns that align with the kernel‑plugin model.

Lightweight deployment is preferred and state is managed externally.

Conclusion

If an Agent must behave like a persistent state machine, use LangGraph. If an Agent should act as a protocol‑aware platform component, use Semantic Kernel.

Hope this helps.
