Introducing LangGraph: A Low‑Level Framework for Building Stateful AI Agents

This article explains why modern LLM‑based applications need agent capabilities, introduces LangGraph’s core features such as stateful execution, graph‑based orchestration, tool integration, human‑in‑the‑loop and multi‑agent support, and provides a step‑by‑step Python example that builds a simple chat‑bot agent.


With the rise of large language models (LLMs) such as ChatGPT and Qwen, developers are moving beyond simple chat interfaces toward agents that can decide which tools to use, maintain context across turns, and coordinate multiple sub‑agents. Traditional LLM + prompt setups lack workflow orchestration, state management, error recovery, and asynchronous interaction, prompting the need for a more robust framework.

LangGraph, released by the LangChain team, is described as "a low‑level orchestration framework for building, managing, and deploying long‑running, stateful agents." It extends the basic LLM + prompt model by providing infrastructure for process orchestration, state handling, tool calls, and monitoring.

Core features of LangGraph include:

Stateful execution: agents can run for extended periods, span multiple steps, and resume after interruptions.

Node + Edge (graph) model: the workflow is expressed as a graph in which nodes represent actions (LLM calls, tool invocations, decisions) and edges define the flow between them, including conditional jumps.

Tool integration: an LLM can invoke external tools and continue the workflow based on tool output; both short‑term (session) and long‑term memory are supported.

Human‑in‑the‑loop: the process can pause for manual approval or feedback, which is crucial for production environments.

Scalable multi‑agent support: multiple agents can cooperate, or a supervisor agent can coordinate child agents.

Compatibility with LangChain: although optional, LangGraph works seamlessly with the LangChain ecosystem for retrieval, tool usage, and LLM handling.

The article then walks through a concrete example that builds a basic chat‑bot agent using LangGraph:

Install the required packages:

pip install -U langgraph langsmith

Then create a StateGraph with a custom State type that stores a list of messages.

from typing import TypedDict, List, Dict

from langgraph.graph import StateGraph, START

class State(TypedDict):
    messages: List[Dict[str, str]]

graph_builder = StateGraph(State)
now_state = {"messages": []}

Add a chatbot node that calls an LLM (e.g., Qwen‑Turbo) and appends the assistant’s response to the state.

import os

from langchain_openai import ChatOpenAI

os.environ["DASHSCOPE_API_KEY"] = "sk-..."
llm = ChatOpenAI(model="qwen-turbo", base_url="https://dashscope.aliyuncs.com/compatible-mode/v1", api_key=os.getenv("DASHSCOPE_API_KEY"))

def chatbot(state: State):
    # llm.invoke returns an AIMessage; keep only its text content in the state.
    reply = llm.invoke(state["messages"])
    return {"messages": state["messages"] + [{"role": "assistant", "content": reply.content}]}

graph_builder.add_node("chatbot", chatbot)

Define the start edge and compile the graph:

graph_builder.add_edge(START, "chatbot")
graph = graph_builder.compile()

Finally, run the agent in a loop, updating the state with user input, invoking the graph, and printing the assistant's reply.

def runner(user_input: str):
    now_state["messages"] = now_state["messages"] + [{"role": "user", "content": user_input}]
    result = graph.invoke(now_state)
    # The graph returns the full history (including the new reply),
    # so adopt it as the new state.
    now_state["messages"] = result["messages"]
    print(result["messages"][-1]["content"])
    return result

while True:
    try:
        user_input = input("User: ")
        if user_input.lower() in ["quit", "exit", "q"]:
            print("Goodbye!")
            break
        runner(user_input)
    except Exception as e:
        print(f"Error: {e}")
        break
print(now_state)

Several additional building blocks are then demonstrated:

Tool nodes: defining a custom BasicToolNode class that encapsulates tool-calling logic and adding it to the graph (see the first sketch after this list).

Conditional routing: using add_conditional_edges to direct the workflow to a tool node when the LLM requests a tool call (second sketch).

Memory checkpoint: employing MemorySaver as a checkpointer so that the graph can persist and reload state across invocations using a thread_id (third sketch).

Human assistance: pausing execution with interrupt and resuming after a human provides input (fourth sketch).
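
The article does not reproduce the full BasicToolNode listing, so the following is a minimal sketch modeled on the pattern in the official LangGraph tutorial. It assumes the state's messages are LangChain message objects (which carry tool_calls) rather than the plain dicts used earlier, and get_weather is a made-up example tool:

import json

from langchain_core.messages import ToolMessage
from langchain_core.tools import tool

@tool
def get_weather(city: str) -> str:
    """Return a dummy weather report for a city."""
    return f"It is sunny in {city}."

class BasicToolNode:
    """A node that runs the tools requested in the last AIMessage."""

    def __init__(self, tools):
        self.tools_by_name = {t.name: t for t in tools}

    def __call__(self, state):
        messages = state.get("messages", [])
        if not messages:
            raise ValueError("No message found in input")
        outputs = []
        for tool_call in messages[-1].tool_calls:
            result = self.tools_by_name[tool_call["name"]].invoke(tool_call["args"])
            outputs.append(ToolMessage(content=json.dumps(result), name=tool_call["name"], tool_call_id=tool_call["id"]))
        return {"messages": outputs}

graph_builder.add_node("tools", BasicToolNode(tools=[get_weather]))

For the LLM to emit tool_calls at all, it must be bound to the tools first, e.g. llm_with_tools = llm.bind_tools([get_weather]), and the chatbot node should call that bound model.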
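
For conditional routing, a sketch of the router looks like this, again assuming the chatbot node appends the raw AIMessage so its tool_calls attribute can be inspected. The path map tells LangGraph which node each return value corresponds to:

from langgraph.graph import END

def route_tools(state):
    # Route to the tool node when the last AI message contains tool calls.
    last_message = state["messages"][-1]
    if getattr(last_message, "tool_calls", None):
        return "tools"
    return END

graph_builder.add_conditional_edges("chatbot", route_tools, {"tools": "tools", END: END})
graph_builder.add_edge("tools", "chatbot")  # tool results flow back to the chatbot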
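
The memory checkpoint works by compiling the graph with a checkpointer and passing a thread_id in the run config:

from langgraph.checkpoint.memory import MemorySaver

memory = MemorySaver()
graph = graph_builder.compile(checkpointer=memory)

# Each thread_id names an independent, resumable conversation.
config = {"configurable": {"thread_id": "demo-1"}}
graph.invoke({"messages": [{"role": "user", "content": "Hi, I'm Alice."}]}, config)

# Invoking again with the same thread_id resumes from the saved checkpoint.
graph.invoke({"messages": [{"role": "user", "content": "What's my name?"}]}, config)

One caveat: in the official tutorial the messages field is declared with the add_messages reducer (Annotated[list, add_messages]) so each update appends to the stored history; with the plain list type used above, an update would overwrite it.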
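
Finally, a minimal sketch of the human-assistance pattern, assuming a recent LangGraph release where interrupt and Command are exposed in langgraph.types and the graph is compiled with a checkpointer (interrupts require one):

from langgraph.types import Command, interrupt

def human_assistance(state: State):
    # interrupt() pauses the run here until a human resumes it; its return
    # value is whatever the human supplies on resume.
    human_reply = interrupt({"question": "Please review the draft answer."})
    return {"messages": state["messages"] + [{"role": "user", "content": human_reply}]}

graph_builder.add_node("human_assistance", human_assistance)

# A paused run is resumed by invoking the graph with a Command carrying
# the human's input (the same thread_id config identifies the run):
# graph.invoke(Command(resume="Looks good, send it."), config)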

The article concludes that LangGraph offers a flexible, low‑level foundation for constructing sophisticated, stateful AI agents, and that its features—graph‑based orchestration, persistent state, tool integration, and human‑in‑the‑loop control—address the limitations of simple LLM‑only applications.

Republication Notice

This article has been distilled and summarized from source material, then republished for learning and reference. If you believe it infringes your rights, please contact admin@besthub.dev and we will review it promptly.

Tags: tool integration, LLM agents, LangGraph, human-in-the-loop, graph orchestration, Python example, stateful execution
Written by

Network Intelligence Research Center (NIRC)

NIRC is based on the National Key Laboratory of Network and Switching Technology at Beijing University of Posts and Telecommunications. It has built a technology matrix across four AI domains—intelligent cloud networking, natural language processing, computer vision, and machine learning systems—dedicated to solving real‑world problems, creating top‑tier systems, publishing high‑impact papers, and contributing significantly to the rapid advancement of China's network technology.
