Build and Debug LangGraph Workflows with Alibaba Qwen in Minutes
This article walks through creating a LangGraph workflow in Python: first with OpenAI's GPT-5-nano model, then swapping in Alibaba's Qwen 3.5-plus model. Along the way it shows how to suppress warnings, filter out the model's "thinking" output, visualize the graph, and troubleshoot common errors, with no prior AI coding experience required.
In the AI era, a new technology can be picked up quickly; this tutorial demonstrates that by building a simple LangGraph chatbot workflow in Python.
1. Basic LangGraph with OpenAI
First, import the necessary packages and define a node that calls the OpenAI model:
```python
from langchain_openai import ChatOpenAI
from langgraph.graph import StateGraph, MessagesState, START, END

def chatbot(state: MessagesState):
    return {"messages": [ChatOpenAI(model="gpt-5-nano").invoke(state["messages"])]}

graph = StateGraph(MessagesState)
graph.add_node("chatbot", chatbot)
graph.add_edge(START, "chatbot")
graph.add_edge("chatbot", END)

app = graph.compile()
res = app.invoke({"messages": [("user", "Hello, please introduce LangGraph in one sentence")]})
print(res["messages"][-1].content)
```

You can visualize the graph with:
```python
from IPython.display import Image, display
display(Image(app.get_graph().draw_mermaid_png()))
```

2. Switching to Alibaba Qwen
Replace the OpenAI model with the ChatAnthropic class from langchain_anthropic, pointed at Alibaba's Anthropic-compatible DashScope endpoint, and supply your own API key:
```python
from langchain_anthropic import ChatAnthropic
from langgraph.graph import StateGraph, MessagesState, START, END

llm = ChatAnthropic(
    model="qwen3.5-plus",
    api_key="sk-05bd3dxxxxxxxxxxxxxxxxxxxxxxx",
    base_url="https://dashscope.aliyuncs.com/apps/anthropic",
    max_tokens=1024,
)

def chatbot(state: MessagesState):
    return {"messages": [llm.invoke(state["messages"])]}

graph = StateGraph(MessagesState)
graph.add_node("chatbot", chatbot)
graph.add_edge(START, "chatbot")
graph.add_edge("chatbot", END)

app = graph.compile()
res = app.invoke({"messages": [("user", "Hello, please introduce LangGraph in one sentence")]})
print(res["messages"][-1].content)
```

3. Handling Warnings and Filtering Thinking Output
To silence unrelated warnings and extract only the final text answer (ignoring the model's internal "thinking" parts), add a warning filter and post-process the response:
```python
import warnings
warnings.filterwarnings("ignore")

from langchain_anthropic import ChatAnthropic
from langgraph.graph import StateGraph, MessagesState, START, END

llm = ChatAnthropic(...)  # same configuration as in the previous section

def chatbot(state: MessagesState):
    response = llm.invoke(state["messages"])
    # If the response contains a list of parts, keep only the text part
    if isinstance(response.content, list):
        for item in response.content:
            if item["type"] == "text":
                response.content = item["text"]
                break
    return {"messages": [response]}

# Build and run the graph as before
```

4. Running the Final Workflow
After applying the warning filter and content-filtering logic, the workflow runs cleanly and returns the expected one-sentence description of LangGraph. The graph can be visualized in a Jupyter notebook with the same `Image(app.get_graph().draw_mermaid_png())` call.
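The filtering step can also be factored into a small standalone helper, which makes it easy to test without calling the model. This is a sketch: the list-of-parts shape below mirrors the Anthropic-style content format used in the node above, and the sample data is made up for illustration:

```python
def extract_text(content):
    """Return only the final text answer from a response content that may be
    either a plain string or a list of parts (e.g. thinking + text)."""
    if isinstance(content, list):
        for item in content:
            if item.get("type") == "text":
                return item["text"]
        return ""  # no text part found
    return content

# Illustrative content shapes (not real API responses):
parts = [
    {"type": "thinking", "thinking": "internal reasoning..."},
    {"type": "text", "text": "LangGraph composes LLM calls as graph nodes."},
]
print(extract_text(parts))           # only the text part survives
print(extract_text("plain answer"))  # plain strings pass through unchanged
```

Inside the node, `response.content = extract_text(response.content)` would replace the inline loop.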
5. Key Takeaways
LangGraph lets you compose LLM calls as nodes in a directed graph.
Switching between providers (OpenAI → Alibaba Qwen) only requires changing the model class and credentials.
Suppressing irrelevant warnings and filtering out the model’s internal thinking stream yields concise user‑facing answers.
The entire process—from writing a few lines of Python to obtaining a working chatbot—can be completed in under 20 minutes, even for beginners.
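The provider-swap takeaway can be sketched as a small factory function. This is a hypothetical helper, not part of the tutorial's code: the class names and endpoint match the ones used above, while `make_llm` and the `DASHSCOPE_API_KEY` environment variable name are assumptions:

```python
import os

def make_llm(provider: str):
    """Hypothetical factory: choose a chat model class by provider name.
    Imports are local so only the selected provider's package is required."""
    if provider == "openai":
        from langchain_openai import ChatOpenAI
        return ChatOpenAI(model="gpt-5-nano")
    if provider == "qwen":
        from langchain_anthropic import ChatAnthropic
        return ChatAnthropic(
            model="qwen3.5-plus",
            api_key=os.environ["DASHSCOPE_API_KEY"],  # assumed env var name
            base_url="https://dashscope.aliyuncs.com/apps/anthropic",
            max_tokens=1024,
        )
    raise ValueError(f"unknown provider: {provider}")
```

The rest of the graph code stays identical; only the `llm` object used inside the node changes.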
Senior Tony
Former senior tech manager at Meituan, ex‑tech director at New Oriental, with experience at JD.com and Qunar; specializes in Java interview coaching and regularly shares hardcore technical content. Runs a video channel of the same name.