Can Chatbox Connect to LangGraph Agents? Full Integration Using Trae Solo
This article demonstrates how to use ByteDance's Trae Solo to automatically generate a FastAPI service that wraps a LangChain/LangGraph weather‑assistant agent, enabling seamless OpenAI‑style access from the Chatbox client, with step‑by‑step code, testing, and deployment details.
1. Introduction
The author, a long‑time developer of LangChain and LangGraph agents, notes that deploying agents for direct use can be cumbersome, especially when the official agent‑chat‑ui requires extra services. They wonder whether a lightweight FastAPI wrapper could expose the agent to the popular Chatbox AI client.
2. Trae Solo Overview
Trae Solo, described as a "domestic Cursor", allows users to describe a development task in natural language (e.g., "build an online flash‑card app") and automatically performs requirement analysis, code generation, testing, and deployment.
3. Integration Strategy
Based on Chatbox’s OpenAI‑compatible API, the integration consists of three clear steps:
Adapter Layer: Write a FastAPI endpoint that receives OpenAI-format requests from Chatbox.
Format Conversion & Invocation: Translate the request into the message format expected by the LangChain/LangGraph agent and invoke the agent.
Result Return: Re-package the agent's response into OpenAI-style JSON and send it back to Chatbox.
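The three steps above can be sketched as a single conversion function. This is an illustrative sketch, not the article's generated code; `fake_agent_invoke` is a stub standing in for the real agent's `invoke`, and the helper names are made up for the example:

```python
# Sketch of the adapter layer: OpenAI-style request in, OpenAI-style JSON out.
import time
import uuid

def fake_agent_invoke(state):
    # Stand-in for agent.invoke(); echoes the last user message back.
    last = state["messages"][-1]["content"]
    return {"messages": state["messages"]
            + [{"role": "assistant", "content": f"Weather for: {last}"}]}

def to_openai_response(openai_request: dict, invoke=fake_agent_invoke) -> dict:
    # Step 1: accept the OpenAI-format request (already parsed to a dict).
    user_message = openai_request["messages"][-1]["content"]
    # Step 2: convert to the agent's message format and invoke it.
    result = invoke({"messages": [{"role": "user", "content": user_message}]})
    reply = result["messages"][-1]["content"]
    # Step 3: re-package the reply as an OpenAI-style chat completion.
    return {
        "id": f"chatcmpl-{uuid.uuid4().hex[:12]}",
        "object": "chat.completion",
        "created": int(time.time()),
        "model": openai_request.get("model", "weather-assistant"),
        "choices": [{"index": 0,
                     "message": {"role": "assistant", "content": reply},
                     "finish_reason": "stop"}],
    }

resp = to_openai_response({"model": "deepseek-chat",
                           "messages": [{"role": "user", "content": "北京的天气"}]})
print(resp["choices"][0]["message"]["content"])
```

In the real service the request dict comes from a Pydantic model and `invoke` is the agent itself; everything else stays the same shape.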
4. Using Trae Solo to Build the Weather Assistant
The author starts from an existing LangChain 1.0 weather-assistant agent (create_react_agent) that calls the free XinZhi Weather API. They then launch Trae Solo in SOLO mode and provide the following natural-language prompt:
"You are a Python programming expert, proficient in building agents with LangChain 1.0 and writing interfaces with FastAPI, able to fulfil any requirement the user raises. The langchain_weather.py file in the current project contains a weather-assistant agent. I want an OpenAI-style interface written with FastAPI; note that it must be a streaming interface, so that the agent can be accessed in API form by Chatbox and other clients. Please implement my requirement and start the interface locally."

Trae Solo automatically performs the following sub-steps:
Analyzes langchain_weather.py and discovers an undefined type WeatherQuery.
Generates a new main.py with a FastAPI service that includes CORS middleware, OpenAI‑style request/response models, and both streaming and non‑streaming handling.
Creates a virtual environment with uv and resolves import issues.
Detects port conflicts and adjusts the launch configuration.
Writes a test‑api.py script and uses curl commands to verify the service.
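The article does not show the generated test script, but the kind of curl check Trae Solo runs can be reproduced in Python; the port and payload below follow the article's setup, and the helper name is illustrative:

```python
# A Python equivalent of the curl smoke test: POST one chat request to the
# service (assumed to be on localhost:8001) and print the assistant's reply.
import json
import urllib.error
import urllib.request

def build_request(prompt: str,
                  url: str = "http://localhost:8001/chat/completions") -> urllib.request.Request:
    payload = {
        "model": "deepseek-chat",
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,
    }
    return urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

try:
    with urllib.request.urlopen(build_request("北京的天气"), timeout=5) as resp:
        print(json.loads(resp.read())["choices"][0]["message"]["content"])
except urllib.error.URLError as exc:
    print(f"service not reachable: {exc}")  # expected if the server is not running
```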
5. FastAPI Service Code (excerpt)
from fastapi import FastAPI, HTTPException
from fastapi.middleware.cors import CORSMiddleware
from fastapi.responses import StreamingResponse
from pydantic import BaseModel, Field
from typing import List, Optional, Union, Generator
import uvicorn, time, json

from langchain_weather import agent

app = FastAPI(title="Weather Assistant API",
              description="OpenAI-style weather assistant API for Chatbox",
              version="1.0.0")
app.add_middleware(CORSMiddleware, allow_origins=["*"], allow_credentials=True,
                   allow_methods=["*"], allow_headers=["*"])

class Message(BaseModel):
    role: str
    content: str

class ChatCompletionRequest(BaseModel):
    model: str = "deepseek-chat"
    messages: List[Message]
    temperature: Optional[float] = 0.7
    max_tokens: Optional[int] = None
    stream: Optional[bool] = False

# ... (response models omitted for brevity) ...

@app.post("/chat/completions")
async def chat_completions(request: ChatCompletionRequest):
    user_message = request.messages[-1].content
    # invoke the LangChain agent
    result = agent.invoke({"messages": [{"role": "user", "content": user_message}]})
    # extract assistant content and return OpenAI-style JSON or stream
    ...

6. Configuring Chatbox
After starting the FastAPI server (default http://localhost:8001), the author adds a new model provider in Chatbox settings, pointing the API host to the server URL and the path to /chat/completions.
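Chatbox's streaming mode expects Server-Sent Events in OpenAI's chat.completion.chunk format, ending with a `[DONE]` sentinel. The streaming branch elided in the section 5 excerpt could be sketched like this (illustrative, not Trae Solo's actual output; the character-level chunking is a simplification):

```python
# Sketch of the elided streaming branch: yield OpenAI-style SSE chunk lines.
import json
import time

def sse_chunks(reply: str, model: str = "deepseek-chat"):
    # Each yielded string is one "data: {...}" SSE event.
    created = int(time.time())
    for token in reply:  # character-level chunks keep the sketch simple
        chunk = {
            "id": "chatcmpl-demo",
            "object": "chat.completion.chunk",
            "created": created,
            "model": model,
            "choices": [{"index": 0, "delta": {"content": token},
                         "finish_reason": None}],
        }
        yield f"data: {json.dumps(chunk, ensure_ascii=False)}\n\n"
    # Terminal chunk with finish_reason, then the [DONE] sentinel clients wait for.
    done = {
        "id": "chatcmpl-demo",
        "object": "chat.completion.chunk",
        "created": created,
        "model": model,
        "choices": [{"index": 0, "delta": {}, "finish_reason": "stop"}],
    }
    yield f"data: {json.dumps(done)}\n\n"
    yield "data: [DONE]\n\n"

# In the FastAPI handler this generator would be wrapped as
# StreamingResponse(sse_chunks(reply), media_type="text/event-stream").
```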
7. Testing the Integration
Queries such as "北京的天气" and "上海的天气" are sent from Chatbox. The agent returns weather data spoken in a sweet Lin Zhi‑ling‑style tone, confirming that the end‑to‑end pipeline—from agent code to deployed API—was generated almost entirely by Trae Solo.
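On the client side, Chatbox reassembles the reply from those SSE lines. A rough Python equivalent of that parsing is handy for debugging the endpoint by hand (the function name and sample lines are illustrative, not from the article):

```python
# Reassemble an assistant reply from OpenAI-style SSE lines, roughly as a
# chat client would. `lines` could come from iterating over the HTTP response.
import json

def collect_reply(lines):
    parts = []
    for line in lines:
        line = line.strip()
        if not line.startswith("data: "):
            continue  # skip blank separators and keep-alive comments
        data = line[len("data: "):]
        if data == "[DONE]":
            break  # end-of-stream sentinel
        delta = json.loads(data)["choices"][0]["delta"]
        parts.append(delta.get("content", ""))
    return "".join(parts)

sample = [
    'data: {"choices": [{"index": 0, "delta": {"content": "晴"}}]}',
    'data: {"choices": [{"index": 0, "delta": {"content": ", 25°C"}}]}',
    "data: [DONE]",
]
print(collect_reply(sample))  # → 晴, 25°C
```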
8. Outlook
The experiment shows that Trae Solo can automatically handle code generation, interface adaptation, testing, and deployment, embodying a "describe‑to‑develop" workflow. While the demo is simple, the author believes the tool’s potential will grow for more complex production scenarios and even for automatically creating LangChain agents from natural‑language specifications.
Fun with Large Models
Master's graduate from Beijing Institute of Technology, published four top‑journal papers, previously worked as a developer at ByteDance and Alibaba. Currently researching large models at a major state‑owned enterprise. Committed to sharing concise, practical AI large‑model development experience, believing that AI large models will become as essential as PCs in the future. Let's start experimenting now!