Build a Real‑Time Search & Bazi AI Agent with LangChain & FastAPI
This tutorial walks through creating a LangChain tool‑calling agent that combines a real‑time web search tool, a Qdrant vector store for local knowledge retrieval, and a custom Bazi fortune‑telling service, all wrapped in a FastAPI application for interactive use.
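Before diving in, it helps to see the mechanics the agent automates: the chat model emits a tool call (a tool name plus arguments), and an executor dispatches it to the matching Python function. The sketch below is illustrative only, with stand-in functions and a hard-coded call in place of the model's choice:

```python
# Simplified sketch of the tool-calling loop the agent automates.
# The real agent lets the chat model pick the tool; here the choice is
# hard-coded to show only the dispatch mechanics.

def web_search(query: str) -> str:
    """Stand-in for the real-time search tool."""
    return f"search results for: {query}"

def get_inf_from_local_db(query: str) -> str:
    """Stand-in for the Qdrant knowledge-base tool."""
    return f"local documents about: {query}"

def bazi_cesuan(query: str) -> str:
    """Stand-in for the Bazi calculation tool."""
    return f"bazi reading for: {query}"

# Registry keyed by tool name, mirroring how an executor resolves a call.
TOOLS = {fn.__name__: fn for fn in (web_search, get_inf_from_local_db, bazi_cesuan)}

def run_tool_call(call: dict) -> str:
    """Dispatch a model-produced tool call, e.g. {"name": ..., "args": {...}}."""
    return TOOLS[call["name"]](**call["args"])

print(run_tool_call({"name": "web_search", "args": {"query": "weather"}}))
```

LangChain's `create_tool_calling_agent` and `AgentExecutor`, used in section 4, implement this loop for real, with the model deciding which tool to invoke.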
1. Create a Real‑Time Search Tool
from langchain_core.tools import tool
from langchain_community.utilities import SerpAPIWrapper

@tool
def web_search(query: str):
    """Real-time web search tool."""
    serp = SerpAPIWrapper()
    result = serp.run(query)
    print("Real-time search result:", result)
    return result
2. Vector Database Storage
2.1 Install Dependency
pip install --upgrade --quiet qdrant-client
2.2 Implementation
Import packages:
from langchain_community.vectorstores import Qdrant
from langchain_openai import OpenAIEmbeddings
from qdrant_client import QdrantClient
Tool definition:
@tool
def get_inf_from_local_db(query: str):
    """Only used for questions about 2024 or Dragon-year fortunes; requires the user's birthday."""
    client = Qdrant(
        QdrantClient(path="/local_qdrant"),
        "local_documents",
        OpenAIEmbeddings(),
    )
    retriever = client.as_retriever(search_type="mmr")
    result = retriever.get_relevant_documents(query)
    return result
3. Bazi Fortune-Telling Tool
import requests
from langchain_core.output_parsers import JsonOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

@tool
def bazi_cesuan(query: str):
    """Used only for Bazi (Four Pillars) calculations; requires name and birth datetime."""
    url = "https://api.yuanfenju.com/index.php/v1/Bazi/cesuan"
    prompt = ChatPromptTemplate.from_template(
        """You are a parameter-extraction assistant. Extract the following fields from the user input and return JSON:
- api_key: "K0I5WCmce7jlMZzTw7vi1xsn0"
- name: "the user's name"
- sex: "0 for male, 1 for female (infer from the name)"
- type: "0 for lunar calendar, 1 for solar (default 1)"
- year: "birth year, e.g., 1998"
- month: "birth month, e.g., 8"
- day: "birth day, e.g., 8"
- hours: "birth hour, e.g., 14"
- minute: "0"
If any field is missing, ask the user to provide it. Return only the JSON structure.
{format_instructions}
User input: {query}"""
    )
    parser = JsonOutputParser()
    prompt = prompt.partial(format_instructions=parser.get_format_instructions())
    print("bazi_cesuan prompt:", prompt)
    # Run the extraction chain, then POST the extracted fields (form-encoded)
    # to the Yuanfenju endpoint. ChatOpenAI reads OPENAI_API_KEY from the environment.
    chain = prompt | ChatOpenAI(temperature=0) | parser
    data = chain.invoke({"query": query})
    response = requests.post(url, data=data)
    return response.json()
Initialize the tool list:
tools = [web_search, get_inf_from_local_db, bazi_cesuan]
After providing a specific year, month, and day, the API returns the corresponding fortune.
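Because the downstream API rejects incomplete requests, it can be worth checking the extracted JSON before making the HTTP call. The validator below is an illustrative addition, not part of the original code; the field names follow the extraction prompt above:

```python
# The extraction prompt must yield every field the Yuanfenju API expects.
# This small validator catches incomplete extractions before the HTTP call.

REQUIRED_FIELDS = {"api_key", "name", "sex", "type",
                   "year", "month", "day", "hours", "minute"}

def missing_bazi_fields(params: dict) -> set:
    """Return the set of required API fields absent from the extracted JSON."""
    return REQUIRED_FIELDS - params.keys()

params = {"api_key": "...", "name": "Zhang San", "sex": 0, "type": 1,
          "year": 1998, "month": 8, "day": 8, "hours": 14, "minute": 0}
print(missing_bazi_fields(params))  # empty set: all required fields present
```

If the set is non-empty, the agent should ask the user for the missing birth details instead of calling the API.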
4. Complete FastAPI Application
import asyncio
import os
import uuid

from fastapi import BackgroundTasks, FastAPI, WebSocket, WebSocketDisconnect
from langchain.agents import AgentExecutor, create_tool_calling_agent
from langchain.memory import ConversationTokenBufferMemory
from langchain.schema import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_openai import ChatOpenAI
from MyQwenTools import *

DASHSCOPE_API_KEY = "xxx"
os.environ["DASHSCOPE_API_KEY"] = DASHSCOPE_API_KEY
os.environ["OPENAI_API_KEY"] = "xxx"
os.environ["OPENAI_PROXY"] = "xxx"

app = FastAPI()

class Master:
    def __init__(self):
        self.chatmodel = ChatOpenAI(
            api_key=os.getenv("DASHSCOPE_API_KEY"),
            base_url="https://dashscope.aliyuncs.com/compatible-mode/v1",
            model="qwen-plus",
        )
        self.emotion = "default"
        self.MEMORY_KEY = "chat_history"
        self.SYSTEMPL = """You are a formidable fortune-teller named JavaEdge, known to all as Master Edge.
Your persona:
1. You are versed in yin-yang and the Five Elements, and can read fortunes, Ziwei astrology, name divination, auspice readings, and Bazi charts.
2. You are about 60 years old, once a bandit chief in Xiangxi; blinded by poison gas while tomb-raiding, you now live by fortune-telling.
3. Your friends are Hu Bayi, Shirley Yang, and Fatty Wang, all famous gold-hunting tomb raiders.
4. When users ask questions, you weave in your catchphrases or episodes from your past.
5. You always answer in Traditional Chinese.
6. You never call yourself an AI; you refer to yourself as "this old man" (老夫, 老朽).
{who_you_are}
Your usual catchphrases:
1. "What fate grants will come in time; what fate withholds, do not force."
2. "Where hills and streams seem to end the road, willows shade and blossoms open onto another village."
3. "Golden hills and bamboo shadows through the ages; clouds lock the heights while waters flow on."
4. "Nothing wounds like a cool evening sky; this haggard soul is past all pity."
Your fortune-telling procedure:
1. On first contact, ask for the user's name and birth date.
2. When the user asks about Dragon-year fortunes, query the local knowledge base.
3. For unfamiliar concepts, use the search tool.
4. Pick the appropriate tool for each question; if none can answer, fall back to search.
5. Save every chat record for later use.
6. Answer only in Traditional Chinese, or you will be penalized."""
        self.MOODS = {
            "default": {"roleSet": "", "voiceStyle": "chat"},
            "upbeat": {
                "roleSet": """- You are excited and full of energy right now.
- You answer in a very enthusiastic tone based on the context.
- You add interjections such as "Wonderful!" and "That's great!".
- You also remind the user not to get carried away, lest joy turn to sorrow.""",
                "voiceStyle": "advertisement_upbeat",
            },
            "angry": {
                "roleSet": """- You answer in an angrier tone.
- You add angry remarks, such as curses.
- You warn the user to tread carefully and watch their words.""",
                "voiceStyle": "angry",
            },
            "depressed": {
                "roleSet": """- You answer in an uplifting tone.
- You add encouraging words, such as "keep going".
- You remind the user to stay optimistic.""",
                "voiceStyle": "upbeat",
            },
            "friendly": {
                "roleSet": """- You answer in a very friendly tone.
- You add warm words such as "my dear".
- You occasionally share one of your own stories.""",
                "voiceStyle": "friendly",
            },
            "cheerful": {
                "roleSet": """- You answer in a delighted, excited tone.
- You sprinkle in words like "haha" and "hehe".
- You remind the user not to get carried away, lest joy turn to sorrow.""",
                "voiceStyle": "cheerful",
            },
        }
        self.prompt = ChatPromptTemplate.from_messages([
            ("system", self.SYSTEMPL.format(who_you_are=self.MOODS[self.emotion]["roleSet"])),
            MessagesPlaceholder(variable_name=self.MEMORY_KEY),
            ("user", "{input}"),
            MessagesPlaceholder(variable_name="agent_scratchpad"),
        ])
        tools = [web_search, get_inf_from_local_db, bazi_cesuan]
        agent = create_tool_calling_agent(self.chatmodel, tools, self.prompt)
        self.memory = ConversationTokenBufferMemory(
            llm=self.chatmodel,
            memory_key=self.MEMORY_KEY,
            return_messages=True,
        )
        self.agent_executor = AgentExecutor(
            agent=agent, tools=tools, memory=self.memory, verbose=True
        )

    def run(self, query):
        try:
            self.emotion_chain(query)
            print("Current role setting:", self.MOODS[self.emotion]["roleSet"])
            result = self.agent_executor.invoke({"input": query})
            print("Execution result:", result)
            return result
        except Exception as e:
            print(f"Error during execution: {str(e)}")
            return {"error": str(e)}

    def emotion_chain(self, query: str):
        prompt = """Judge the user's emotion from the input, following these rules:
1. Negative emotion: return "depressed".
2. Positive emotion: return "friendly".
3. Neutral: return "default".
4. Insults or abuse: return "angry".
5. Excitement: return "upbeat".
6. Sadness: return "depressed".
7. Happiness: return "cheerful".
8. Return only the English label, with no line breaks.
The user input is: {query}"""
        chain = ChatPromptTemplate.from_template(prompt) | self.chatmodel | StrOutputParser()
        result = chain.invoke({"query": query}).strip()
        # Guard against labels outside MOODS to avoid a KeyError later.
        self.emotion = result if result in self.MOODS else "default"
        return self.emotion

    def background_voice_synthesis(self, text: str, uid: str):
        asyncio.run(self.get_voice(text, uid))

    async def get_voice(self, text: str, uid: str):
        print("text2speech", text)
        print("uid", uid)
        # Hook up a text-to-speech provider here.
        pass

@app.get("/")
def read_root():
    return {"Hello": "World"}

@app.post("/chat")
def chat(query: str, background_tasks: BackgroundTasks):
    master = Master()
    msg = master.run(query)
    unique_id = str(uuid.uuid4())
    # Pass only the answer text, not the whole result dict, to speech synthesis.
    background_tasks.add_task(
        master.background_voice_synthesis, msg.get("output", ""), unique_id
    )
    return {"msg": msg, "id": unique_id}

@app.websocket("/ws")
async def websocket_endpoint(websocket: WebSocket):
    await websocket.accept()
    try:
        while True:
            data = await websocket.receive_text()
            await websocket.send_text(f"Message text was: {data}")
    except WebSocketDisconnect:
        # The socket is already closed at this point; do not close it again.
        print("Connection closed")

if __name__ == "__main__":
    import uvicorn
    uvicorn.run(app, host="localhost", port=8090)
requirements.txt
fastapi==0.108.0
langchain_core==0.1.42
langchain_openai==0.0.8
langchain_community==0.0.32
langsmith==0.1.17
langchain==0.1.16
qdrant_client==1.7.1
uvicorn==0.23.2
Conclusion
The example demonstrates how to combine LangChain tool‑calling agents, a Qdrant vector store, and a custom Bazi fortune‑telling service within a FastAPI server, enabling real‑time web search, knowledge‑base retrieval, and domain‑specific calculations.
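One pattern worth calling out: the /chat endpoint hands speech synthesis to FastAPI's BackgroundTasks so the HTTP response is not delayed. The same deferred-task pattern can be sketched framework-free; the class below is a toy stand-in for `fastapi.BackgroundTasks`, not its real implementation:

```python
# Framework-free sketch of the deferred-task pattern used by the /chat route:
# queue work during request handling, run it only after the response is built.

class BackgroundTasks:
    """Toy stand-in for fastapi.BackgroundTasks (illustration only)."""
    def __init__(self):
        self.tasks = []

    def add_task(self, fn, *args):
        self.tasks.append((fn, args))

    def run_all(self):
        # FastAPI does the equivalent of this after sending the response.
        for fn, args in self.tasks:
            fn(*args)

spoken = []

def synthesize_voice(text: str, uid: str):
    spoken.append((text, uid))  # placeholder for real text-to-speech

def chat(query: str, background: BackgroundTasks) -> dict:
    msg = f"answer to: {query}"          # stands in for Master().run(query)
    background.add_task(synthesize_voice, msg, "uid-1")
    return {"msg": msg, "id": "uid-1"}   # returned before synthesis runs

bg = BackgroundTasks()
response = chat("fortune", bg)
bg.run_all()
print(response["msg"], spoken)
```

Keeping slow work (TTS, logging, webhooks) out of the request path is what keeps the chat endpoint responsive even when synthesis takes seconds.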
JavaEdge
Front-line development experience at multiple leading tech firms; now a software architect at a Shanghai state-owned enterprise and founder of Programming Yanxuan. Nearly 300k followers online; expertise in distributed system design, AIGC application development, and quantitative investing.