How BettaFish Uses Multi‑Agent AI to Break the Information Filter Bubble
BettaFish is a Go‑based, AI‑driven multi‑agent opinion analysis platform that tackles information silos, overload, and bias by aggregating data from diverse sources, iteratively refining results through reflection loops, and delivering visualized, actionable reports for scientific decision‑making.
What Is Opinion Analysis, and Why Does BettaFish Exist?
In the era of information explosion, three major challenges arise: (1) Information Silos – algorithms only show what they want you to see; (2) Information Overload – massive data drowns the truth; (3) Subjective Bias – human cognition limits objectivity.
BettaFish (AI‑Driven Opinion Analysis Solution)
BettaFish (Betta + Fish) means “micro‑opinion” – observing the whole picture from fine details. Its core mission is to break silos, restore the true state of public opinion, predict future trends, and assist scientific decisions.
🌐 Break information silos – multi‑dimensional web‑wide collection.
🎯 Restore the original opinion – multi‑view objective analysis.
🔮 Predict future direction – data‑driven trend forecasting.
💡 Assist scientific decisions – provide executable suggestions.
Project Background: Honoring the Original Python BettaFish with a Go Port
The original BettaFish is a pioneering multi‑agent opinion analysis system written in Python. This Go version re‑implements all core ideas using the lightweight LangGraphGo framework, without relying on heavyweight libraries.
Go Implementation Based on LangGraphGo
Why LangGraphGo? It is a lightweight Go library for multi‑agent orchestration, offering:
✅ Light‑weight – only essential orchestration capabilities.
✅ Go‑native – leverages Go’s concurrency model.
✅ Type‑safe – compile‑time checks.
✅ Easy to understand – concise API, low learning curve.
// 1. Graph‑based agent orchestration
workflow := graph.NewStateGraph()
workflow.AddNode("query_engine", "Query Engine", QueryEngineNode)
workflow.AddNode("media_engine", "Media Engine", MediaEngineNode)
workflow.AddNode("insight_engine", "Insight Engine", InsightEngineNode)
workflow.AddNode("forum_engine", "Round-Table Engine", ForumEngineNode)
workflow.AddNode("report_engine", "Report Generation Engine", ReportEngineNode)
workflow.SetEntryPoint("query_engine")
workflow.AddEdge("query_engine", "media_engine")
workflow.AddEdge("media_engine", "insight_engine")
workflow.AddEdge("insight_engine", "forum_engine")
workflow.AddEdge("forum_engine", "report_engine")
workflow.AddEdge("report_engine", graph.END)
app, err := workflow.Compile()
if err != nil {
	log.Fatalf("failed to compile graph: %v", err)
}
result, err := app.Invoke(context.Background(), initialState)
if err != nil {
	log.Fatalf("failed to run graph: %v", err)
}

Core Engines and Their Roles
QueryEngine (Information Hunter) – multi‑platform search, deep mining, AI‑based quality filtering, and a reflection‑loop for iterative improvement.
MediaEngine (Visualization Expert) – image search, filtering, and visual charts (trend graphs, heatmaps, word clouds).
InsightEngine (Trend Prophet) – transforms raw signals into layered insights and forecasts using time‑series, sentiment curves, and diffusion models.
ForumEngine (Virtual Round‑Table) – simulates a multi‑agent discussion (Moderator, QueryAgent, MediaAgent) over five rounds, preserving history so later agents can build on earlier statements.
ReportEngine (Decision Assistant) – assembles a markdown report with core findings, visualizations, deep analysis, trend predictions, and concrete action recommendations.
Key Technical Features
1. State Sharing Mechanism (schema/state.go)
type BettaFishState struct {
	Query          string
	ReportTitle    string
	Paragraphs     []*Paragraph
	NewsResults    []string
	FinalReport    string
	MediaResults   []string
	InsightResults []string
	Discussion     []string
}

2. Reflection Loop in QueryEngine – after each search the LLM scores quality (e.g., 72 → 88 → 95) and automatically refines the query.
// Multi‑round reflection
maxReflections := 3
for i := 0; i < maxReflections; i++ {
	var reflection struct{ SearchQuery, SearchTool, Reasoning string }
	generateJSON(ctx, llm, SystemPromptReflection, input, &reflection)
	newResults := ExecuteSearch(ctx, reflection.SearchQuery, ...)
	// newResults feed the summarization input for this round (merge omitted)
	var summary struct{ UpdatedParagraphLatestState string }
	generateJSON(ctx, llm, SystemPromptReflectionSummary, input, &summary)
	p.Research.LatestSummary = summary.UpdatedParagraphLatestState
}

3. Multi‑Round Discussion in ForumEngine – five turns (Moderator → QueryAgent → MediaAgent → QueryAgent → Moderator) with full history passed to each turn.
turns := []struct{ Speaker, Prompt string }{
	{"Moderator", SystemPromptModerator},
	{"QueryAgent", SystemPromptNewsAgent},
	{"MediaAgent", SystemPromptMediaAgent},
	{"QueryAgent", SystemPromptNewsAgent},
	{"Moderator", SystemPromptModerator},
}
for _, turn := range turns {
	// generate this speaker's response from the accumulated history
	// append the response to the history for the next turn
}

4. Parallel Paragraph Processing – Go goroutines handle each paragraph concurrently, dramatically speeding up large‑scale analyses.
var wg sync.WaitGroup
for i := range s.Paragraphs {
	wg.Add(1)
	go func(idx int) {
		defer wg.Done()
		processParagraph(ctx, llm, s.Paragraphs[idx])
	}(i)
}
wg.Wait()

Quick‑Start Guide (Three Steps)
Prepare API Keys – set OPENAI_API_KEY and TAVILY_API_KEY (or alternatives like DeepSeek or local Ollama).
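For example, exporting both keys in the shell before running (the key values are placeholders):

```shell
export OPENAI_API_KEY="your-openai-key"
export TAVILY_API_KEY="your-tavily-key"
```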
Run the Analysis
# Basic usage
go run showcases/BettaFish/main.go "Analyze the public reaction to a new smartphone launch"

View the Report – generated markdown files appear in final_reports/, containing overall sentiment, trend graphs, deep insights, and actionable recommendations.
Typical Application Scenarios
Brand Crisis Monitoring – real‑time tracking of negative spikes, crisis level assessment, and rapid response suggestions.
Competitor Comparison – side‑by‑side sentiment, user satisfaction ranking, advantage/weakness analysis, and market opportunity identification.
Policy Impact Evaluation – multi‑angle policy interpretation, emotion distribution, industry impact, and corporate response strategies.
Product Reputation Tracking – continuous monitoring of user reviews, sentiment trends, and improvement priorities.
Industry Trend Forecasting – macro‑level development status, hot technology directions, investment heat, risk identification, and future outlook.
Advanced Configuration
Model Selection – choose between high‑quality (gpt‑4o), fast (gpt‑4o‑mini), cost‑optimized (deepseek‑chat), or privacy‑preserving local models (llama3.1).
# Deep analysis
export OPENAI_MODEL="gpt-4o"
# Fast daily monitoring
export OPENAI_MODEL="gpt-4o-mini"
# Cost‑optimized
export OPENAI_API_BASE="https://api.deepseek.com/v1"
export OPENAI_MODEL="deepseek-chat"
# Local privacy‑preserving
export OPENAI_API_BASE="http://localhost:11434/v1"
export OPENAI_MODEL="llama3.1"

Adjust Analysis Depth – modify constants in query_engine/agent.go to control reflection iterations and satisfaction thresholds.
// Fast mode
const maxReflectionIterations = 1
const satisfactionThreshold = 0.7
// Deep mode
const maxReflectionIterations = 3
const satisfactionThreshold = 0.9

Design Philosophy of LangGraphGo
LangGraphGo provides a concise graph‑based orchestration layer while leaving business logic fully in the developer’s hands, resulting in:
✅ Minimal dependencies – only the core graph API.
✅ Transparent control – explicit node/edge definitions.
✅ Flexible customization – plug any Go code as node logic.
✅ Native Go concurrency – effortless parallelism.
The BettaFish architecture therefore combines the power of AI multi‑agent reasoning with the performance and simplicity of native Go.
Resources
Original Python BettaFish: https://github.com/666ghj/BettaFish
LangGraphGo framework: https://github.com/smallnest/langgraphgo
LangChainGo (LLM utilities): https://github.com/tmc/langchaingo
Tavily Search API: https://www.tavily.com/
OpenAI API docs: https://platform.openai.com/docs
Community support is available via GitHub Issues and Discussions for LangGraphGo.
BirdNest Tech Talk
Author of the rpcx microservice framework, original book author, and chair of Baidu's Go CMC committee.