Blackboard System: Enabling Dynamic Collaboration Among Expert AI Agents
This article compares a rigid sequential multi‑agent pipeline with a flexible blackboard architecture, showing how shared memory and a dynamic controller let specialist AI agents cooperate opportunistically, obey conditional user instructions, and achieve higher efficiency and instruction‑following scores.
Why Sequential Multi‑Agent Systems Fail
When an AI team is asked to fetch the latest news about a company and then perform either technical analysis (if the news is positive) or financial analysis (if negative), a traditional sequential multi‑agent system executes every step regardless of the condition, wasting resources and producing irrelevant output.
Blackboard System Overview
The blackboard system draws inspiration from experts gathering around a physical board. In AI it consists of shared memory (the blackboard) and a dynamic scheduler (the controller) that decides which specialist agent should act next based on the current state.
Blackboard : a central data store where agents write and read partial results.
Specialist agents : e.g., news analyst, technical analyst, financial analyst, report writer. They act only when the blackboard triggers their expertise.
Controller : continuously observes the blackboard and the user request, then selects the next-best agent, forming an opportunistic, emergent workflow.
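The control cycle can be illustrated with a minimal, LLM-free sketch: a shared list as the board, stub specialists, and a rule-based controller. All names and strings here are illustrative, not part of the implementation below.

```python
# Minimal blackboard loop: a shared list, stub specialists, and a
# rule-based controller that picks the next agent from the board state.
def news_agent(board):
    board.append("news: NVIDIA earnings beat expectations (positive)")

def technical_agent(board):
    board.append("technical: uptrend confirmed on the daily chart")

def report_writer(board):
    board.append("report: " + " | ".join(board))

def controller(board):
    """Pick the next specialist based on what is already on the board."""
    text = " ".join(board)
    if "news:" not in text:
        return news_agent
    if "positive" in text and "technical:" not in text:
        return technical_agent
    if "report:" not in text:
        return report_writer
    return None  # nothing left to do

blackboard = []
for _ in range(10):  # a hard step cap guards against controller loops
    agent = controller(blackboard)
    if agent is None:
        break
    agent(blackboard)

print(blackboard[-1])  # the synthesized report
```

Note that the conditional branch (technical analysis only on positive sentiment) lives in the controller, not in a fixed edge list; that is the essence of the architecture.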
Strengths and Weaknesses
Flexibility & Adaptability : workflow emerges from the problem, not hard‑coded.
Modularity : agents can be added or removed without redesign.
Controller design difficulty : a weak controller leads to inefficiency or loops.
Debugging complexity : non‑linear execution paths are harder to trace.
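The debugging pain can be softened by recording every controller decision so a non-linear run can be replayed afterwards. A minimal sketch (all names illustrative):

```python
# A thin wrapper that records every controller decision, so the emergent,
# non-linear execution path can be inspected after the run.
decision_log = []

def audited(controller):
    def wrapper(board):
        choice = controller(board)
        name = choice.__name__ if choice else "DONE"
        decision_log.append({"step": len(decision_log), "chose": name, "board_items": len(board)})
        return choice
    return wrapper

def write_note(board):
    board.append("note")

@audited
def toy_controller(board):
    # Toy policy: write one note, then finish.
    return None if board else write_note

board = []
while True:
    agent = toy_controller(board)
    if agent is None:
        break
    agent(board)

print(decision_log)
# [{'step': 0, 'chose': 'write_note', 'board_items': 0},
#  {'step': 1, 'chose': 'DONE', 'board_items': 1}]
```

The same idea carries over to the LangGraph version below, where LangSmith tracing plays this role automatically.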
Hands‑On Implementation
Stage 0 – Preparation
We use Nebius as the LLM provider, Tavily for web search, and LangSmith for tracing.
# Install dependencies
# !pip install -q -U langchain-nebius langchain langgraph rich python-dotenv langchain-tavily
import os
from typing import List, TypedDict, Optional
from dotenv import load_dotenv
from langchain_nebius import ChatNebius
from langchain_tavily import TavilySearch
from langchain_core.messages import BaseMessage, SystemMessage, HumanMessage
from pydantic import BaseModel, Field
from langchain_core.prompts import ChatPromptTemplate
from langgraph.graph import StateGraph, END
from rich.console import Console
from rich.markdown import Markdown
load_dotenv()
os.environ["LANGCHAIN_TRACING_V2"] = "true"
os.environ["LANGCHAIN_PROJECT"] = "Agentic Architecture - Blackboard (Nebius)"
for key in ["NEBIUS_API_KEY", "LANGCHAIN_API_KEY", "TAVILY_API_KEY"]:
    if not os.environ.get(key):
        print(f"⚠️ Missing {key} in .env")
print("✅ Environment loaded")
console = Console()
llm = ChatNebius(model="meta-llama/Meta-Llama-3.1-70B-Instruct")  # any tool-calling chat model hosted on Nebius works here
search_tool = TavilySearch(max_results=3)
Stage 1 – Baseline Sequential System
We build a pipeline: news analyst → technical analyst → financial analyst → report writer, each node receiving the previous node’s output.
class SequentialState(TypedDict):
    user_request: str
    news_report: Optional[str]
    technical_report: Optional[str]
    financial_report: Optional[str]
    final_report: Optional[str]
def news_analyst_node_seq(state: SequentialState):
    console.print("--- (Sequential) Calling news analyst ---")
    prompt = (
        "You are a news analyst. Find the latest major news for the user request "
        "and give a concise summary.\n"
        f"User request: {state['user_request']}"
    )
    agent = llm.bind_tools([search_tool])
    result = agent.invoke(prompt)
    return {"news_report": result.content}
# similar definitions for technical_analyst_node_seq, financial_analyst_node_seq, report_writer_node_seq
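The elided nodes can be sketched along the same lines as the news analyst. This assumes the `llm`, `search_tool`, and `console` objects from Stage 0 are in scope; the prompt wording is illustrative, not the original article's.

```python
def technical_analyst_node_seq(state):
    console.print("--- (Sequential) Calling technical analyst ---")
    prompt = (
        "You are a technical analyst. Using the news below, analyze price action and trends.\n"
        f"News report: {state['news_report']}"
    )
    result = llm.bind_tools([search_tool]).invoke(prompt)
    return {"technical_report": result.content}

def financial_analyst_node_seq(state):
    console.print("--- (Sequential) Calling financial analyst ---")
    prompt = (
        "You are a financial analyst. Using the news below, assess fundamentals and risk.\n"
        f"News report: {state['news_report']}"
    )
    result = llm.bind_tools([search_tool]).invoke(prompt)
    return {"financial_report": result.content}

def report_writer_node_seq(state):
    console.print("--- (Sequential) Calling report writer ---")
    prompt = (
        "You are a report writer. Combine the reports below into one final markdown report.\n"
        f"News: {state['news_report']}\nTechnical: {state['technical_report']}\n"
        f"Financial: {state['financial_report']}"
    )
    result = llm.invoke(prompt)  # no tools needed for synthesis
    return {"final_report": result.content}
```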
seq_graph_builder = StateGraph(SequentialState)
seq_graph_builder.add_node("news", news_analyst_node_seq)
seq_graph_builder.add_node("tech", technical_analyst_node_seq)
seq_graph_builder.add_node("finance", financial_analyst_node_seq)
seq_graph_builder.add_node("writer", report_writer_node_seq)
seq_graph_builder.set_entry_point("news")
seq_graph_builder.add_edge("news", "tech")
seq_graph_builder.add_edge("tech", "finance")
seq_graph_builder.add_edge("finance", "writer")
seq_graph_builder.add_edge("writer", END)
sequential_app = seq_graph_builder.compile()
print("✅ Sequential system compiled")
Stage 2 – Test Sequential System with Conditional Query
Query: "Find the latest major news about NVIDIA. If the sentiment is neutral or positive, do technical analysis; if negative, do financial analysis."
dynamic_query = "Find the latest major news about NVIDIA. If the sentiment is neutral or positive, do technical analysis; if negative, do financial analysis."
final_seq_output = sequential_app.invoke({"user_request": dynamic_query})
console.print(Markdown(final_seq_output['final_report']))
Result: the sequential system produces a complete report but runs both technical and financial analyses, ignoring the conditional logic.
Stage 3 – Build Blackboard System
Key components:
Shared blackboard (list of strings).
Intelligent controller that reads the blackboard and decides the next agent.
Dynamic routing back to the controller after each specialist finishes.
class BlackboardState(TypedDict):
    user_request: str
    blackboard: List[str]
    available_agents: List[str]
    next_agent: Optional[str]

class ControllerDecision(BaseModel):
    # Agent names are Chinese labels: 新闻分析师 = news analyst, 技术分析师 = technical analyst,
    # 财务分析师 = financial analyst, 报告撰写员 = report writer, 完成 = done.
    next_agent: str = Field(description="One of ['新闻分析师','技术分析师','财务分析师','报告撰写员','完成']")
    reasoning: str = Field(description="Why this agent is chosen")
def create_blackboard_specialist(persona: str, agent_name: str):
    system_prompt = (
        f"You are a specialist AI: {persona}. Read the user request and current "
        f"blackboard, use tools, and publish a concise markdown report signed as '{agent_name}'."
    )
    prompt_template = ChatPromptTemplate.from_messages([
        ("system", system_prompt),
        ("human", "User request: {user_request}\nBlackboard (previous reports):\n{blackboard_str}"),
    ])
    agent = prompt_template | llm.bind_tools([search_tool])

    def specialist_node(state: BlackboardState):
        console.print(f"--- (Blackboard) Agent '{agent_name}' working... ---")
        blackboard_str = "\n---\n".join(state["blackboard"])
        result = agent.invoke({"user_request": state["user_request"], "blackboard_str": blackboard_str})
        report = f"**Report from {agent_name}:**\n{result.content}"
        return {"blackboard": state["blackboard"] + [report]}

    return specialist_node
def controller_node(state: BlackboardState):
    console.print("--- Controller: analyzing blackboard... ---")
    controller_llm = llm.with_structured_output(ControllerDecision)
    blackboard_content = "\n".join(state["blackboard"])
    prompt = (
        "You are the central controller. Based on the original user request and the "
        "current blackboard, decide which specialist should run next.\n"
        f"User request: {state['user_request']}\n"
        f"Current blackboard: {blackboard_content if blackboard_content else 'Blackboard is empty.'}\n"
        f"Available agents: {', '.join(state['available_agents'])}\n"
        "Return the next agent name and a brief reasoning."
    )
    decision = controller_llm.invoke(prompt)
    console.print(f"--- Controller decides to call '{decision.next_agent}'. Reason: {decision.reasoning} ---")
    return {"next_agent": decision.next_agent}
# Build the graph. Node names are Chinese labels:
# 新闻分析师 = news analyst, 技术分析师 = technical analyst,
# 财务分析师 = financial analyst, 报告撰写员 = report writer.
bb_graph_builder = StateGraph(BlackboardState)
bb_graph_builder.add_node("Controller", controller_node)
news_analyst_bb = create_blackboard_specialist("新闻分析师", "新闻分析师")
technical_analyst_bb = create_blackboard_specialist("技术分析师", "技术分析师")
financial_analyst_bb = create_blackboard_specialist("财务分析师", "财务分析师")
# Persona: "a report writer responsible for synthesizing the final answer from the blackboard"
report_writer_bb = create_blackboard_specialist("负责从黑板综合出最终答案的报告撰写员", "报告撰写员")
bb_graph_builder.add_node("新闻分析师", news_analyst_bb)
bb_graph_builder.add_node("技术分析师", technical_analyst_bb)
bb_graph_builder.add_node("财务分析师", financial_analyst_bb)
bb_graph_builder.add_node("报告撰写员", report_writer_bb)
bb_graph_builder.set_entry_point("Controller")
def route_to_agent(state: BlackboardState):
    return state["next_agent"]
bb_graph_builder.add_conditional_edges(
"Controller",
route_to_agent,
{
"新闻分析师": "新闻分析师",
"技术分析师": "技术分析师",
"财务分析师": "财务分析师",
"报告撰写员": "报告撰写员",
"完成": END,
},
)
# Return to controller after each specialist
for name in ["新闻分析师", "技术分析师", "财务分析师", "报告撰写员"]:
    bb_graph_builder.add_edge(name, "Controller")
blackboard_app = bb_graph_builder.compile()
print("✅ Blackboard system compiled")
Stage 4 – Direct Comparison
Running the same dynamic_query on the blackboard system yields a concise execution trace:
The controller activates the news analyst, which fetches the latest news.
Seeing positive sentiment, the controller selects the technical analyst (skipping the financial analyst entirely).
After the technical analysis, the controller calls the report writer to synthesize the final answer.
The controller then marks the task as completed.
The blackboard system respects the conditional logic and avoids unnecessary work.
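The run traced above can be launched with an invocation along these lines (a sketch: the roster strings must match the node names registered in Stage 3, and "完成" is the controller's "done" sentinel):

```python
# Seed state for the blackboard run: empty board plus the roster of
# agents the controller may choose from ("完成" = done).
dynamic_query = (
    "Find the latest major news about NVIDIA. If the sentiment is neutral or "
    "positive, do technical analysis; if negative, do financial analysis."
)
initial_state = {
    "user_request": dynamic_query,
    "blackboard": [],
    "available_agents": ["新闻分析师", "技术分析师", "财务分析师", "报告撰写员", "完成"],
    "next_agent": None,
}
# Requires the compiled graph and the API keys from the stages above:
# final_bb_output = blackboard_app.invoke(initial_state)
# console.print(Markdown(final_bb_output["blackboard"][-1]))  # final report
```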
Stage 5 – Quantitative Evaluation
We let an LLM act as a judge, scoring two dimensions: instruction‑following and process efficiency (1‑10). Sample schema:
class ProcessLogicEvaluation(BaseModel):
    instruction_following_score: int = Field(description="Score 1‑10 for obeying conditional user instructions.")
    process_efficiency_score: int = Field(description="Score 1‑10 for avoiding unnecessary steps.")
    justification: str = Field(description="Brief explanation based on the trace.")
Evaluation results (illustrative):
Sequential system: low scores (≈2/10 instruction following, ≈3/10 efficiency) because it runs all analysts.
Blackboard system: near‑perfect scores (≈10/10 for both) as the controller selects only the needed agents.
On conditional tasks like this one, the gap illustrates the blackboard architecture's advantage: it spends model calls only where the instructions require them.
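The judge can be wired up roughly like this (a sketch assuming the `llm` from Stage 0 and the `ProcessLogicEvaluation` schema above; the prompt wording is illustrative):

```python
def evaluate_run(trace_summary: str):
    # Bind the Pydantic schema so the judge returns structured scores
    # (assumes `llm` and `ProcessLogicEvaluation` are defined earlier).
    judge = llm.with_structured_output(ProcessLogicEvaluation)
    prompt = (
        "You are an impartial process auditor. Score the following execution trace "
        "on instruction following and process efficiency (1-10 each), with a brief "
        "justification.\n"
        f"Trace:\n{trace_summary}"
    )
    return judge.invoke(prompt)

# Usage: pass each system's execution trace, then compare the two scores:
# seq_eval = evaluate_run(sequential_trace)
# bb_eval = evaluate_run(blackboard_trace)
```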
Final Thoughts
The blackboard system transforms a rigid pipeline into a flexible round‑table collaboration, leveraging shared memory and a smart controller to achieve dynamic, efficient workflows. Designing a robust controller and keeping blackboard content clear are the main challenges.
Core Recap Principle : Shared memory + dynamic scheduling lets experts emerge the optimal workflow. Practice : Implemented with LangGraph, compared against a sequential baseline, demonstrating flexibility and efficiency. Pitfalls : Controller quality caps performance; ambiguous blackboard entries hinder cooperation.
Data STUDIO
