Why nanobot’s Minimal Agent Runtime Outperforms OpenClaw’s 430k‑Line Codebase
The article dissects nanobot’s lean 4‑5k‑line architecture, contrasting it with OpenClaw’s 430k‑line implementation, and explains how its message‑bus, AgentLoop, ContextBuilder, tool registry, and proactive Cron/Heartbeat components create a readable, controllable, and extensible AI agent runtime.
nanobot is a minimal yet fully functional agent runtime that fits in roughly 4‑5 k lines of code, compared with OpenClaw’s massive 430 k‑line codebase. It demonstrates exactly what has been removed and what has been retained, providing a clear, readable, and controllable skeleton for building agents.
https://github.com/HKUDS/nanobot
Core Design Goals
You can read the entire code chain from start to finish because the codebase is small.
You can pinpoint where a problem occurs thanks to clear boundaries.
You can safely add capabilities piece by piece because components are replaceable.
Architecture Overview
nanobot structures an agent as a pipeline driven by a MessageBus and an AgentLoop , with Cron and Heartbeat handling proactive behavior.
Key Components
Entry: Channels unify messages from Telegram, WhatsApp, and Feishu into InboundMessage.
Core: MessageBus decouples receiving and sending messages.
AgentLoop: Drives the LLM ↔ tool loop.
ContextBuilder: Assembles system prompts from markdown files (AGENTS.md, SOUL.md, USER.md, TOOLS.md, IDENTITY.md) and optional memory files.
Extensibility: ToolRegistry registers tools with JSON‑Schema validation; tools become plug‑in, verifiable components.
Proactivity: Cron runs scheduled jobs; Heartbeat wakes the agent periodically.
AgentLoop Details
Take a message from the inbound queue.
Load or create a session and fetch recent history.
Use ContextBuilder to build the system prompt and combine it with history and the current user message.
Call the LLM, passing tool function schemas.
If the model returns tool_calls, execute each tool, feed the results back as tool messages, and continue the loop.
If the model returns no further tool calls, write the final content to the session and push an OutboundMessage.
The loop separates "thinking" (LLM decides what to do) from "doing" (tools execute), providing a reliable evidence chain for debugging and audit.
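The steps above can be condensed into a short sketch. All names here (run_turn, the message dicts, the registry interface) are illustrative stand-ins, not nanobot's actual API:

```python
# Minimal sketch of one AgentLoop turn; names are illustrative.
def run_turn(llm_chat, tool_registry, history, user_message):
    """Drive one LLM <-> tool loop until the model stops calling tools."""
    messages = history + [{"role": "user", "content": user_message}]
    while True:
        # llm_chat returns {"content": ..., "tool_calls": [...]}
        reply = llm_chat(messages)
        messages.append(reply)
        calls = reply.get("tool_calls") or []
        if not calls:
            # "Thinking" is done: the final content goes back to the user.
            return reply["content"], messages
        for call in calls:
            # "Doing": execute each tool and feed the result back
            # as a tool message, building an auditable evidence chain.
            result = tool_registry[call["name"]](**call["arguments"])
            messages.append({"role": "tool", "name": call["name"],
                             "content": str(result)})
```

The returned transcript is exactly the evidence chain the article describes: every decision (tool_calls) sits next to its outcome (tool messages).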
MessageBus
Two asyncio queues (self.inbound and self.outbound) decouple channels from the agent. Channels only produce InboundMessage; the AgentLoop consumes them and produces OutboundMessage, which the ChannelManager routes back to the appropriate platform.
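A minimal sketch of this two-queue design (the message fields are assumptions, not nanobot's exact types):

```python
import asyncio
from dataclasses import dataclass

# Illustrative message types; nanobot's real fields may differ.
@dataclass
class InboundMessage:
    channel: str
    sender: str
    text: str

@dataclass
class OutboundMessage:
    channel: str
    recipient: str
    text: str

class MessageBus:
    """Channels put InboundMessage on inbound; the agent loop consumes
    them and puts OutboundMessage on outbound for the channel manager."""
    def __init__(self) -> None:
        self.inbound: asyncio.Queue = asyncio.Queue()
        self.outbound: asyncio.Queue = asyncio.Queue()
```

Because producers and consumers only share the bus, a new channel never touches agent code, and the agent never imports platform SDKs.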
Sub‑Agent (Spawn) Mechanism
The spawn tool launches a child agent to handle a sub‑task (e.g., reading a batch of files). The child returns its result as a system message, which the parent agent then summarizes for the user. This keeps each tool’s scope narrow and its permissions minimal.
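The mechanism can be sketched as follows; run_agent and the message shape are hypothetical, chosen only to show how the child's result re-enters the parent's context:

```python
# Hypothetical sketch of a spawn-style tool: run a child agent on a
# narrow sub-task with a restricted tool set, and hand its result back
# to the parent as a single system message.
def spawn(run_agent, task: str, allowed_tools: list[str]) -> dict:
    """run_agent is any callable that executes a full child-agent turn;
    the child sees only the sub-task and the tools it was granted."""
    result = run_agent(task, allowed_tools)
    # The parent receives the child's output as a system message and
    # summarizes it for the user.
    return {"role": "system", "content": f"[subagent result] {result}"}
```

Keeping the child's tool list explicit is what makes each tool's scope narrow and its permissions minimal.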
Gateway
Running nanobot gateway starts four processes in one runtime: AgentLoop, ChannelManager (Telegram/WhatsApp/Feishu), CronService (reading ~/.nanobot/cron/jobs.json), and HeartbeatService (default 30‑minute wake‑up using HEARTBEAT.md).
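The article doesn't show the cron file's schema; a plausible shape for ~/.nanobot/cron/jobs.json, inferred from the CLI flags described later (name, message, cron expression), might be:

```json
{
  "jobs": [
    {
      "id": "a1b2",
      "name": "daily",
      "cron": "0 9 * * *",
      "message": "Good morning!"
    }
  ]
}
```

All field names here are assumptions; check the repository for the actual schema.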
ContextBuilder & Bootstrap Files
Bootstrap files (AGENTS.md, SOUL.md, USER.md, TOOLS.md, IDENTITY.md) are loaded and concatenated with optional memory files to form the system prompt. This file‑based approach makes rules versionable, context editable, and session‑independent.
BOOTSTRAP_FILES = ["AGENTS.md", "SOUL.md", "USER.md", "TOOLS.md", "IDENTITY.md"]

def build_system_prompt(self, skill_names: list[str] | None = None) -> str:
    parts = []
    parts.append(self._get_identity())
    bootstrap = self._load_bootstrap_files()
    if bootstrap:
        parts.append(bootstrap)
    memory = self.memory.get_memory_context()
    if memory:
        parts.append(f"# Memory\n{memory}")
    return "\n---\n".join(parts)

SkillsLoader (Progressive Loading)
Only always‑true skills are placed directly into the system prompt. Other skills are summarized (name, description, path) and loaded on demand via read_file, preventing prompt overflow.
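A sketch of this progressive-loading idea (the Skill fields and render_skills function are assumptions, not nanobot's actual SkillsLoader API):

```python
from dataclasses import dataclass

# Illustrative skill record; real SkillsLoader fields may differ.
@dataclass
class Skill:
    name: str
    description: str
    path: str
    always: bool = False   # always-true skills are inlined verbatim
    body: str = ""

def render_skills(skills: list[Skill]) -> str:
    """Inline always-on skills; list the rest as one-line summaries
    the model can open later via read_file, keeping the prompt small."""
    parts = []
    for s in skills:
        if s.always:
            parts.append(s.body)
        else:
            parts.append(f"- {s.name}: {s.description} (load: {s.path})")
    return "\n".join(parts)
```

The trade-off is one extra tool round-trip (read_file) per on-demand skill, in exchange for a system prompt that stays bounded as the skill library grows.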
MemoryStore
Provides two levels of persistence:
Daily notes stored as memory/YYYY-MM-DD.md.
Long‑term notes in memory/MEMORY.md.
While sufficient for basic recall, it lacks a searchable long‑term memory layer.
def get_memory_context(self) -> str:
    parts = []
    long_term = self.read_long_term()
    if long_term:
        parts.append("## Long-term Memory\n" + long_term)
    today = self.read_today()
    if today:
        parts.append("## Today's Notes\n" + today)
    return "\n".join(parts) if parts else ""

Tool System
All tools are registered in ToolRegistry, which generates JSON‑Schema function definitions for the LLM. Parameter validation ensures type safety and produces readable error messages that the model can correct.
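A minimal registry with parameter validation might look like this. It is a sketch, not nanobot's actual classes, and it hand-rolls a few type checks rather than using a full JSON-Schema validator:

```python
# Sketch of a tool registry with lightweight parameter validation.
class ToolError(Exception):
    """Raised with a readable message the model can act on to self-correct."""

class ToolRegistry:
    def __init__(self):
        self._tools = {}

    def register(self, name, func, schema):
        """schema: a JSON-Schema-style dict of {param: {"type": ...}}."""
        self._tools[name] = (func, schema)

    def schemas(self):
        # Function definitions in the shape passed to the LLM.
        return [{"name": n, "parameters": s} for n, (_, s) in self._tools.items()]

    def execute(self, name, args):
        func, schema = self._tools[name]
        types = {"string": str, "integer": int, "number": (int, float)}
        for param, spec in schema.items():
            if param not in args:
                raise ToolError(f"missing required parameter: {param}")
            if not isinstance(args[param], types[spec["type"]]):
                raise ToolError(f"{param}: expected {spec['type']}")
        return func(**args)
```

The point of raising readable errors instead of crashing is that the error text goes back to the model as a tool message, which can then retry with corrected arguments.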
def _register_default_tools(self) -> None:
    self.tools.register(ReadFileTool())
    self.tools.register(WriteFileTool())
    self.tools.register(EditFileTool())
    self.tools.register(ListDirTool())
    self.tools.register(ExecTool(...))
    self.tools.register(WebSearchTool(...))
    self.tools.register(WebFetchTool())
    self.tools.register(MessageTool(...))
    self.tools.register(SpawnTool(...))

Web Fetch Details
The web_fetch tool returns a JSON object containing finalUrl, status, extractor, truncated, and text, giving the agent structured evidence about the fetched content.
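Based on the fields listed above, a result might look roughly like the following; the values (and the extractor name) are illustrative, not actual tool output:

```json
{
  "finalUrl": "https://example.com/post",
  "status": 200,
  "extractor": "readability",
  "truncated": false,
  "text": "First paragraphs of the extracted article..."
}
```

Returning structure instead of raw HTML lets the model cite the final URL and status as evidence, and notice when truncation may have cut off relevant content.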
LiteLLM Provider (Multi‑Model Routing)
nanobot uses LiteLLM to route requests to OpenRouter, Anthropic, OpenAI, DeepSeek, Gemini, Groq, or a local vLLM instance. Configuration is a simple JSON file specifying provider API keys and default models.
{
  "providers": {
    "openrouter": {"apiKey": "sk-or-v1-xxx"},
    "vllm": {"apiKey": "dummy", "apiBase": "http://localhost:8000/v1"}
  },
  "agents": {
    "defaults": {"model": "anthropic/claude-opus-4-5"}
  }
}

Proactivity: Cron & Heartbeat
Cron adds scheduled jobs that send a synthetic user message to the agent at a specified cron expression. Heartbeat runs every 30 minutes, reads HEARTBEAT.md, and triggers the agent only if there is pending work.
# Add a daily job
nanobot cron add --name "daily" --message "Good morning!" --cron "0 9 * * *"
# List jobs
nanobot cron list
# Remove a job
nanobot cron remove JOB_ID

Channel Implementations
Telegram: Long polling, media saved to ~/.nanobot/media, optional Whisper transcription.
WhatsApp: Uses a Node.js bridge (@whiskeysockets/baileys); voice messages are not yet downloadable.
Feishu: WebSocket long‑connection, message deduplication, automatic read‑receipt reactions.
All channels abstract their specifics into InboundMessage / OutboundMessage and respect an allowFrom whitelist for security.
What to Copy & What to Improve
Copy the message‑bus decoupling, clear tool‑call lifecycle, file‑based context, and lightweight proactivity.
Improve long‑term memory retrieval, strengthen exec guardrails (allowlist, audit), make session storage workspace‑portable, and add parallel tool execution strategies.
Conclusion
nanobot offers a compact, readable skeleton that lets you run a closed‑loop agent quickly and then iteratively add safety, memory, and workflow features. Recommended reading order:
nanobot/agent/loop.py – understand the main loop.
nanobot/agent/context.py – see how prompts are built.
nanobot/agent/tools/* – explore the tool system and security.
nanobot/cron/* and nanobot/heartbeat/* – learn the proactivity mechanisms.
nanobot/channels/* – add new entry points when needed.