Why OpenClaw Gets Smarter Over Time: Inside Its Self‑Evolving Markdown Engine
This article explains how OpenClaw's "the more you use it, the smarter it gets" effect stems from a self‑evolving markdown file system that records agent identity, user profile, skills, memory, and lessons, enabling continuous learning, precise retrieval, and personalized AI behavior without changing the underlying model.
1. Why many think OpenClaw is hard to use
Most complaints arise from misuse rather than flaws in the product. Three common pitfalls are:
Choosing the wrong model – the same prompt yields vastly different results with different models; OpenClaw merely orchestrates the model.
Treating an agent as a generalist – assigning all tasks to a single agent ignores the benefit of specialized agents with isolated workspaces, memory, and sessions.
Not "training" the agent – an agent improves only after you repeatedly interact, correct mistakes, and let it record those corrections as SOPs (standard operating procedures) in markdown files.
2. Core mechanism: a self‑evolving markdown file system
OpenClaw loads a set of core markdown files into the system prompt before each conversation and writes new knowledge back to those files afterward, forming a powerful feedback loop.
The loop can be visualised as:
Conversation start
→ Load all core .md files into system prompt
→ Agent performs memory search
→ Agent executes task
→ Agent writes new insights, errors, or preferences back to AGENTS.md, USER.md, memory/*.md, MEMORY.md
→ File changes trigger SQLite FTS5 + vector index rebuild
Conversation end
Next conversation start
→ Load updated .md files
→ Retrieve newly indexed memories
→ Agent behaves more accurately
→ Repeat
The architecture relies on seven predefined markdown files:
SOUL.md – defines the agent’s persona, tone, values, and evolves as the agent learns about itself.
USER.md – stores a dynamic profile of the user (name, timezone, preferences, etc.).
AGENTS.md – records behavior rules, known pitfalls, and lessons learned.
TOOLS.md – lists environment details such as hostnames, device names, and file‑path conventions.
SKILL.md (multiple) – contains domain‑specific operation manuals; OpenClaw ships with 52 built‑in skills but users can add custom ones.
memory/*.md – daily logs created automatically, indexed in SQLite for full‑text and vector search.
MEMORY.md – a distilled long‑term memory derived from the daily logs and loaded on every prompt.
These files form a simple yet extensible knowledge base that grows with use.
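The read‑at‑start, write‑at‑end cycle can be sketched in a few lines of Python. The file names come from the article; the helper functions themselves are illustrative, not OpenClaw's actual code:

```python
# Minimal sketch of the read -> act -> write-back loop.
# CORE_FILES names are from the article; everything else is an assumption.
from pathlib import Path

CORE_FILES = ["SOUL.md", "USER.md", "AGENTS.md", "TOOLS.md", "MEMORY.md"]

def build_system_prompt(workspace: Path) -> str:
    """Concatenate the core markdown files into one system prompt."""
    parts = []
    for name in CORE_FILES:
        path = workspace / name
        if path.exists():
            parts.append(f"## {name}\n{path.read_text(encoding='utf-8')}")
    return "\n\n".join(parts)

def write_back(workspace: Path, insight: str) -> None:
    """Append a new lesson so the next conversation starts smarter."""
    with (workspace / "AGENTS.md").open("a", encoding="utf-8") as f:
        f.write(f"\n- {insight}")
```

Because the write‑back lands in plain files, every correction you make survives the session and is loaded into the very next prompt.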
Example workspace layout:
workspace/
├── SOUL.md
├── USER.md
├── AGENTS.md
├── TOOLS.md
├── MEMORY.md
├── memory/
│ ├── 2026-03-01.md
│ └── 2026-03-02.md
├── projects/
│ ├── project-alpha/
│ │ ├── progress.md
│ │ ├── decisions.md
│ │ └── risks.md
│ └── project-beta/
│ └── progress.md
├── templates/
│ ├── weekly-report.md
│ └── meeting-notes.md
└── contacts/
└── team-preferences.md
3. The self‑evolution closed loop
Two nested loops drive learning:
Outer loop – read markdown files at conversation start, write updates at conversation end, thereby accumulating experience.
Inner loop – perform a hybrid memory search (≈70% vector similarity + 30% keyword match) with optional MMR diversification and time decay, ensuring relevant recent memories are prioritized.
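The inner‑loop scoring can be sketched as follows. The ~70/30 weighting is from the article; the exponential time decay with a 30‑day half‑life is a hypothetical choice, and MMR diversification is omitted for brevity:

```python
# Illustrative hybrid-search scoring: 70/30 weights are from the article,
# the decay formula and half-life are assumptions.
import math

def hybrid_score(vector_sim: float, keyword_score: float,
                 age_days: float, half_life_days: float = 30.0) -> float:
    """Blend ~70% vector similarity with ~30% keyword match,
    then discount older memories with exponential time decay."""
    base = 0.7 * vector_sim + 0.3 * keyword_score
    decay = 0.5 ** (age_days / half_life_days)  # halves every half_life_days
    return base * decay
```

Under this scheme a moderately relevant memory from yesterday can outrank a slightly more relevant one from two months ago, which matches the stated goal of prioritizing recent context.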
4. Key implementation details
Bootstrap loading – the function resolveBootstrapContextForRun() reads core files, filters by session type, allows plugin hooks, and enforces a 20 KB per‑file and 150 KB total size limit, forcing agents to summarise important information.
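The size guard can be sketched like this. The 20 KB / 150 KB limits are from the article; the function shape is hypothetical (the real logic lives in resolveBootstrapContextForRun()):

```python
# Sketch of the bootstrap size guard; limits from the article,
# implementation assumed. Counts characters as an approximation of bytes.
from pathlib import Path

PER_FILE_LIMIT = 20 * 1024    # per markdown file
TOTAL_LIMIT = 150 * 1024      # across all bootstrap files

def load_bootstrap_files(paths: list[Path]) -> str:
    parts, total = [], 0
    for path in paths:
        text = path.read_text(encoding="utf-8")[:PER_FILE_LIMIT]  # truncate oversized files
        if total + len(text) > TOTAL_LIMIT:                        # stop once the budget is spent
            break
        parts.append(text)
        total += len(text)
    return "\n\n".join(parts)
```

The hard budget is what pushes agents to distill: anything worth remembering long‑term must fit in a summarized form, not raw logs.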
Skill priority chain – skills are discovered from six sources, lowest to highest priority: plugin‑provided, built‑in, hosted (~/.openclaw/skills/), personal (~/.agents/skills/), project‑specific ({workspace}/.agents/skills/), and workspace‑level ({workspace}/skills/). The highest‑priority skill can override any lower‑level behaviour.
Self‑destructing bootstrap – on first run the agent follows BOOTSTRAP.md to create IDENTITY.md, USER.md, and SOUL.md, then deletes BOOTSTRAP.md, marking the initialization as complete.
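A minimal sketch of that first‑run sequence (in the real system the agent fills the new files through conversation; here they are simply created empty):

```python
# Self-destructing bootstrap sketch: file names from the article,
# the function itself is illustrative.
from pathlib import Path

def bootstrap_if_needed(workspace: Path) -> bool:
    bootstrap = workspace / "BOOTSTRAP.md"
    if not bootstrap.exists():
        return False  # already initialized on a previous run
    for name in ("IDENTITY.md", "USER.md", "SOUL.md"):
        (workspace / name).touch(exist_ok=True)  # the real agent fills these in
    bootstrap.unlink()  # deleting BOOTSTRAP.md marks initialization complete
    return True
```

Using the file's absence as the "initialized" flag keeps all state in the workspace itself, with no separate config database.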
5. What this means
Your agent’s value lives in the workspace markdown files; they encode preferences, workflows, and project context.
Training the agent is as simple as writing or editing markdown – no programming or prompt‑engineering required.
Differences between agents are differences in their markdown knowledge bases.
The approach is likely a universal pattern for AI‑agent products: persistent, searchable markdown files drive continual improvement.
6. Practical recommendations
Proactively create SOPs – write a SKILL.md for any repeatable task so the agent follows it automatically.
Regularly audit workspace files – correct outdated entries in AGENTS.md or USER.md.
Leverage multiple agents – assign distinct domains to separate agents to keep knowledge vertical and pure.
Choose a strong underlying model – a well‑tuned model is essential; markdown alone cannot compensate for a weak model.
Back up your workspace – treat it as a valuable digital asset; OpenClaw tracks changes with Git, so push to a remote repository regularly.
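For the first recommendation, a hypothetical SKILL.md for the weekly‑report task in the workspace layout above might look like this (the content is illustrative, not a shipped skill):

```markdown
# Skill: weekly-report

## When to use
Every Friday, or when the user asks for a weekly summary.

## Steps
1. Read this week's entries in memory/*.md.
2. Summarize progress per project from projects/*/progress.md.
3. Fill in templates/weekly-report.md and present it for review.

## Known pitfalls
- Do not include confidential project names in external reports.
```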
Conclusion
OpenClaw’s source code spans hundreds of thousands of lines, but the “the more you use it, the smarter it gets” effect boils down to a simple markdown read‑write cycle. The code provides the pipeline (model calls, tool execution, memory indexing); the markdown files supply the evolving knowledge that determines how well the system performs.
Alibaba Cloud Developer
Alibaba's official tech channel, featuring all of its technology innovations.
