Why Hermes’ Three‑Layer Learning Loop Outperforms OpenClaw’s Memory Design

This article dissects Hermes' three-layer learning mechanism (fact memory, SQLite/FTS5 session search, and procedural skill management), contrasts it with OpenClaw's architecture, and explains how placing auto-summarized skills at the right runtime layer determines an agent's true learning capability.


1. Introduction

Both OpenClaw and Hermes store long‑term information using Markdown files and SQLite/FTS retrieval, but the key question is whether adding automatic skill summarization to OpenClaw makes the two systems equivalent.

The decisive factor is not the presence of memory files, but how an agent classifies its long-term assets and at which runtime layer those assets live. This is the core of Hermes' three-layer learning mechanism.

2. Core Summary of Hermes

Hermes stores built‑in memory in MEMORY.md and USER.md (small Markdown files), not SQLite.

SQLite + FTS5 is used for session_search, acting as an archive rather than a personal notepad.

External memory providers (Honcho, Mem0, Supermemory, etc.) are additive layers and do not replace the built‑in files.

The most valuable component is skill_manage, which records “how to do” tasks as procedural memory, separate from factual memory.

The learning loop is best‑effort: primary tasks finish first, then a background review decides whether anything should be saved.

OpenClaw now includes Honcho, memory_search, and experimental dreaming, so its memory design should not be reduced to static Markdown files.

Understanding these points clarifies why many agents confuse three different kinds of “memory”. Hermes explicitly separates them, forming the basis of its three‑layer learning mechanism.

3. Three Types of Long‑Term Assets

3.1 Fact Memory (short cards)

Definition: information the agent must always know (preferences, environment constants, project paths).
Example: user preferences, server configuration, test procedures.
Role: answers "who am I / what is the environment"; the foundation for any work.

3.2 Session Retrieval (archive)

Definition: historical conversations or actions that are not kept in the active context but can be fetched on demand.
Example: "How did we fix that Docker network?"
Role: avoids repeating past explanations and saves time.

3.3 Procedural Memory (skills)

Definition: reusable workflows for recurring tasks, turning preferences into concrete procedures.
Example: PR review checklist, deployment troubleshooting steps, data-cleaning pipeline.
Role: defines "how to do this next time", enabling reuse of experience.

4. Hermes Three‑Layer Learning Mechanism

4.1 Layer 1 – Fact Memory (short cards)

Implemented as two Markdown files:
~/.hermes/memories/MEMORY.md: personal notes, environment facts, tool quirks, learned experience.
~/.hermes/memories/USER.md: user profile, communication style, expectations.

Default size limits: MEMORY.md ≈ 2,200 characters (≈ 800 tokens); USER.md ≈ 1,375 characters (≈ 500 tokens).

Frozen snapshot: at session start the files are loaded into a snapshot stored in system_prompt_snapshot. Writes are persisted to disk but do not alter the current system prompt, protecting the prompt cache.
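A minimal sketch of this frozen-snapshot pattern, assuming a small wrapper class; only the attribute name system_prompt_snapshot comes from the article, everything else is an illustrative guess rather than Hermes' actual code.

```python
from pathlib import Path

class MemorySnapshot:
    """Illustrative sketch: freeze memory files at session start."""

    def __init__(self, memory_dir: Path):
        self.memory_dir = memory_dir
        # Read MEMORY.md / USER.md once; the system prompt is built from
        # this snapshot and never re-read mid-session, so the prompt
        # cache stays valid even if the files change on disk.
        self.system_prompt_snapshot = {
            name: (memory_dir / name).read_text(encoding="utf-8")
            for name in ("MEMORY.md", "USER.md")
            if (memory_dir / name).exists()
        }

    def write(self, name: str, content: str) -> None:
        # Persist to disk for the next session, but leave the in-memory
        # snapshot (and thus the current system prompt) untouched.
        (self.memory_dir / name).write_text(content, encoding="utf-8")
```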

Security scan: tools/memory_tool.py defines MEMORY_THREAT_PATTERNS to block invisible Unicode, prompt‑injection, or credential‑leak patterns before writing.
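The article names MEMORY_THREAT_PATTERNS but does not publish the patterns themselves; the regexes below are illustrative stand-ins for the three categories it mentions, not the real list.

```python
import re

# Illustrative stand-ins for the checks attributed to MEMORY_THREAT_PATTERNS.
MEMORY_THREAT_PATTERNS = [
    # Invisible / bidirectional Unicode that can hide instructions.
    re.compile(r"[\u200b-\u200f\u202a-\u202e\u2066-\u2069]"),
    # Crude prompt-injection phrasing.
    re.compile(r"ignore (all )?previous instructions", re.IGNORECASE),
    # Credential-looking strings (e.g. AWS-style access key IDs).
    re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
]

def is_safe_to_write(text: str) -> bool:
    """Reject a memory write if any threat pattern matches."""
    return not any(p.search(text) for p in MEMORY_THREAT_PATTERNS)
```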

Atomic write: MemoryStore.write_file() writes to a temporary file, calls os.fsync(), then atomically replaces the original with os.replace(). File locking via fcntl.flock ensures exclusive access.
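The write path described above maps onto a standard POSIX pattern; this sketch shows that pattern with the calls the article names (os.fsync, os.replace, fcntl.flock), while the function shape and lock-file convention are assumptions.

```python
import fcntl
import os
import tempfile

def write_file_atomic(path: str, content: str) -> None:
    """Sketch of the tmp-file + fsync + os.replace pattern the article
    attributes to MemoryStore.write_file(); internals are assumed."""
    directory = os.path.dirname(path) or "."
    with open(path + ".lock", "w") as lock:
        # fcntl.flock gives this process exclusive access to the file.
        fcntl.flock(lock, fcntl.LOCK_EX)
        fd, tmp_path = tempfile.mkstemp(dir=directory)
        try:
            with os.fdopen(fd, "w") as tmp:
                tmp.write(content)
                tmp.flush()
                os.fsync(tmp.fileno())   # force bytes to disk first
            os.replace(tmp_path, path)   # atomic rename over the original
        except BaseException:
            os.unlink(tmp_path)          # never leave a partial temp file
            raise
        finally:
            fcntl.flock(lock, fcntl.LOCK_UN)
```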

Storage priority: MEMORY_SCHEMA orders saving as user corrections > environment facts > procedural knowledge, while task progress and transcripts stay in the session history.

4.2 Layer 2 – SQLite + FTS5 (archive)

Purpose: dedicated storage for cross‑session history retrieval, separate from fact memory.

Implementation details:

Database at ~/.hermes/state.db uses WAL mode for concurrent reads/writes.

Virtual table messages_fts provides full‑text search; triggers keep the index in sync with messages.
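A minimal sketch of the WAL plus external-content FTS5 arrangement described above; the table names messages and messages_fts come from the article, while the column layout and trigger names are assumptions.

```python
import sqlite3

# The article places the database at ~/.hermes/state.db.
conn = sqlite3.connect("state.db")
conn.executescript("""
PRAGMA journal_mode=WAL;  -- concurrent readers alongside a single writer

CREATE TABLE IF NOT EXISTS messages (
    id INTEGER PRIMARY KEY,
    session_id TEXT,
    content TEXT
);

-- External-content FTS5 index over messages.content.
CREATE VIRTUAL TABLE IF NOT EXISTS messages_fts
    USING fts5(content, content='messages', content_rowid='id');

-- Triggers keep the index in sync with the base table.
CREATE TRIGGER IF NOT EXISTS messages_ai AFTER INSERT ON messages BEGIN
    INSERT INTO messages_fts(rowid, content) VALUES (new.id, new.content);
END;
CREATE TRIGGER IF NOT EXISTS messages_ad AFTER DELETE ON messages BEGIN
    INSERT INTO messages_fts(messages_fts, rowid, content)
        VALUES ('delete', old.id, old.content);
END;
CREATE TRIGGER IF NOT EXISTS messages_au AFTER UPDATE ON messages BEGIN
    INSERT INTO messages_fts(messages_fts, rowid, content)
        VALUES ('delete', old.id, old.content);
    INSERT INTO messages_fts(rowid, content) VALUES (new.id, new.content);
END;
""")
```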

Retrieval flow (a code sketch follows this list):

FTS5 matches up to 50 results.

Parent session IDs are resolved to root sessions and aggregated.

Full transcript is formatted for display.

Matches are truncated around the hit point (default 100 k characters) with a preference order: phrase match > nearby co‑occurrence > single‑word fallback.

Summaries are generated asynchronously with a focused model via asyncio.gather.

Hidden mode returns recent titles and timestamps without invoking an LLM when no query is supplied.
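Putting the flow above together, a hedged sketch of the query side: the 50-result cap and the no-query hidden mode come from the article, while the sessions table and SQL shapes are assumptions, and the asynchronous summarization step is omitted.

```python
import sqlite3

def session_search(conn: sqlite3.Connection, query: str | None):
    """Sketch of the retrieval flow above; schema details are assumed."""
    if not query:
        # Hidden mode: recent titles and timestamps, no LLM involved.
        return conn.execute(
            "SELECT title, created_at FROM sessions "
            "ORDER BY created_at DESC LIMIT 20"
        ).fetchall()
    # FTS5 match, capped at 50 results; snippet() truncates each match
    # around the hit point.
    return conn.execute(
        "SELECT m.session_id, snippet(messages_fts, 0, '[', ']', '…', 16) "
        "FROM messages_fts JOIN messages m ON m.id = messages_fts.rowid "
        "WHERE messages_fts MATCH ? ORDER BY rank LIMIT 50",
        (query,),
    ).fetchall()
```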

4.3 Layer 3 – Procedural Memory (skill management)

Implemented by the skill_manage tool (tools/skill_manager_tool.py), which stores reusable workflows as "skills" and exposes the following operations:
create: new skill with YAML front-matter and Markdown body.
patch: precise string replacement.
edit: full rewrite of the skill file.
delete: remove the skill directory.
write_file / remove_file: manage supporting assets under references/, templates/, scripts/, assets/.
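The article does not show an actual skill file, so the layout below is a hypothetical illustration of "YAML front-matter plus Markdown body"; the field names are invented for the example.

```python
# Hypothetical skill file content; Hermes' real front-matter schema
# is not shown in the article.
SKILL_FILE = """\
---
name: docker-network-debug
description: Steps that fixed the recurring Docker network failure
---
1. Inspect the bridge network: `docker network inspect bridge`.
2. Restart the daemon only if DNS resolution fails inside containers.
"""
```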

Lifecycle governance:

Skill creation is triggered after complex tasks (≥ 5 tool calls) or after fixing hard‑to‑reproduce errors; outdated skills are patched immediately.

Review counters: memory review every 10 turns; skill review after 10 tool‑call iterations.

Background review agent (spawn_background_review()) forks a separate thread, shares the memory_store, runs up to 8 iterations, and selects the appropriate review prompt (memory, skill, or combined); a sketch follows this list.

Security guard (tools.skills_guard) scans every create/patch/edit/write operation; if should_allow_install returns false, the operation is rolled back.

Limitations: automatically generated skills may embed errors or over‑fit to a specific project; schema enforces “no simple one‑off tasks”, “confirm with user before creating/deleting”, and “immediate patch for outdated skills”.
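A sketch of the governance loop above: the counters (review every 10 turns, skill review after 10 tool-call iterations, up to 8 review iterations), the thread-per-review design, and the shared memory_store come from the article; the function signatures are assumptions.

```python
import threading

MEMORY_REVIEW_EVERY_TURNS = 10   # memory review every 10 turns
SKILL_REVIEW_EVERY_ITERS = 10    # skill review after 10 tool-call iterations
MAX_REVIEW_ITERATIONS = 8        # background review runs up to 8 iterations

def spawn_background_review(memory_store, turns, tool_iters, run_review):
    """Illustrative version of spawn_background_review(); only the
    counters and threading design are taken from the article."""
    prompts = []
    if turns % MEMORY_REVIEW_EVERY_TURNS == 0:
        prompts.append("memory")
    if tool_iters >= SKILL_REVIEW_EVERY_ITERS:
        prompts.append("skill")
    if not prompts:
        return None
    prompt = "combined" if len(prompts) == 2 else prompts[0]
    # Fork a separate thread that shares memory_store with the main agent.
    t = threading.Thread(
        target=run_review,
        args=(memory_store, prompt, MAX_REVIEW_ITERATIONS),
        daemon=True,  # best-effort: never blocks the primary task
    )
    t.start()
    return t
```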

5. External Memory Providers

Hermes can load a single external provider (Honcho, Mem0, OpenViking, etc.) as an additive layer that never replaces the built‑in memory.

Provider integration points are defined in agent/memory_provider.py.

During the tool loop, prefetch_all() gathers results from the provider.

Results are sanitized via sanitize_context(), wrapped in a system‑prompt block, and injected alongside the user message.

After the main response, sync_all() writes back to the external store and queues the next prefetch.

Lifecycle hooks (on_turn_start, on_session_end, on_pre_compress, on_memory_write, on_delegation) allow providers to participate at key runtime moments.
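Combining the flow in this section, a sketch of what a provider interface might look like: the five hook names and the prefetch/sanitize/inject/sync sequence come from the article, while the method signatures and the sanitize stand-in are assumptions.

```python
from typing import Protocol

class MemoryProvider(Protocol):
    """Assumed shape of an external provider (Honcho, Mem0, ...);
    only the hook names come from the article."""

    def on_turn_start(self, session_id: str) -> None: ...
    def on_session_end(self, session_id: str) -> None: ...
    def on_pre_compress(self, session_id: str) -> None: ...
    def on_memory_write(self, key: str, value: str) -> None: ...
    def on_delegation(self, child_session_id: str) -> None: ...

    def fetch(self, query: str) -> list[str]: ...
    def store(self, facts: list[str]) -> None: ...

def sanitize_context(items: list[str]) -> list[str]:
    # Stand-in for Hermes' sanitize_context(): strip control characters.
    return ["".join(c for c in s if c.isprintable() or c == "\n") for s in items]

def run_turn(provider: MemoryProvider, user_msg: str, respond) -> str:
    # prefetch -> sanitize -> inject -> respond -> sync, per the article.
    context = sanitize_context(provider.fetch(user_msg))
    block = "<external-memory>\n" + "\n".join(context) + "\n</external-memory>"
    reply = respond(system_extra=block, user=user_msg)
    provider.store([user_msg, reply])  # sync_all-style write-back
    return reply
```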

Example – Honcho: provides dialectic reasoning, user modeling, semantic search, and persistent conclusions, complementing built‑in memory.

6. OpenClaw vs. Hermes: Migration & Convergence

Even after adding Honcho, memory_search, and experimental dreaming, OpenClaw differs from Hermes mainly in where the auto‑summarized experience is placed.

6.1 Two Transformation Paths

Path 1 – Offline archive layer: after a session, generate a summary file and decide whether to persist it. This remains an “experience archive” and does not integrate with the runtime learning loop.

Path 2 – Runtime integration: after task completion, a background review routes facts to MEMORY.md, history to SQLite, and procedures to skill_manage, achieving a true learning loop.

The decisive test is whether the saved experience can be invoked, patched, and governed by the agent’s tooling and prompts.

6.2 Migration Support

Hermes offers an optional openclaw-migration skill and a hermes claw migrate command, enabling direct mapping of OpenClaw assets (MEMORY.md, USER.md, SOUL.md, allowlists, workspace instructions, and many skills) to Hermes equivalents. Some components (custom backends, plugins, multi-agent hooks) require manual review.

6.3 Steps for OpenClaw to Reach Hermes‑level Learning Loop

Automatically decide whether a completed task merits persistence.

Separate fact, history, and workflow assets instead of dumping everything into a single file.

Expose create, patch, edit, delete operations for skills.

Encode "facts go to memory, history goes to search, workflow goes to skill" in the agent guidance (see the routing sketch after this list).

Allow immediate patching of outdated or erroneous skills.

Run every generated skill through security scanning and permission checks.

Enable retrieval, loading, and execution of similar‑task skills with continuous correction.
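The "facts go to memory, history goes to search, workflow goes to skill" rule above can be made concrete as a routing function. This sketch is an illustration, not OpenClaw or Hermes code; the three sink objects are assumed interfaces.

```python
from enum import Enum

class AssetKind(Enum):
    FACT = "fact"          # stable preference or environment constant
    HISTORY = "history"    # what happened, useful only on demand
    WORKFLOW = "workflow"  # repeatable "how to do it" procedure

def route_asset(kind: AssetKind, payload: str, memory, archive, skills):
    """Route a persisted asset to the layer that owns it."""
    if kind is AssetKind.FACT:
        memory.append(payload)        # e.g. MEMORY.md-style short cards
    elif kind is AssetKind.HISTORY:
        archive.insert(payload)       # e.g. SQLite/FTS5 session archive
    else:
        skills.create(payload)        # e.g. a skill_manage create call
```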

6.4 Core Differences Before Convergence

OpenClaw excels at gateway/control, multi‑session handling, plugins, and experimental dreaming, but its skills act mainly as offline archives.

Hermes focuses on a self‑improving runtime, integrating session search, curated memory, skill management, background review, and external providers into a cohesive learning chain.

Analogy: OpenClaw is a “smart dispatcher” that remembers more over time; Hermes is an “executor that writes its own post‑mortem reports”.

7. Engineering Takeaways from Hermes’ Learning Mechanism

Long‑term assets must be split: facts (short‑term context), history (on‑demand archive), and procedures (governed workflows). Mixing them leads to token waste, context pollution, or stale SOPs.

Procedural memory requires a full lifecycle: creation, patching, deletion, validation, and failure handling; merely summarizing to Markdown is insufficient.

A learning loop needs concrete engineering: tools, configuration, nudges, background review, and file-system structures provide stability that prompt-only tricks cannot.

8. Conclusions

Hermes’ value lies not in the specific technologies (SQLite, Honcho) but in the disciplined three‑layer asset model and its engineering implementation. Future agent work should aim to reliably manage long‑term assets, provide lifecycle governance for procedural knowledge, and continuously validate the quality of automatically generated skills.
