From Manual Agents to Self‑Improving AI: My OpenClaw vs Hermes Experiment
A senior Google Cloud AI product manager shares a hands‑on study comparing OpenClaw and the open‑source Hermes agent, revealing how a disciplined prompt‑engineering feedback loop can turn static agents into self‑improving systems, while highlighting who owns the improvement loop, memory back‑tracking, and practical deployment considerations.
A senior Google Cloud AI product manager observed that while his AI agents were running autonomously, they were not evolving on their own. To investigate, he introduced the recently popular Hermes agent alongside his existing OpenClaw setup, creating a controlled experiment to compare their behaviors.
Over several months with OpenClaw, he had agents write skill files, summarize failure manuals, and leave reusable traces behind. He coined the term "prompt‑engineering correction loop" for the iterative process this required: observe agent output, identify issues, write fixes into memory or instructions, and wait for the behavior to stabilize. Over time, the maintenance effort grew faster than any self‑improvement the agents showed.
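To make the loop concrete, here is a minimal Python sketch of one turn of that manual process. All names in it (run_agent, AGENT_NOTES.md) are hypothetical stand‑ins; the article does not show OpenClaw's actual interfaces.

# One turn of the manual correction loop: the human, not the agent,
# drives every improvement. run_agent and MEMORY_FILE are stand-ins.

MEMORY_FILE = "AGENT_NOTES.md"  # instructions the agent re-reads on every run

def run_agent(task: str) -> str:
    """Stand-in for an OpenClaw task run; returns the agent's output."""
    return f"(agent output for: {task})"

def correction_turn(task: str) -> None:
    output = run_agent(task)
    print(output)
    # Steps 1-2: the operator observes the output and names the issue.
    issue = input("Describe the problem (empty if none): ").strip()
    if not issue:
        return  # behavior has stabilized; nothing to write back
    # Step 3: the operator writes the fix into persistent instructions.
    fix = input("Instruction to prevent this next time: ").strip()
    with open(MEMORY_FILE, "a", encoding="utf-8") as f:
        f.write(f"\n- When you see '{issue}': {fix}")
    # Step 4: rerun later tasks and wait for the behavior to stabilize.

if __name__ == "__main__":
    correction_turn("summarize yesterday's failure logs")

Note that both parties must be present for any progress: the agent produces the output, and the human supplies both the diagnosis and the fix.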
He then added Hermes (an open‑source project from Nous Research) on the same machine. Installation required a single command:
curl -fsSL https://raw.githubusercontent.com/NousResearch/hermes-agent/main/scripts/install.sh | bash

After running hermes setup, Hermes detected the existing OpenClaw configuration, imported settings, memories, and API keys, and guided the Telegram BotFather integration.
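As a rough illustration of what that import step accomplishes, the sketch below detects an existing config and copies settings and memories across. The file locations (~/.openclaw, ~/.hermes) are assumptions made for illustration; neither project's real on‑disk layout is documented in this article.

import json
import shutil
from pathlib import Path

OPENCLAW_DIR = Path.home() / ".openclaw"  # assumed path, not documented
HERMES_DIR = Path.home() / ".hermes"      # assumed path, not documented

def import_openclaw_config() -> bool:
    """Reuse an existing OpenClaw setup instead of asking the user again."""
    src = OPENCLAW_DIR / "config.json"
    if not src.exists():
        return False  # nothing to import; fall back to first-time setup
    settings = json.loads(src.read_text(encoding="utf-8"))
    HERMES_DIR.mkdir(parents=True, exist_ok=True)
    # Carry over API keys and settings verbatim.
    (HERMES_DIR / "config.json").write_text(
        json.dumps(settings, indent=2), encoding="utf-8"
    )
    # Copy memory files wholesale so prior context survives the switch.
    mem_src, mem_dst = OPENCLAW_DIR / "memories", HERMES_DIR / "memories"
    if mem_src.exists():
        shutil.copytree(mem_src, mem_dst, dirs_exist_ok=True)
    return True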
Key differences emerged:
OpenClaw: The user discovers problems, teaches the agent a fix, and the agent stores the correction; progress depends on both parties being present.
Hermes: After completing a complex task, the agent evaluates what happened, decides what to retain, and writes it into a skill file. The user can review or edit, but does not need to initiate the improvement (see the sketch below).
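The sketch below shows the shape of that Hermes‑style loop: after a task finishes, the agent critiques its own transcript and persists anything reusable as a skill file, with no human trigger. The function names and the model call are hypothetical stand‑ins, not Hermes internals.

from pathlib import Path

SKILLS_DIR = Path("skills")

def llm(prompt: str) -> str:
    """Stand-in for a model call; a real client would go here."""
    return "SKIP"  # stub reply; a real model returns a lesson or SKIP

def self_improve(task: str, transcript: str) -> Path | None:
    # The agent, not the user, initiates the evaluation step.
    lesson = llm(
        "Review this transcript. If something is worth reusing next time, "
        f"state it as a short procedure; otherwise reply SKIP.\n\n{transcript}"
    )
    if lesson.strip() == "SKIP":
        return None
    SKILLS_DIR.mkdir(exist_ok=True)
    path = SKILLS_DIR / (task[:40].replace(" ", "-") + ".md")
    path.write_text(lesson, encoding="utf-8")
    return path  # the user may review or edit this file, but needn't ask for it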
Another distinction is Hermes' built‑in back‑tracking capability. When the author searched for terms like "telegram OR gateway OR restart OR stuck," Hermes surfaced a complete troubleshooting conversation from weeks earlier, including the conflict details and actionable fixes, with no manual memory curation required.
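A back‑tracking search of this kind can be approximated with nothing more than OR‑term matching over stored conversation logs, as in the sketch below; the on‑disk layout here is an assumption, not Hermes' documented design.

from pathlib import Path

def search_memory(query: str, memory_dir: str = "memories") -> list[Path]:
    """Return every stored conversation mentioning any OR-separated term."""
    terms = [t.strip().lower() for t in query.split(" OR ")]
    hits = []
    for log in Path(memory_dir).glob("**/*.md"):
        text = log.read_text(encoding="utf-8", errors="ignore").lower()
        if any(term in text for term in terms):  # any single term is a match
            hits.append(log)
    return hits

# Returns whole conversation files, not isolated lines, so the conflict
# details and the fixes around a hit come back together.
print(search_memory("telegram OR gateway OR restart OR stuck"))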
Both systems adhere to the agentskills.io standard, allowing skill files to be transferred among OpenClaw, Hermes, Claude Code, and Cursor without lock‑in.
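That portability works because skills are plain directories containing a SKILL.md whose YAML frontmatter names and describes the skill. The sketch below emits one; treat the exact required fields as an assumption and consult the agentskills.io spec.

from pathlib import Path

def write_skill(name: str, description: str, body: str, root: str = "skills"):
    """Emit a skill directory in the SKILL.md shape used by the standard."""
    skill_dir = Path(root) / name
    skill_dir.mkdir(parents=True, exist_ok=True)
    frontmatter = f"---\nname: {name}\ndescription: {description}\n---\n\n"
    (skill_dir / "SKILL.md").write_text(frontmatter + body, encoding="utf-8")
    return skill_dir

# Because the result is plain files, the same directory can be dropped into
# OpenClaw, Hermes, Claude Code, or Cursor without conversion.
write_skill(
    "telegram-gateway-restart",
    "Recover a stuck Telegram gateway",
    "1. Check for port conflicts.\n2. Restart the gateway service.\n",
)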
Practical takeaway: keep fully controlled, predictable agents in OpenClaw, and run agents that you want to observe autonomously evolving on Hermes. This division lets each system play to its strengths—manual oversight versus self‑sustaining improvement.