How OpenClaw’s Modular System Prompts Turn AI Agents into Reliable Assistants

This article dissects OpenClaw’s system prompts, showing how a modular, file‑based prompt architecture—covering tools, skills, constraints, memory layers, personality rules, heartbeat checks, and context management—creates a maintainable, versionable, and iterative AI agent that behaves like a disciplined assistant.


TL;DR

OpenClaw’s system prompt is not a single block but an assembled runtime: skeleton + tool specs + behavior rules + workspace files (personality/user/memory).

Key principle: Text > Brain – the conversation can be lost, files persist; continuity relies on durable "facts".

Personality is an executable specification, not a prose description.

Memory uses a three‑layer structure: working memory → daily logs → long‑term memory, with write and maintenance mechanisms.

Heartbeat provides periodic checks while limiting interruptions.

Security boundary: internal actions are bold, external actions require confirmation.

To replicate, turn agent behavior into a reviewable, versioned, and iterable configuration system.

Context should be an operable list, not a monolithic dump.

1) System Prompt as an "Assembly"

Many assume the system prompt is a single introductory paragraph, but a functional agent’s prompt is assembled from several stable rule blocks that are injected into the model at each run.

OpenClaw’s system prompt is split into six parts:

Tool System : list of tools, parameters, and invocation methods.

Core Skills : connections to GitHub, browser usage, note publishing, message alerts (Mac‑optimized).

Prohibited Actions : what the agent must not do (e.g., restart, update).

Memory System : three‑layer memory + read/write rules.

Personality Constraints : values and boundaries defined in SOUL.md.

Heartbeat Mechanism : when to actively check and when to stay silent.

The end of the prompt also records the current model and environment name, which differs between local and container execution.

Some prompt fragments are injected as simulated user input, which separates content, emphasizes certain actions, and allows the agent to initiate conversations.

Practical Takeaways

To change output style, edit the personality file instead of tool rules.

To adjust interruption strategy, modify the heartbeat configuration, not the memory.

To add tools, extend the tool layer rather than embedding tool specs in the personality.

This separation reduces cross‑contamination and makes the system more maintainable.
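This six-part assembly can be sketched in a few lines of Python. The file names, ordering, and function signature below are assumptions for illustration, not OpenClaw's actual layout; the idea is only that each rule block lives in its own file and gets concatenated at run time, with the model and environment name appended at the end.

```python
from pathlib import Path

# Assumed file names, one per rule block from the article.
PROMPT_SECTIONS = [
    "tools.md",        # Tool System: tools, parameters, invocation methods
    "skills.md",       # Core Skills: GitHub, browser, notes, alerts
    "prohibited.md",   # Prohibited Actions: no restart, no update
    "memory.md",       # Memory System: three layers + read/write rules
    "SOUL.md",         # Personality Constraints
    "heartbeat.md",    # Heartbeat Mechanism
]

def assemble_system_prompt(prompt_dir: Path, model: str, env: str) -> str:
    """Concatenate the stable rule blocks, then append runtime metadata."""
    parts = []
    for name in PROMPT_SECTIONS:
        path = prompt_dir / name
        if path.exists():
            parts.append(path.read_text(encoding="utf-8").strip())
    # The end of the prompt records the current model and environment name.
    parts.append(f"Model: {model}\nEnvironment: {env}")
    return "\n\n".join(parts)
```

Because each block is a separate file, a style change is a one-file diff to SOUL.md, and a new tool is a one-file diff to tools.md.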

2) Workspace Files: Persistent Fact Sources

OpenClaw stores long‑term information in files rather than relying on the model’s fleeting context.

Typical files:

SOUL.md – values and behavior guidelines (loaded each start).

IDENTITY.md – name, persona, style (loaded each start).

USER.md – user information and preferences (loaded each start).

AGENTS.md – workflow and conventions (loaded each start).

MEMORY.md – long‑term knowledge base (available only in private sessions).

memory/YYYY-MM-DD.md – daily logs (available for today + yesterday).

Three common problems with monolithic prompts are solved by file‑based configuration: uncontrolled growth, drift in tone, and loss of continuity. By versioning stable files and allowing selective changes, the agent remains both stable and adaptable.

"Fact files on disk become the agent's 'facts source' – a pattern now common across serious agent platforms."

3) AGENTS.md: Defining the Startup Workflow

Many agent failures stem from workflow issues rather than model limitations. OpenClaw’s startup checklist reads:

Read SOUL.md – defines who the agent is.

Read USER.md – defines who the agent helps.

Read recent daily logs ( memory/YYYY-MM-DD.md).

If in a main session, also read MEMORY.md.

Key rule: Do not ask for permission; just execute the required steps.

"Mental notes don’t survive session restarts. Files do."

Persisting important facts, learned experiences, and mistakes in files ensures traceability and prevents forgetting.
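The startup checklist above can be sketched as a small loader, assuming the workspace layout from section 2. The function name and signature are illustrative, not OpenClaw's API; only the checklist order and the main-session rule come from the article.

```python
from datetime import date, timedelta
from pathlib import Path

def startup_files(workspace: Path, today: date, main_session: bool) -> list[Path]:
    """Return the files to read at startup, in checklist order."""
    paths = [workspace / "SOUL.md",   # who the agent is
             workspace / "USER.md"]   # who the agent helps
    # Recent daily logs: yesterday, then today.
    for d in (today - timedelta(days=1), today):
        paths.append(workspace / "memory" / f"{d.isoformat()}.md")
    # Long-term memory is read only in a main (private) session.
    if main_session:
        paths.append(workspace / "MEMORY.md")
    return [p for p in paths if p.exists()]
```

Note that missing files are silently skipped rather than prompting the user, matching the "do not ask for permission" rule.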

4) SOUL.md: Executable Personality Specification

Instead of vague character bios, SOUL.md contains hard rules:

Principle 1 – Be genuinely helpful, not performatively helpful : skip empty pleasantries and give concise answers.

Principle 2 – Have opinions : provide concrete recommendations (e.g., "Recommend PostgreSQL for large datasets").

Principle 3 – Be resourceful before asking : try to solve, read files, search context, then ask only if stuck.

Principle 4 – Remember you’re a guest : treat user data with respect and privacy.

Security boundaries are explicit:

Private data stays private.

External actions require confirmation.

No half‑baked replies.

Do not speak as the user in group chats.

When writing SOUL.md, focus on four aspects:

Answer strategy – conclusion first or clarification?

What to avoid – boilerplate, marketing tone.

Confirmation strategy – which actions need explicit approval.

Failure strategy – how to admit uncertainty and provide verification paths.

## How you want me to answer
- Give conclusion first, then reasoning; keep it brief.
- If ambiguous, ask one key question; otherwise proceed with assumptions and label them.

## What I don’t want to see
- Three‑part "first/second/also" structures or overly empathetic service tone.
- "Both options have pros and cons" avoidance.

## Safety & Confirmation
- Read files, organize notes, draft: do it directly.
- Send messages, modify online resources, pay/delete: confirm first.

## When uncertain
- State unknown points and give verification paths (commands/log/file locations).

Short, maintainable templates are easier to enforce.

5) USER.md: Rich Context Generates Compounding Benefits

USER.md answers "Who am I working for?" The depth of this file directly impacts relevance.

Include detailed items such as:

Current projects.

Key people in the organization.

Interaction relationships.

Family situation.

Priorities.

Obstacles.

More detail yields compounding benefits: the agent can reference background without repeated prompts and adopt the user’s tone in messages. USER.md is the fastest‑decaying file; update it nightly to keep the agent useful.

Relationship with SOUL.md: SOUL.md defines communication style. USER.md provides contextual background.

If USER.md is missing, SOUL.md becomes decorative.

6) Memory: Three‑Layer Structure

OpenClaw treats memory like a database:

Working Memory – the context window (tens of thousands of tokens), cleared after each session.

Log Memory – daily logs ( memory/YYYY-MM-DD.md), unlimited size, persisted forever.

Long‑Term Memory – distilled knowledge in MEMORY.md, also persisted.

Typical Daily Memory Flow

Read: SOUL.md + USER.md + memory/2026-02-03.md + memory/2026-02-04.md
Working memory: empty
User: "Help me analyze this Redis timeout issue"
AI: Analyze and propose solution
Working memory: conversation (≈5000 tokens)
Working memory 80% full → Memory Flush triggered
AI writes to memory/2026-02-04.md:
- "User encountered Redis timeout"
- "Suggest increasing pool size to 50"
AI reviews last 3 days, extracts to MEMORY.md:
- "User prefers direct code, dislikes long explanations"
- "Redis timeout recurring, prefers connection pool solution"

Memory Flush triggers when context usage reaches ~80%, rescuing important info before forgetting.
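A hedged sketch of that Memory Flush: the ~80% threshold and the daily-log target come from the article, while the function name, signature, and bullet format are assumptions.

```python
from datetime import date
from pathlib import Path

FLUSH_THRESHOLD = 0.80  # flush once the context window is ~80% full

def maybe_flush(used_tokens: int, window_tokens: int,
                facts: list[str], memory_dir: Path, today: date) -> bool:
    """Append important facts to memory/YYYY-MM-DD.md before they are forgotten."""
    if used_tokens / window_tokens < FLUSH_THRESHOLD:
        return False
    memory_dir.mkdir(parents=True, exist_ok=True)
    log = memory_dir / f"{today.isoformat()}.md"
    with log.open("a", encoding="utf-8") as f:
        for fact in facts:
            f.write(f"- {fact}\n")
    return True
```

Appending (rather than overwriting) keeps the daily log an append-only record, which is what makes the later "review last 3 days, extract to MEMORY.md" step possible.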

Benefits:

Traceability – you can verify decisions against specific logs.

Maintainability – long‑term memory is a revisionable "conclusion bank"; you can correct undesired behavior by editing the file.

Practical habit: tell the agent "store this in MEMORY.md" to avoid repetitive copy‑pasting.

7) Context Management

Instead of dumping the entire conversation, OpenClaw treats context as a manageable list:

list – show current materials (files/links/snippets).

add – insert a needed file or resource.

reset – clear context when switching tasks.
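These three commands map onto a tiny class; the command names come from the article, the implementation is an illustrative assumption.

```python
class ContextList:
    """Context as an operable list, not a monolithic dump."""

    def __init__(self) -> None:
        self.items: list[str] = []

    def add(self, resource: str) -> None:
        """Insert a needed file or resource (deduplicated)."""
        if resource not in self.items:
            self.items.append(resource)

    def list(self) -> list[str]:
        """Show current materials (files/links/snippets)."""
        return list(self.items)

    def reset(self) -> None:
        """Clear context when switching tasks."""
        self.items.clear()
```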

Prompt mode can be full (rich rules, higher token cost), minimal (lighter, requires clearer boundaries), or none (bare, for debugging).

Default is full; switch to minimal/none only for targeted experiments.

8) Heartbeat: Periodic Assistant‑Like Checks

Every 30 minutes the agent sends a heartbeat, performing checks such as:

Unread urgent emails.

Upcoming calendar events within 24 hours.

Important notifications.

If nothing notable, it replies HEARTBEAT_OK; otherwise it notifies the user.

When to disturb the user:

Urgent email arrives.

Calendar event within 2 hours.

No contact for >8 hours.

When to stay silent:

Night hours (23:00‑08:00).

User is clearly busy.

Just checked (<30 min ago).

Heartbeat relies on USER.md (to define "urgent") and SOUL.md (to define reminder timing).

9) Tools & Security: Bold Internals, Cautious Externals

OpenClaw distinguishes actions:

Internal actions (reading files, organizing notes, drafting) are automated.

External actions (sending messages, creating meetings, making payments) require explicit confirmation.

"Earn trust through competence. Your human gave you access to their stuff. Don't make them regret it. Be careful with external actions (emails, tweets, anything public). Be bold with internal ones (reading, organizing, learning)."
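The internal/external boundary reduces to a small gate. The action categories follow the article; the action names and return values are illustrative.

```python
# Internal actions run immediately; external ones wait for confirmation.
INTERNAL = {"read_file", "organize_notes", "draft"}
EXTERNAL = {"send_message", "create_meeting", "make_payment", "delete"}

def execute(action: str, confirmed: bool = False) -> str:
    if action in INTERNAL:
        return f"executed {action}"              # bold: no approval needed
    if action in EXTERNAL:
        if confirmed:
            return f"executed {action}"
        return f"needs confirmation: {action}"   # cautious: ask first
    raise ValueError(f"unknown action: {action}")
```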

10) Social Rules in Group Chats

Agent should participate without dominating:

If directly mentioned – reply.

Casual chat – stay silent.

Someone else already answered – stay silent.

Critical error – give a brief correction.

Use a single emoji or "OK" to acknowledge without expanding.
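These social rules also fit a small dispatcher; the parameter names and return labels here are assumptions for illustration.

```python
def group_reply(mentioned: bool, critical_error: bool,
                already_answered: bool, casual: bool) -> str:
    """Decide how to behave on a group-chat message."""
    if mentioned:
        return "reply"              # directly mentioned: respond
    if critical_error:
        return "brief correction"   # a factual error worth fixing
    # Casual chat, or someone else already answered: stay out of the way.
    return "stay silent"
```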

11) Common Configuration Pitfalls

Rich SOUL.md but empty memory → polite tone but no continuity.

Aggressive heartbeat with a thin USER.md → technically correct alerts that miss user priorities.

Correct alignment: SOUL.md values ↔ USER.md background ↔ memory content ↔ heartbeat priorities.

12) Minimal Viable Agent Configuration

Start with three files:

SOUL.md – answer strategy, prohibitions, confirmation rules.

USER.md – detailed background, preferences, current projects.

memory/YYYY-MM-DD.md – daily facts and decisions.

Two engineering disciplines:

On each start, load SOUL.md, USER.md, and the last two daily logs.

Require confirmation for any external action (messages, deletions, payments, online changes).

After a week you’ll notice the agent feels more like a reliable collaborator rather than a fickle chatbot.

Core Takeaways

Memory matters more than raw intelligence; persistent fact files give lasting value.

Personality emerges from explicit constraints, not vague bios.

Simple technologies (Markdown, JSONL, SQLite) can solve core engineering challenges.

Broader Impact

The file‑system architecture behind OpenClaw—persistent personality files, file‑based memory, scheduled proactive processes, and human‑in‑the‑loop checkpoints—has become the de‑facto pattern for modern AI agents. When new tools appear, you only need to migrate your Markdown files, preserving your investment in prompt engineering.

Learning to configure these files equips you with a transferable skill that remains valuable as models and platforms evolve.

Written by

Architect

Professional architect sharing high‑quality architecture insights. Topics include high‑availability, high‑performance, high‑stability architectures, big data, machine learning, Java, system and distributed architecture, AI, and practical large‑scale architecture case studies. Open to ideas‑driven architects who enjoy sharing and learning.
