Building a Stable OpenClaw Workflow: Turning Ambiguous Prompts into Program Calls

This article explains how ambiguous natural-language prompts cause unstable AI behavior, and proposes a workflow in which deterministic tasks are encapsulated in stable Python programs exposed as APIs. OpenClaw agents then call these programs for reliable news fetching and email management, saving tokens and simplifying debugging.


Core principles

Deterministic tasks such as network requests, file I/O, email send/receive, and calculations are delegated to dedicated programs that include logging, error handling, and retries. The program is written once and remains stable.
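As a sketch of what such a dedicated program looks like, the decorator below bundles logging, error handling, retries, and a stable fallback value. The names and retry counts are illustrative choices, not part of OpenClaw:

```python
import logging
import time
from functools import wraps

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("tasks")

def stable_task(retries=3, delay=1.0, fallback=None):
    """Add logging, error handling, and retries to a deterministic task."""
    def decorator(func):
        @wraps(func)
        def wrapper(*args, **kwargs):
            for attempt in range(1, retries + 1):
                try:
                    return func(*args, **kwargs)
                except Exception as e:
                    logger.warning("%s failed (attempt %d/%d): %s",
                                   func.__name__, attempt, retries, e)
                    if attempt < retries:
                        time.sleep(delay)
            return fallback  # stable, predictable result on total failure
        return wrapper
    return decorator

@stable_task(retries=2, delay=0.0, fallback=[])
def flaky_fetch():
    raise ConnectionError("network down")

print(flaky_fetch())  # falls back to [] after exhausting retries
```

Because the wrapper always returns a value of a known shape, the agent layer never has to reason about exceptions.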

The LLM performs only two actions: (1) understand user intent and decide which program to invoke (the "compilation" step); (2) filter, summarize, and recommend from the structured data returned by the program.

Programs are uniformly wrapped as HTTP APIs or MCP tools, allowing every Agent to share the same capabilities and communicate indirectly through these services.

Example A – News fetching and filtering

Original (pure LLM) approach

User: "What tech news is worth reading today?"

AI attempts to crawl multiple sites, may be blocked by anti‑scraping measures, may return raw HTML consuming thousands of tokens, and finally produces a brief summary that can miss important sources.

New approach – stable news program + LLM filtering

1. Write a stable news program (news_fetcher.py) that calls NewsAPI or a fixed RSS source and returns structured JSON.

# news_fetcher.py – fetches latest news and returns structured data
import logging

import requests

logger = logging.getLogger("news_fetcher")

def get_latest_news():
    try:
        response = requests.get(
            'https://gnews.io/api/v4/top-headlines?token=YOUR_KEY&lang=zh',
            timeout=10,
        )
        response.raise_for_status()
        data = response.json()
        articles = [{
            'title': a['title'],
            'source': a['source']['name'],
            'url': a['url']
        } for a in data['articles']]
        return articles
    except Exception as e:
        logger.error(f"News fetch failed: {e}")
        return []  # return empty list on failure

2. Expose it as an API (GET /api/news/latest) that returns:

{
  "status": "success",
  "news": [{"title": "...", "source": "...", "url": "..."}]
}
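A thin wrapper can map the fetcher's raw list onto this envelope, so the agent always receives the same shape even when the fetcher falls back to an empty list. This is a sketch; the function name and sample data are illustrative:

```python
def to_api_response(articles):
    # Success only when the fetcher actually returned articles;
    # an empty list (the fetcher's failure fallback) maps to an error status.
    if articles:
        return {"status": "success", "news": articles}
    return {"status": "error", "news": []}

sample = [{"title": "Example headline", "source": "Example Wire",
           "url": "https://example.com/story"}]
print(to_api_response(sample)["status"])  # success
print(to_api_response([])["status"])      # error
```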

3. OpenClaw workflow:

User command → OpenClaw calls GET /api/news/latest (only a few dozen tokens for the title list).

The title list is sent to the LLM with the prompt: "Select the three most worth‑reading items and explain why."

The LLM returns concise recommendations, which are presented to the user.

Effect: the success rate approaches 100%; each call consumes only tens of tokens for the list plus a few hundred for the summary, and any failure can be diagnosed via the news program's logs.
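The orchestration step above is easy to sketch in plain Python; the sample response below stands in for a real call to GET /api/news/latest, and the headlines are made up for illustration:

```python
def pick_news_prompt(api_response):
    """Build the filtering prompt from the structured title list (step 2)."""
    titles = [f"- {item['title']} ({item['source']})"
              for item in api_response["news"]]
    return ("Select the three most worth-reading items and explain why.\n"
            + "\n".join(titles))

# Simulated response from GET /api/news/latest (step 1).
api_response = {
    "status": "success",
    "news": [
        {"title": "New GPU ships", "source": "HW Daily",
         "url": "https://example.com/1"},
        {"title": "LLM beats benchmark", "source": "AI Wire",
         "url": "https://example.com/2"},
    ],
}

prompt = pick_news_prompt(api_response)
print(prompt)
```

Only the compact title list reaches the LLM, which is where the token savings come from.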

Example B – Unified email management and Agent communication

Pain points

Multiple Agents (news assistant, monitoring bot, customer‑service bot) each need to send or read emails, leading to duplicated SMTP/IMAP configurations and scattered passwords.

Changing the mail server requires updating every Agent.

Agent‑to‑Agent email collaboration forces each Agent to know the other's address and protocol.

New approach – a single mail service exposing simple APIs

1. Email program (mail_service.py) implements reliable send, read, and optional delete operations with retries and logging.

# mail_service.py – encapsulates send, read, mark-read, etc.
import email
import imaplib
import logging
import smtplib
from email.message import EmailMessage

logger = logging.getLogger("mail_service")

class MailService:
    def __init__(self, config):
        self.smtp_server = config['SMTP_SERVER']
        self.imap_server = config['IMAP_SERVER']
        self.username = config['EMAIL']
        self.password = config['PASSWORD']

    def send_email(self, to, subject, body):
        try:
            msg = EmailMessage()
            msg['From'] = self.username
            msg['To'] = to
            msg['Subject'] = subject
            msg.set_content(body)
            with smtplib.SMTP_SSL(self.smtp_server, 465) as server:
                server.login(self.username, self.password)
                server.send_message(msg)
            return {"status": "sent", "to": to}
        except Exception as e:
            logger.error(f"Send failed: {e}")
            return {"status": "error", "message": str(e)}

    def read_unread(self, folder="INBOX"):
        # Returns a list of unread emails as structured dicts
        with imaplib.IMAP4_SSL(self.imap_server) as conn:
            conn.login(self.username, self.password)
            conn.select(folder)
            _, data = conn.search(None, 'UNSEEN')
            results = []
            for num in data[0].split():
                _, msg_data = conn.fetch(num, '(RFC822)')
                msg = email.message_from_bytes(msg_data[0][1])
                results.append({
                    "from": msg["From"],
                    "subject": msg["Subject"],
                    "body_preview": "",  # parse the payload for a preview as needed
                })
            return results

2. API definitions:

POST /api/mail/send – parameters: to, subject, body.

GET /api/mail/unread – returns a list of unread messages.

Optional delete/archive API.

3. All Agents call the same endpoints:

News Agent: after selecting good articles, calls POST /api/mail/send to email the summary.

Monitoring Agent: on high CPU, calls the same API to send an alert email.

Customer‑service Agent: periodically polls GET /api/mail/unread, feeds new tickets to the LLM, and replies via POST /api/mail/send.

4. Agent‑to‑Agent communication is achieved by sending messages through the shared mail service, eliminating the need for each Agent to know the other's address.
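One way to sketch this convention in Python: the shared inbox address and the [agent:...] subject tag below are assumptions for illustration, not an OpenClaw standard:

```python
def agent_message(sender_agent, recipient_agent, subject, body):
    """Payload an agent would POST to the shared /api/mail/send endpoint.
    The shared inbox and subject tag are an illustrative routing convention."""
    return {
        "to": "agents@example.com",
        "subject": f"[agent:{recipient_agent}] {subject}",
        "body": f"from={sender_agent}\n{body}",
    }

def messages_for(agent, unread):
    """Filter the /api/mail/unread result down to one agent's messages."""
    tag = f"[agent:{agent}]"
    return [m for m in unread if m["subject"].startswith(tag)]

msg = agent_message("news-agent", "cs-agent",
                    "Daily digest ready", "3 stories selected")
inbox = [{"from": "agents@example.com", "subject": msg["subject"],
          "body_preview": msg["body"]}]
print(messages_for("cs-agent", inbox))       # delivered to the addressee
print(messages_for("monitor-agent", inbox))  # []
```

Neither agent needs the other's mail credentials; routing lives entirely in the shared service's message format.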

Comparison of unified mail program vs. individual configurations

Configuration cost: N agents require N configurations → configure once, shared by all.

Security: passwords scattered across agents → password stored only in the mail program.

Agent communication: each must know the other's address and protocol → decoupled via the shared mail service.

Server change: modify every agent → change in one place.

Debugging: each agent checks its own logs → all mail operations centralized in one log.

FastAPI wrapper for the mail service

# mail_api.py – FastAPI entry point
from fastapi import FastAPI, HTTPException, Header
from pydantic import BaseModel

from mail_service import MailService
from config import load_config, verify_key  # app-specific helpers, defined elsewhere

app = FastAPI()
mail = MailService(load_config())  # singleton

class SendRequest(BaseModel):
    to: str
    subject: str
    body: str

@app.post("/api/v1/mail/send")
async def send_email(req: SendRequest, x_api_key: str = Header(...)):
    if not verify_key(x_api_key):
        raise HTTPException(status_code=401, detail="invalid API key")
    return mail.send_email(req.to, req.subject, req.body)

@app.get("/api/v1/mail/unread")
async def get_unread(x_api_key: str = Header(...)):
    if not verify_key(x_api_key):
        raise HTTPException(status_code=401, detail="invalid API key")
    return {"status": "ok", "emails": mail.read_unread()}

Agents can invoke these endpoints with a generic http_request action.
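For illustration, the request such an http_request action would issue can be built with the standard library alone; the base URL and API key below are placeholders:

```python
import json
import urllib.request

def build_send_request(base_url, api_key, to, subject, body):
    """Construct (but do not send) the POST an agent's http_request action issues."""
    payload = json.dumps({"to": to, "subject": subject, "body": body}).encode()
    return urllib.request.Request(
        url=f"{base_url}/api/v1/mail/send",
        data=payload,
        headers={"X-API-Key": api_key, "Content-Type": "application/json"},
        method="POST",
    )

req = build_send_request("http://localhost:8000", "secret-key",
                         "user@example.com", "Daily digest", "Top 3 stories")
print(req.full_url, req.get_method())
```

Sending it is then a single urlopen (or requests) call, identical for every agent.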

Advanced internal communication (non‑mail)

POST /api/mq/publish – publish a message to a channel.

GET /api/mq/subscribe?channel=xxx – long-poll or WebSocket to receive messages.

This enables millisecond‑level Agent‑to‑Agent messaging while preserving the "write once, use everywhere" principle.
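The channel semantics behind those two endpoints can be sketched with a minimal in-memory bus; this is illustrative only, and a real deployment would back it with HTTP long-polling or WebSockets:

```python
from collections import defaultdict, deque

class MessageBus:
    """Minimal in-memory sketch of the /api/mq publish/subscribe endpoints."""
    def __init__(self):
        self.channels = defaultdict(deque)

    def publish(self, channel, message):
        self.channels[channel].append(message)
        return {"status": "published", "channel": channel}

    def poll(self, channel):
        # Long-polling analogue: drain whatever is queued for the channel.
        queue = self.channels[channel]
        messages = list(queue)
        queue.clear()
        return messages

bus = MessageBus()
bus.publish("alerts", {"from": "monitor-agent", "text": "CPU at 95%"})
print(bus.poll("alerts"))  # subscriber receives the queued message
print(bus.poll("alerts"))  # empty after drain: []
```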

Summary

Every vague natural‑language instruction (e.g., "fetch news", "send email") should map to a deterministic program that handles unstable, permission‑heavy, or repetitive work.

The LLM becomes a commander that only decides which API to call and how to summarize results, dramatically reducing token consumption and increasing success rates.

All Agents share a common toolbox; adding a new capability is as simple as writing a new API, after which every Agent can use it immediately.

The OpenClaw robot cluster thus behaves like a micro‑service architecture: each Agent is a lightweight decision unit backed by stable capability APIs, making debugging, scaling, and extension straightforward.

Written by Black & White Path

We are the beacon of the cyber world, a stepping stone on the road to security.