Java Rewrites OpenClaw: An Architecture‑Level Translation, Not a Simple Port
A Java team rebuilt the popular Node.js AI‑Agent platform OpenClaw from scratch, replacing AI‑generated “vibe code” with a carefully refactored architecture built on Spring AI, JobRunr, and Spring Modulith. The resulting JavaClaw can be run with just a few commands.
Java developers' six‑month observation of the AI‑Agent boom
In late 2025 the open‑source project Clawdbot (later OpenClaw) exploded on GitHub, amassing 145 000 stars and integrating 25 instant‑messaging channels, voice wake‑up, a live canvas, and mobile nodes. The original codebase is pure Node.js/TypeScript, which left Java developers watching, applauding, and then returning to Spring Boot.
Tearing down and rebuilding – vibe code vs. refined translation
The JobRunr team first used AI‑assisted “vibe code” to prototype a Java version, but CTO Ronald shut the IDE after ten minutes, calling the result “AI slop” – outdated dependencies and opaque design patterns. He then discarded the prototype, spent two weeks rewriting the system from scratch, producing 27 commits across three modules that constitute a true architectural translation rather than a line‑by‑line port.
The art of translation – Spring trio makes the Java edition cleaner
Reference 1: Spring AI – turning "API calls" into dependency‑injected beans
In OpenClaw the LLM is called via a raw HTTP request (e.g., axios.post(url, body)). Spring AI replaces this with an injected ChatClient bean:

@Autowired
private ChatClient chatClient;

public String ask(String prompt) {
    return chatClient.prompt().user(prompt).call().content();
}

This makes the LLM a first‑class Spring bean rather than an ad‑hoc HTTP call.
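To see the shape of that fluent call chain without pulling in Spring AI itself, here is a self‑contained sketch in which ChatClient is a local stand‑in interface (not the real Spring AI class) wrapping any String‑to‑String model call:

```java
import java.util.function.Function;

// Stand-in mimicking the fluent shape prompt().user(...).call().content().
// The real ChatClient comes from Spring AI; this stub exists only so the
// example compiles and runs on its own.
interface ChatClient {
    PromptSpec prompt();

    interface PromptSpec {
        PromptSpec user(String text);
        Response call();
    }

    interface Response {
        String content();
    }

    // Wraps any String -> String model function in the fluent API shape.
    static ChatClient of(Function<String, String> model) {
        return () -> new PromptSpec() {
            private String userText = "";
            public PromptSpec user(String text) { this.userText = text; return this; }
            public Response call() {
                String answer = model.apply(userText);
                return () -> answer;
            }
        };
    }
}

class AssistantService {
    private final ChatClient chatClient; // in Spring, this would be injected

    AssistantService(ChatClient chatClient) { this.chatClient = chatClient; }

    public String ask(String prompt) {
        return chatClient.prompt().user(prompt).call().content();
    }
}
```

Swapping the lambda for a real LLM-backed client changes nothing in AssistantService, which is exactly the point of making the model a dependency‑injected bean.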
Reference 2: JobRunr – from simple cron jobs to full task‑lifecycle management
OpenClaw’s Node.js cron library handles scheduling but lacks robust failure handling. JavaClaw adopts JobRunr, which manages task creation, retries, and state transitions (todo → in_progress → completed/awaiting_human_input). A task’s lifecycle is recorded in a Markdown file with YAML front‑matter, enabling version‑controlled, file‑based state.
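The lifecycle described above can be sketched as a tiny state machine. Class and method names here are invented for illustration, not JavaClaw's actual API, and the assumption that exhausted retries end in awaiting_human_input follows the states the article lists:

```java
// Illustrative task lifecycle: todo -> in_progress -> completed, with
// failures reverting to todo and retried up to three times.
enum TaskStatus { TODO, IN_PROGRESS, COMPLETED, AWAITING_HUMAN_INPUT }

class Task {
    TaskStatus status = TaskStatus.TODO;
    int attempts = 0;
    static final int MAX_ATTEMPTS = 3;

    /** Runs the task once; on failure, reverts to TODO until attempts run out. */
    void run(Runnable work) {
        status = TaskStatus.IN_PROGRESS;
        attempts++;
        try {
            work.run();
            status = TaskStatus.COMPLETED;
        } catch (RuntimeException e) {
            status = attempts < MAX_ATTEMPTS ? TaskStatus.TODO
                                             : TaskStatus.AWAITING_HUMAN_INPUT;
        }
    }
}
```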
Agent creates a task → TaskManager writes a Markdown file → JobRunr takes over scheduling
→ at the scheduled time, a TaskHandler fires → the TaskHandler calls the Agent to execute the task
→ success? mark completed
→ failure? revert to todo → automatic retry → at most three attempts

Reference 3: Spring Modulith – turning a monorepo into four compile‑time‑checked modules
The original Node.js monorepo contains 15 interdependent directories. JavaClaw restructures this into four modules (base, app, providers, plugins) with strict compile‑time boundaries enforced by Spring Modulith, preventing accidental cross‑module references.
JavaClaw/
├── base/ ← Agent engine, task management, tool & channel abstractions
├── app/ ← Spring Boot entry point, UI, chat channel
├── providers/ ← LLM providers: Anthropic, OpenAI, Ollama, Gemini
└── plugins/ ← Plugins: Telegram, Discord, Brave search, Playwright

Other translation highlights
Channel architecture reduced to a single interface
All inbound messages are represented by ChannelMessageReceivedEvent. Adding a new channel only requires implementing the interface and publishing the event, cutting maintenance from 25 integrations to 3 core channels.
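The single‑event design can be sketched in plain Java. The event name follows the article; the dispatcher below is a stand‑in for Spring's application event mechanism, and the channel class is an invented example:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Every inbound message, whatever the channel, becomes one shared event type.
record ChannelMessageReceivedEvent(String channel, String sender, String text) {}

// Minimal stand-in for Spring's ApplicationEventPublisher.
class EventBus {
    private final List<Consumer<ChannelMessageReceivedEvent>> listeners = new ArrayList<>();
    void subscribe(Consumer<ChannelMessageReceivedEvent> l) { listeners.add(l); }
    void publish(ChannelMessageReceivedEvent e) { listeners.forEach(l -> l.accept(e)); }
}

// Adding a channel reduces to translating its inbound payload into the event.
class TelegramChannel {
    private final EventBus bus;
    TelegramChannel(EventBus bus) { this.bus = bus; }
    void onTelegramUpdate(String user, String text) {
        bus.publish(new ChannelMessageReceivedEvent("telegram", user, text));
    }
}
```

The agent core subscribes once and never needs to know which messenger a message came from.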
Task storage as Markdown files
Instead of a database, tasks are stored as plain files such as workspace/tasks/2026-03-21/143022-提醒我买菜.md (“remind me to buy groceries”), with YAML front‑matter describing the task, its status, and a description. State changes are simple file edits, and git diff tracks task evolution.
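A reader of that format might be as small as the sketch below. A real implementation would use a YAML library; this illustrative parser (names invented, not JavaClaw's code) handles only flat key: value pairs between the --- fences:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Extracts the YAML front-matter block from a Markdown task file.
class TaskFile {
    static Map<String, String> frontMatter(String markdown) {
        Map<String, String> meta = new LinkedHashMap<>();
        String[] lines = markdown.split("\n");
        if (lines.length == 0 || !lines[0].equals("---")) return meta; // no front-matter
        for (int i = 1; i < lines.length && !lines[i].equals("---"); i++) {
            int colon = lines[i].indexOf(':');
            if (colon > 0) {
                meta.put(lines[i].substring(0, colon).trim(),
                         lines[i].substring(colon + 1).trim());
            }
        }
        return meta;
    }
}
```

Because state lives in text, moving a task from todo to completed is a one‑line edit that shows up cleanly in version control.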
Frontend rendered on the server
The original React SPA is replaced by an htmx + Bulma UI that serves HTML fragments from the server. JavaScript is optional; the UI works even when JavaScript is disabled, providing graceful degradation and network resilience.
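The difference from a SPA is that the server answers with ready‑to‑swap HTML rather than JSON. A minimal sketch of such fragment rendering, with invented Bulma‑style markup (not JavaClaw's actual templates), might look like:

```java
// Renders the HTML snippet an htmx request would swap into the page.
class ChatFragmentRenderer {
    // Basic HTML escaping so user text cannot inject markup.
    static String escape(String s) {
        return s.replace("&", "&amp;").replace("<", "&lt;").replace(">", "&gt;");
    }

    static String messageFragment(String author, String text) {
        return "<div class=\"message\">"
             + "<div class=\"message-header\">" + escape(author) + "</div>"
             + "<div class=\"message-body\">" + escape(text) + "</div>"
             + "</div>";
    }
}
```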
From zero to a running Java AI assistant
Clone the repository and start the application:
git clone https://github.com/jobrunr/JavaClaw.git
cd JavaClaw
./gradlew :app:bootRun

Then open localhost:8080/onboarding and follow the seven‑step wizard to configure the LLM provider, agent persona, MCP servers, and messaging channels. After configuration, the assistant can be accessed via localhost:8080/chat, Telegram, or Discord.
A typical daily workflow (“summarize new Gmail messages at 8 am”) triggers a cron job (0 8 * * *), creates a Markdown task, processes it through JobRunr, calls the LLM via Spring AI, and sends the result back through the selected channel. Failures automatically retry up to three times.
Agent writes a Markdown file under workspace/tasks/recurring/.
JobRunr registers the cron expression.
At 8 am the recurring handler creates a todo task.
JobRunr moves the task to in_progress.
Agent reads Gmail via MCP, summarizes with the LLM, and marks completed.
The summary is sent to the user via Telegram.
If any step fails, the task reverts to todo and is retried.
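What the cron expression 0 8 * * * means in practice is easy to state with java.time: the next trigger is today at 08:00 if that hasn't passed yet, otherwise tomorrow. JobRunr computes this internally; the sketch below (invented names) just makes the semantics concrete:

```java
import java.time.LocalDateTime;
import java.time.LocalTime;

// Next-run computation for a daily 08:00 schedule ("0 8 * * *").
class DailyTrigger {
    static LocalDateTime nextRun(LocalDateTime now) {
        LocalDateTime todayAt8 = now.toLocalDate().atTime(LocalTime.of(8, 0));
        return now.isBefore(todayAt8) ? todayAt8 : todayAt8.plusDays(1);
    }
}
```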
Comprehensive tests (TaskManagerTest, TelegramChannelTest, OnboardingControllerTest, JavaClawApplicationTests) verify the integration, ensuring the system is industrial‑grade rather than a prototype.
The value of the translation
The Java version targets developers familiar with dependency injection, compile‑time checks, and declarative configuration. By expressing the same concepts with Spring AI, JobRunr, and Spring Modulith, JavaClaw shows that a Java ecosystem can host a full‑featured AI‑Agent platform without resorting to the original Node.js toolchain.
In short, this is not a mere port; it is an architecture‑level translation that preserves the original’s functionality while adapting it to the idioms and strengths of the Java world.