Unlock Production‑Grade AI Agents with the OpenHarness Python Framework
This article introduces OpenHarness, an open‑source Python framework that simplifies building production‑grade AI agents. It covers the framework's lightweight core infrastructure, a detailed feature breakdown, an architecture overview, and sample code to help researchers and developers understand the system and build custom intelligent agents.
OpenHarness Overview
OpenHarness is an open‑source Python implementation of the Harness agent framework. It provides a lightweight, production‑grade infrastructure for building AI agents that can use tools, maintain contextual memory, enforce permission policies, and coordinate multiple agents.
Key Capabilities
Agent Loop – Streams model responses, detects tool calls, retries with exponential back‑off, executes tools in parallel, and tracks token usage.
Toolbox – Ships with more than 40 built‑in tools (file I/O, shell commands, web search, MCP protocol, etc.) and supports on‑demand loading of skill definitions from Markdown files.
Context Memory – Automatically injects a CLAUDE.md system prompt, compresses long contexts, persists knowledge in MEMORY.md, and restores sessions across runs.
Permission Governance – Multi‑level permission modes (default, automatic, scheduled), path‑based allow/deny rules, and interactive approval hooks for safe tool execution.
Group Coordination – Generates sub‑agents, maintains a team registry, and manages the lifecycle of background tasks for collaborative workflows.
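The retry behavior mentioned under Agent Loop can be sketched in plain Python. Everything below (function names, retry counts, the use of ConnectionError as the transient failure) is illustrative, not OpenHarness's actual implementation:

```python
import asyncio
import random

async def stream_with_retry(call, max_retries=3, base_delay=1.0):
    """Await an API call, retrying transient failures with exponential back-off.

    `call` is any zero-argument coroutine function; delays grow as
    base_delay * 2**attempt, with jitter to avoid synchronized retries.
    """
    for attempt in range(max_retries + 1):
        try:
            return await call()
        except ConnectionError:
            if attempt == max_retries:
                raise  # out of retries: surface the error to the caller
            delay = base_delay * (2 ** attempt) + random.uniform(0, 0.1)
            await asyncio.sleep(delay)

# Example: a flaky call that succeeds on its third attempt.
attempts = {"n": 0}

async def flaky():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise ConnectionError("transient")
    return "ok"

result = asyncio.run(stream_with_retry(flaky, base_delay=0.01))
```

The key design point is that the delay doubles on each attempt, so a briefly overloaded model endpoint gets progressively more breathing room before the loop gives up.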
Repository
https://github.com/HKUDS/OpenHarness
Core Architecture
The repository is organized into fourteen subsystems that together implement the agent framework:
openharness/
engine/ # Agent loop – query → stream → tool call → loop
tools/ # 40+ tools – file I/O, shell, search, web, MCP
skills/ # Knowledge base – load skill Markdown files on demand
plugins/ # Extension points – commands, hooks, custom agents, MCP server
permissions/ # Security – multi‑level modes, path rules, command denial
hooks/ # Lifecycle – pre/post tool‑use event hooks
commands/ # 54 built‑in commands (e.g., /help, /commit, /plan, /resume)
mcp/ # Model Context Protocol client implementation
memory/ # Persistent cross‑session memory store
tasks/ # Background task scheduler and manager
coordinator/ # Multi‑agent coordination – sub‑agent creation, team registry
prompts/ # System prompt assembly, CLAUDE.md injection, skill integration
config/ # Hierarchical configuration and migration utilities
ui/ # React‑based terminal UI (backend protocol + frontend)
Core Loop Logic
The main execution loop continuously streams model output, processes tool calls, and feeds results back to the model:
while True:
    response = await api.stream(messages, tools)
    if response.stop_reason != "tool_use":
        break  # Model finished its reasoning
    for tool_call in response.tool_uses:
        # Permission check → pre‑hook → tool execution → post‑hook → result
        result = await harness.execute_tool(tool_call)
        messages.append(result)
    # Continue the loop so the model can act on the new messages
Getting Started
After cloning the repository, the entire framework can be launched with a single command:
oh
This command starts the OpenHarness server and loads all default components, allowing immediate interaction with the agent via the built‑in CLI or programmatic API.
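Before wiring up a live model, the core loop's tool round trip can be simulated end to end with stubs. The class and function names below are illustrative stand-ins, not OpenHarness's actual API:

```python
import asyncio
from dataclasses import dataclass, field

@dataclass
class Response:
    stop_reason: str
    tool_uses: list = field(default_factory=list)
    text: str = ""

class StubModel:
    """Emits one tool call, then finishes once it sees the tool result."""
    async def stream(self, messages, tools):
        if any(m.get("role") == "tool" for m in messages):
            return Response(stop_reason="end_turn", text="2 files found")
        return Response(stop_reason="tool_use",
                        tool_uses=[{"name": "list_files", "input": {"path": "."}}])

async def execute_tool(tool_call):
    # Stand-in for permission check -> pre-hook -> tool execution -> post-hook.
    return {"role": "tool", "content": f"ran {tool_call['name']}"}

async def run_agent(model, messages, tools=()):
    while True:
        response = await model.stream(messages, tools)
        if response.stop_reason != "tool_use":
            return response  # model finished its reasoning
        # Execute all requested tools in parallel, as the agent loop does.
        results = await asyncio.gather(*(execute_tool(t) for t in response.tool_uses))
        messages.extend(results)  # feed tool results back to the model

final = asyncio.run(run_agent(StubModel(), [{"role": "user", "content": "list files"}]))
```

Running this drives exactly one query → tool call → result → final answer cycle, which is the same shape the real loop follows with a live model behind `api.stream`.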