How AI Agents Built 40K Lines of TypeScript in 8 Days – Inside the Agent Orchestrator

The article details how an AI‑driven Agent Orchestrator autonomously generated 40,000 lines of TypeScript code, managed 17 plugins, ran 3,288 tests, and achieved self‑healing CI in just eight days by orchestrating parallel AI agents, plugin architecture, and a feedback loop.


Background

The author had a backlog of issues and insufficient time, so multiple AI coding agents were run in parallel, each in its own Git worktree and tmux session. Human effort was limited to reviewing pull requests (PRs) and merging them, while the orchestrator handled coordination.

Architecture

The system is built around an AI‑driven orchestrator that manages eight interchangeable plugin slots and follows a defined session lifecycle:

Tracker: pulls an issue from GitHub or Linear.

Workspace: creates an isolated worktree or clone.

Runtime: starts a tmux session (or a process) for the agent.

Agent: runs a coding model such as Claude Code or Aider.

Terminal: provides live terminal access via iTerm2 or a web dashboard.

SCM: creates a PR with enriched context.

Reactions: automatically restarts agents on CI failures or review comments.

Notifier: pings a human only when a decision is required.

The plugin system allows any component (tracker, agent, runtime, notifier, etc.) to be swapped without changing the core orchestrator.
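The slot mechanism can be sketched as a small registry keyed by slot name. This is an illustrative sketch, not the orchestrator's actual API: the `TrackerPlugin` interface, `PluginRegistry` class, and `register`/`get` methods are all hypothetical names.

```typescript
// Hypothetical plugin-slot registry: any implementation can be swapped in
// without changing the core. Names and shapes are illustrative only.
interface TrackerPlugin {
  name: string;
  nextIssue(): Promise<{ id: string; title: string } | null>;
}

class PluginRegistry {
  private slots = new Map<string, unknown>();

  register(slot: string, plugin: unknown): void {
    // Last registration wins: swapping a plugin is just re-registering the slot.
    this.slots.set(slot, plugin);
  }

  get<T>(slot: string): T {
    const p = this.slots.get(slot);
    if (p === undefined) throw new Error(`no plugin registered for slot "${slot}"`);
    return p as T;
  }
}

const registry = new PluginRegistry();
registry.register("tracker", {
  name: "github",
  nextIssue: async () => ({ id: "42", title: "Fix flaky test" }),
} satisfies TrackerPlugin);

const tracker = registry.get<TrackerPlugin>("tracker");
```

Because the core only ever talks to a slot through its interface, replacing GitHub with Linear (or Claude Code with Aider) is a one-line registration change.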

Self‑healing CI

When a CI run fails, the orchestrator spawns an agent, injects the failure logs, and asks the agent to fix the issue and push a new commit. The reaction logic is configured in ao.config.yaml:

reactions:
  ci_failed:
    action: spawn_agent
    prompt: "CI failed on this PR. Read the logs and fix the issue."
  changes_requested:
    action: spawn_agent
    prompt: "Review comments received. Address each comment and push fixes."
  approved:
    action: notify
    channel: slack
    message: "PR approved and ready to merge."
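A minimal sketch of how such reaction entries might be dispatched once the config is loaded. The `Reaction` union and `handle` function are assumptions for illustration, not the orchestrator's real types:

```typescript
// Dispatching reactions loaded from ao.config.yaml (illustrative sketch).
type Reaction =
  | { action: "spawn_agent"; prompt: string }
  | { action: "notify"; channel: string; message: string };

const reactions: Record<string, Reaction> = {
  ci_failed: {
    action: "spawn_agent",
    prompt: "CI failed on this PR. Read the logs and fix the issue.",
  },
  approved: {
    action: "notify",
    channel: "slack",
    message: "PR approved and ready to merge.",
  },
};

function handle(event: string): string {
  const r = reactions[event];
  if (!r) return "ignored"; // unknown events fall through harmlessly
  return r.action === "spawn_agent"
    ? `spawn agent with prompt: ${r.prompt}`
    : `notify ${r.channel}: ${r.message}`;
}
```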

Automated code review cycle

Agent creates a PR and pushes code.

Cursor Bugbot automatically reviews the PR and posts inline comments.

Agent reads the comments, fixes the code, and pushes again.

Bugbot re‑reviews the updated PR.

This loop generated roughly 700 automated review comments; agents automatically resolved about 68% of the issues Bugbot reported.

Activity detection

Agents emit structured JSONL event files (e.g., agent-claude-code) that the orchestrator parses to determine whether the agent is generating tokens, waiting on a tool, idle, or completed. This avoids relying on the agent’s self‑reporting.
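Activity detection could look like the sketch below: read the JSONL tail and classify the agent from its most recent event. The event shape and thresholds here are assumptions, not the real agent-claude-code format:

```typescript
// Classify agent activity from a JSONL event stream (event shape is assumed).
type AgentEvent = { ts: number; type: "token" | "tool_call" | "done" };
type Status = "generating" | "waiting_on_tool" | "idle" | "completed";

function classify(jsonl: string, nowMs: number, idleAfterMs = 60_000): Status {
  const lines = jsonl.trim().split("\n").filter(Boolean);
  if (lines.length === 0) return "idle";

  // Only the most recent event matters for current status.
  const last = JSON.parse(lines[lines.length - 1]) as AgentEvent;
  if (last.type === "done") return "completed";
  if (nowMs - last.ts > idleAfterMs) return "idle"; // stale stream => stalled agent
  return last.type === "tool_call" ? "waiting_on_tool" : "generating";
}
```

Classifying from observed events rather than asking the agent sidesteps the self-reporting problem: a wedged agent cannot claim to be busy.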

Web dashboard

Implemented with Next.js 15 and Server‑Sent Events, the dashboard provides:

Attention zones: groups sessions by status (CI failure, awaiting review, running normally).

Live terminal: browser-based xterm.js view of the agent's terminal output.

Session detail: current file, recent commits, PR status, CI status.

Config discovery: automatically finds ao.config.yaml and lists available sessions.
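The attention-zone grouping is essentially a pure function over session state, which is what makes it easy to render from a Server-Sent Events stream. A sketch, with the session fields assumed rather than taken from the real dashboard:

```typescript
// Group sessions into attention zones (session fields are illustrative).
type Session = {
  id: string;
  ci: "passing" | "failing";
  review: "pending" | "approved" | null;
};

type Zone = "ci_failure" | "awaiting_review" | "running";

function zone(s: Session): Zone {
  if (s.ci === "failing") return "ci_failure";      // needs a fix first
  if (s.review === "pending") return "awaiting_review";
  return "running";
}

function groupByZone(sessions: Session[]): Record<Zone, string[]> {
  const zones: Record<Zone, string[]> = { ci_failure: [], awaiting_review: [], running: [] };
  for (const s of sessions) zones[zone(s)].push(s.id);
  return zones;
}
```

Keeping the grouping pure means the UI can recompute zones on every pushed event without any server round-trip.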

Self‑improving loop (ao‑52)

Each session records signals such as prompt effectiveness, CI failures, and merge conflicts. The ao‑52 subsystem aggregates these signals, learns which task specifications succeed on the first try, and adjusts future prompts and constraints, creating a recursive improvement cycle where agents build features, the orchestrator observes outcomes, and future agent tasks are refined.
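One way the aggregation step could work is tallying, per prompt template, how often sessions passed CI on the first try. The data model below is purely illustrative; the source does not describe ao-52's internals:

```typescript
// Aggregate per-session signals into a first-try success rate per prompt
// (data model is an assumption, not the real ao-52 schema).
type SessionSignal = { promptId: string; ciPassedFirstTry: boolean };

function firstTryRate(signals: SessionSignal[]): Map<string, number> {
  const tally = new Map<string, { ok: number; total: number }>();
  for (const s of signals) {
    const t = tally.get(s.promptId) ?? { ok: 0, total: 0 };
    t.total += 1;
    if (s.ciPassedFirstTry) t.ok += 1;
    tally.set(s.promptId, t);
  }
  const rates = new Map<string, number>();
  for (const [id, t] of tally) rates.set(id, t.ok / t.total);
  return rates;
}
```

Prompts with low first-try rates are the natural candidates for tightening constraints in future task specifications.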

Results

In eight days the system produced:

≈ 40,000 lines of TypeScript code

17 plugins

3,288 test cases

102 PRs (86 merged)

700 automated review comments (human comments ≈ 1%)

Overall CI success rate ≈ 84.6%

Each commit includes a Git trailer indicating which AI model authored it, providing clear attribution.
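Git trailers are the contiguous `Key: value` block at the end of a commit message, so attribution can be read back with a small parser. The trailer key used in the test below (`Agent-Model`) is a hypothetical example; the article does not name the actual key:

```typescript
// Parse Git-style trailers from the end of a commit message.
function parseTrailers(message: string): Record<string, string> {
  const trailers: Record<string, string> = {};
  const lines = message.trimEnd().split("\n");
  // Walk upward from the last line; trailers are the trailing contiguous block.
  for (let i = lines.length - 1; i >= 0; i--) {
    const m = /^([A-Za-z-]+):\s*(.+)$/.exec(lines[i]);
    if (!m) break; // first non-trailer line ends the block
    trailers[m[1]] = m[2];
  }
  return trailers;
}
```

In practice `git interpret-trailers --parse` does the same job from the command line.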

Getting started

git clone https://github.com/ComposioHQ/agent-orchestrator.git
cd agent-orchestrator
pnpm install && pnpm build
ao init --tracker github --agent claude-code --runtime tmux
ao start

After launching, the orchestrator creates agents, generates PRs, monitors CI, routes review feedback, and notifies the human only for high‑level decisions.

Open‑source resources

Repository: https://github.com/ComposioHQ/agent-orchestrator

Metrics report: https://github.com/ComposioHQ/agent-orchestrator/releases/tag/metrics-v1

Interactive visualizations: https://pkarnal.com/ao-labs/

Diagram of orchestrator architecture
Tags: code generation, AI agents, open-source, self-healing CI
Written by High Availability Architecture.