Redefining the Backend: How Workers, Triggers, and Functions Turn Agents into First-Class Workers

The article argues that the traditional separation between AI agent harnesses and back‑ends creates debugging complexity, and proposes redefining the backend with three primitives—worker, trigger, and function—so that agents become equivalent to services or queues, enabling real‑time discovery, scalable extensibility, and unified observability across heterogeneous components.


Problem Statement

Current AI infrastructure treats the agent harness as a separate layer from the backend, leading to fragmented traces, difficult debugging, and combinatorial growth of integration paths as agents are added (e.g., 1 agent + 5 services → 5 paths; 4 agents + 5 services → 80 paths).

Existing Harness Designs

Anthropic, OpenAI, CrewAI, and LangGraph each define a harness of varying thickness: Anthropic uses a thin prompt-model-tool loop; OpenAI adds instruction stacks and explicit hand-offs; CrewAI employs deterministic flows for routing; LangGraph encodes every decision as a graph node, making its harness the heaviest.

Assumption Challenge

The prevailing assumption is that the harness lives outside the traditional backend. The article argues this is temporary and that the harness should be considered part of the backend.

Proposed Backend Primitives

The author defines three fundamental primitives that replace the conventional backend:

Function: a stable, identifiable unit of work (e.g., orders::validate) that receives input and may return output, runnable in any process or language.

Trigger: the declarative mechanism that invokes a function. It can be an HTTP endpoint, cron schedule, queue subscription, state change, or stream event.

Worker: any process that connects to the engine and registers functions and triggers. Examples include a TypeScript API service, a Python ML pipeline, a Rust microservice, or an agent itself.
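The three primitives can be modeled in a few lines. The TypeScript sketch below is illustrative only; the type names and the Engine class are assumptions for exposition, not the iii.dev SDK:

```typescript
// Illustrative model of the three primitives; NOT the iii.dev SDK.
type FunctionId = string
type Handler = (data: any) => Promise<any>

// A trigger is declarative data: the same shape whether it is an HTTP
// endpoint, a cron schedule, a queue subscription, or a state change.
type Trigger =
  | { type: 'http'; function_id: FunctionId; config: { api_path: string; http_method: string } }
  | { type: 'cron'; function_id: FunctionId; config: { schedule: string } }
  | { type: 'queue'; function_id: FunctionId; config: { queue: string } }
  | { type: 'state'; function_id: FunctionId; config: { scope: string; condition: string } }

// The engine holds the shared catalog; a worker is any process that
// registers functions and triggers into it.
class Engine {
  private functions = new Map<FunctionId, Handler>()
  private triggers: Trigger[] = []

  registerFunction(id: FunctionId, fn: Handler): void {
    this.functions.set(id, fn)
  }

  registerTrigger(t: Trigger): void {
    this.triggers.push(t)
  }

  // A trigger firing reduces to a lookup-and-call on the catalog.
  async trigger(id: FunctionId, payload: any): Promise<any> {
    const fn = this.functions.get(id)
    if (!fn) throw new Error(`unknown function: ${id}`)
    return fn(payload)
  }
}
```

Under this model, invoking orders::validate from any worker is simply `engine.trigger('orders::validate', { qty: 3 })`, regardless of which trigger type fired it.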

Concrete Example (iii.dev)

// registerWorker and TriggerAction come from the iii SDK (import not shown)
const iii = registerWorker('ws://localhost:49134', { workerName: 'agentic-backend' })

iii.registerFunction('agents::researcher', async (data) => {
  // Python Worker: requests + duckduckgo-search
  const sources = await iii.trigger({
    function_id: 'web::search',
    payload: { query: data.topic, limit: 10 }
  })
  // Rust Worker: scraper + tokio, fetched in parallel
  const pages = await iii.trigger({
    function_id: 'web::scrape',
    payload: { urls: sources.map(s => s.url) }
  })
  // TypeScript Worker: wraps the OpenAI SDK
  const findings = await iii.trigger({
    function_id: 'llm::summarize',
    payload: { topic: data.topic, documents: pages }
  })
  // Persist findings as shared state
  await iii.trigger({
    function_id: 'state::set',
    payload: { scope: 'research-tasks', key: data.task_id, value: findings }
  })
  // Fire-and-forget: enqueue the critic agent instead of awaiting it
  iii.trigger({
    function_id: 'agents::critic',
    payload: { task_id: data.task_id },
    action: TriggerAction.Enqueue({ queue: 'agent-tasks' })
  })
  return findings
})

iii.registerTrigger({
  type: 'http',
  function_id: 'agents::researcher',
  config: { api_path: '/agents/research', http_method: 'POST' }
})

iii.registerTrigger({
  type: 'state',
  function_id: 'agents::researcher',
  config: { scope: 'research-tasks', condition: 'status == "pending"' }
})

This snippet shows how a single function can be bound to multiple triggers (HTTP endpoint and state change) without modifying the function itself.
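The binding list is open-ended. Assuming the engine also supports a cron trigger type (the article demonstrates only 'http' and 'state', so the config shape below is an assumption), a third binding could be added the same way, again without touching the function:

```typescript
// Hypothetical fragment: a cron binding for the same function. The 'cron'
// type and its config shape are assumptions; the source shows only
// 'http' and 'state' triggers.
iii.registerTrigger({
  type: 'cron',
  function_id: 'agents::researcher',
  config: { schedule: '0 * * * *' } // e.g., re-run hourly
})
```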

Benefits of the Primitive Model

Real-time discovery: when a worker connects, it receives the full catalog of functions from all workers; new functions are broadcast instantly.

Real-time scalability: new workers and capabilities can be added at runtime without redeployment or restarts.

Unified observability: every function and trigger call carries an OpenTelemetry trace ID, enabling end-to-end tracing across languages, workers, and queues.
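How a trace ID can follow a request across nested trigger calls is easy to sketch. The TracingEngine below is illustrative, not the iii.dev implementation; a real system would carry the context per OpenTelemetry conventions rather than a bare UUID:

```typescript
import { randomUUID } from 'crypto'

// Illustrative sketch, not iii.dev internals: each trigger call records a
// span and forwards the caller's trace ID instead of minting a new one.
type Span = { traceId: string; functionId: string }
type TracedHandler = (data: any, traceId: string) => Promise<any>

class TracingEngine {
  readonly spans: Span[] = []
  private fns = new Map<string, TracedHandler>()

  registerFunction(id: string, fn: TracedHandler): void {
    this.fns.set(id, fn)
  }

  async trigger(id: string, payload: any, traceId: string = randomUUID()): Promise<any> {
    this.spans.push({ traceId, functionId: id }) // one span per call
    const fn = this.fns.get(id)
    if (!fn) throw new Error(`unknown function: ${id}`)
    return fn(payload, traceId) // propagate the trace ID downstream
  }
}
```

A researcher function that fans out to web::search and llm::summarize would then leave spans sharing one trace ID, even when the handlers live in workers written in different languages.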

Category Collapse

Traditional platforms treat queues, HTTP, cron, and agents as separate categories with distinct integrations. In the primitive model, all are simply workers registering functions and triggers, collapsing categories into a single abstraction.

Recursive Workers and Sandboxing

Workers can create other workers, including microVM sandbox workers that provide hardware isolation. An agent can launch a sandbox worker, register its own functions, and later terminate it, treating sandbox creation as just another worker addition.
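This lifecycle can be sketched with an engine whose catalog is mutable at runtime. The code below is a self-contained illustration of the idea, not the iii.dev sandbox API; the sandbox::exec function and runWithSandbox helper are hypothetical:

```typescript
// Illustrative only: functions can appear and disappear at runtime,
// which is how a sandbox worker's capabilities come and go.
class DynamicEngine {
  private fns = new Map<string, (data: any) => Promise<any>>()

  registerFunction(id: string, fn: (data: any) => Promise<any>): void {
    this.fns.set(id, fn)
  }

  removeFunction(id: string): void {
    this.fns.delete(id) // e.g., the sandbox worker terminated
  }

  has(id: string): boolean {
    return this.fns.has(id)
  }

  async trigger(id: string, payload: any): Promise<any> {
    const fn = this.fns.get(id)
    if (!fn) throw new Error(`unknown function: ${id}`)
    return fn(payload)
  }
}

// An agent "spawns a sandbox" by registering its functions, uses them,
// then tears them down; to the engine it is just another worker coming
// and going. All names here are hypothetical.
async function runWithSandbox(engine: DynamicEngine): Promise<any> {
  engine.registerFunction('sandbox::exec', async (d) => `ran: ${d.cmd}`)
  const out = await engine.trigger('sandbox::exec', { cmd: 'pytest' })
  engine.removeFunction('sandbox::exec')
  return out
}
```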

Design‑Space Implications

When harnesses are built from the same primitives as the backend, the distinction between “thin” and “thick” harnesses reduces to how many functions are registered and how they are composed. Removing scaffolding merely means deleting or recombining functions, not redesigning integration layers.

Conclusion

By redefining the backend with the three primitives—worker, trigger, function—any component (service, queue, agent, browser, edge device) can participate uniformly, providing real‑time discovery, scaling, and observability. The open‑source project iii.dev demonstrates this approach and offers SDKs for TypeScript, Python, and Rust.

Republication Notice

This article has been distilled and summarized from source material, then republished for learning and reference. If you believe it infringes your rights, please contact admin@besthub.dev and we will review it promptly.

Tags: trigger, AI infrastructure, agent architecture, function, worker, backend primitives
Written by High Availability Architecture.
