How Claude Code’s Agentic Loop Works: Four Layers from QueryEngine to UI

The article breaks down Claude Code’s persistent agentic loop into four layers—QueryEngine, Tool System, Permission/Hook, and React + Ink—explaining how each turn gathers context, makes model decisions, executes actions, verifies results, handles errors, and renders a terminal UI.

AI Step-by-Step

Claude Code runs a persistent Agentic Loop: the model evaluates the current state, issues tool calls, the runtime validates and executes them, and the results are fed back to the model for the next turn. Complex engineering tasks may require dozens of turns, relying on the loop’s ability to manage context growth, permission checks, tool failures, API rate limits, and user interruptions.

Four Layers of the Main Loop

1. Turn Sequencing

Each turn consists of three steps:

Model decision: using the system prompt, tool schemas, conversation history, and project context, the model decides the next action.

Runtime execution: the Tool System validates parameters, applies permission rules, and runs the requested action (file read, edit, Bash command, sub‑agent, etc.).

Feedback: the tool’s output, error, or rejection is packaged as a message for the next turn.

The loop terminates when the model produces a response without any tool calls, at which point it emits the final result, token usage, cost, and session ID.
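The three-step turn cycle and its termination condition can be sketched in TypeScript. The type names and callbacks below (ModelDecision, runTool, and so on) are illustrative stand-ins, not Claude Code's real internals:

```typescript
// One agentic turn: model decision -> runtime execution -> feedback.
// The loop ends when a decision contains no tool calls.

type ToolCall = { name: string; input: Record<string, unknown> };
type ModelDecision = { text: string; toolCalls: ToolCall[] };
type Message = { role: "assistant" | "tool"; content: string };

function runLoop(
  decide: (history: Message[]) => ModelDecision,
  runTool: (call: ToolCall) => string,
  maxTurns = 10,
): { result: string; turns: number } {
  const history: Message[] = [];
  for (let turn = 1; turn <= maxTurns; turn++) {
    const decision = decide(history);                     // 1. model decision
    history.push({ role: "assistant", content: decision.text });
    if (decision.toolCalls.length === 0) {
      return { result: decision.text, turns: turn };      // terminate: no tool calls
    }
    for (const call of decision.toolCalls) {
      const output = runTool(call);                       // 2. runtime execution
      history.push({ role: "tool", content: output });    // 3. feedback for next turn
    }
  }
  return { result: "max turns reached", turns: maxTurns };
}
```

Note that "done" is not a special signal: the absence of tool calls is itself the exit condition.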

2. Gather Context

Every turn is supplied with a complete view of the engineering state: system prompt, tool definitions, conversation history, project rules, skill descriptions, and required context files (e.g., CLAUDE.md, tool schemas). Context accumulates within the session; reads, command outputs, tool results, and prior model judgments all influence subsequent turns. When the token window approaches its limit, older history is compressed into a summary that preserves recent actions and key decisions.
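A minimal sketch of that compression step, assuming a crude character-based token estimate and a placeholder summarizer (the real heuristics differ):

```typescript
// Compress older history into a summary once the token window nears its limit,
// keeping the most recent messages intact.

type Msg = { role: string; content: string };

// Rough token estimate: ~4 characters per token (an assumption for illustration).
const countTokens = (msgs: Msg[]) =>
  msgs.reduce((n, m) => n + Math.ceil(m.content.length / 4), 0);

function compressHistory(history: Msg[], limit: number, keepRecent = 2): Msg[] {
  if (countTokens(history) <= limit) return history;      // under budget: no-op
  const recent = history.slice(-keepRecent);              // preserve recent actions
  const older = history.slice(0, -keepRecent);
  const summary: Msg = {
    role: "system",
    content: `Summary of ${older.length} earlier messages: key decisions preserved.`,
  };
  return [summary, ...recent];
}
```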

3. Take Action – Tool System and Permission Layer

The model only proposes an intent; the Tool System carries out the action. Each tool is defined by a name, input schema, permission declaration, execution function, and result format. At runtime the system looks up the implementation, validates inputs, and checks whether execution is allowed.

The permission layer intercepts calls based on risk level (read file, search code, write file, run command, external service). Allow/deny rules, permission modes, and hook callbacks are evaluated before execution. Rejected calls are not discarded; they are transformed into structured rejection messages that return to the model, allowing it to handle them like a “file not found” error.

Action entry: Read/Glob/Grep, Edit/Write, Bash, Agent/Skill

Runtime handling: path/parameter validation, permission checks, command rule matching, sub‑agent launch

Feedback to model: file contents, match lists, diffs, stdout/stderr, exit codes, rejection reasons
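The key design point is that a denial is a result, not an exception. A hedged sketch, with illustrative rule and tool shapes:

```typescript
// Permission evaluation that converts a denial into a structured rejection
// message flowing back to the model, instead of throwing.

type RiskLevel = "read" | "write" | "command";
type Tool = { name: string; risk: RiskLevel };
type Rule = { risk: RiskLevel; allow: boolean };

type ToolResult =
  | { kind: "ok"; output: string }
  | { kind: "rejected"; reason: string };

function executeWithPermissions(
  tool: Tool,
  rules: Rule[],
  run: () => string,
): ToolResult {
  const rule = rules.find((r) => r.risk === tool.risk);
  if (rule && !rule.allow) {
    // The model can handle this like a "file not found" error.
    return { kind: "rejected", reason: `Permission denied for ${tool.name} (${tool.risk}).` };
  }
  return { kind: "ok", output: run() };
}
```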

4. Verify Results – Self‑Correction

After each tool execution the model receives concrete evidence: test output, exit code, file diff, search results, tool errors, permission denials, or API retry events. This evidence moves the system to a new engineering state (e.g., file modified, test failed, permission denied, budget near limit), and the model decides the next step based on that state.

Verification is integrated into every action: reading a file validates assumptions, running a command validates changes, searching validates impact, and hooks validate dangerous actions. Thus the model’s output is grounded in observable facts rather than pure text suggestions.
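The evidence-to-state step can be pictured as a simple classifier. The state names mirror the article's examples; the mapping itself is an illustrative sketch:

```typescript
// Map concrete tool evidence to the engineering state the model reasons over.

type Evidence =
  | { kind: "exit"; code: number }       // command or test run
  | { kind: "diff"; changedFiles: number }
  | { kind: "denied" };                  // permission rejection

type EngineeringState =
  | "tests_passed"
  | "tests_failed"
  | "files_modified"
  | "permission_denied";

function toState(e: Evidence): EngineeringState {
  switch (e.kind) {
    case "exit":
      return e.code === 0 ? "tests_passed" : "tests_failed";
    case "diff":
      return "files_modified";
    case "denied":
      return "permission_denied";
  }
}
```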

Error Interception and Translation

Errors are captured, classified, and translated into structured feedback instead of causing the process to abort. Examples:

ENOENT becomes "The file you tried to read does not exist; please check the path."

Permission denials become rejection messages.

Transient API errors trigger retry logic.

Turn or budget limits generate a ResultMessage with a subtype.

Four error exit paths are defined:

Continuable error – tool returns failure; the model adjusts strategy in the next turn.

Waitable error – API rate‑limit or connection error; retry logic exposes a wait state.

Explainable termination – max turns, budget, or retry limit reached; ResultMessage includes a subtype.

Unrecoverable interruption – cancellation, runtime crash, or unhandled exception ends the session with an execution error.
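Routing a caught error into these four paths can be sketched as a classifier. The error shape here (code, status, fatal fields) is an assumption for illustration:

```typescript
// Classify a caught error into one of the four exit paths described above.

type ExitPath =
  | "continuable"
  | "waitable"
  | "explainable_termination"
  | "unrecoverable";

function classifyError(err: { code?: string; status?: number; fatal?: boolean }): ExitPath {
  if (err.fatal) return "unrecoverable";                            // crash, cancellation
  if (err.status === 429 || err.status === 529) return "waitable";  // rate limit
  if (err.code === "MAX_TURNS" || err.code === "BUDGET")
    return "explainable_termination";                               // limits reached
  return "continuable";                   // e.g. ENOENT: model adjusts next turn
}
```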

Persistent Retry for Long‑Running Tasks

Long‑running agents encounter network jitter, service throttling, OAuth expiration, 5xx gateway errors, or connection resets. Persistent retry distinguishes error types, refreshes authentication for 401/403, respects rate‑limit headers for 429/529, and applies exponential back‑off. During extended waits the loop reports system status to avoid host‑process idle detection. When recovery occurs, the same session resumes with preserved user description, read files, and completed verification steps.
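A minimal sketch of that retry policy, assuming simplified status codes on the thrown error; the real logic also reads rate-limit headers and reports wait status to the UI:

```typescript
// Persistent retry with exponential back-off and auth refresh on 401/403.

async function withRetry<T>(
  attempt: () => Promise<T>,
  refreshAuth: () => Promise<void>,
  maxRetries = 5,
  baseDelayMs = 100,
): Promise<T> {
  for (let i = 0; ; i++) {
    try {
      return await attempt();
    } catch (err: any) {
      if (i >= maxRetries) throw err;            // give up: explainable termination
      if (err.status === 401 || err.status === 403) await refreshAuth();
      const delay = baseDelayMs * 2 ** i;        // exponential back-off
      await new Promise((r) => setTimeout(r, delay));
    }
  }
}
```

Because the retry wraps only the API call, the surrounding session state (user description, read files, verification progress) is untouched and simply resumes on success.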

React + Ink UI Rendering

The terminal UI is built as a React application rendered with Ink. AppState holds runtime state; React context and hooks distribute it to terminal components (input box, message list, permission prompts, tool progress, status bar, diff view, error alerts). The core loop only exchanges messages and shared state objects; it does not draw UI, and the UI does not execute tools.

This separation makes interaction details maintainable: tool progress, permission waits, API retries, and context compression each trigger specific UI updates without scattering logic across print statements.
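The loop/UI contract can be pictured as a small shared store: the loop updates state and the UI only subscribes and renders. This plain-TypeScript sketch stands in for the React context and hooks the real app uses:

```typescript
// Shared state object: the core loop calls update(); UI components subscribe.
// Neither side crosses the boundary (the loop never draws, the UI never runs tools).

type AppState = { status: string; messages: string[] };
type Listener = (s: AppState) => void;

function createStore(initial: AppState) {
  let state = initial;
  const listeners: Listener[] = [];
  return {
    get: () => state,
    subscribe: (l: Listener) => listeners.push(l),
    update: (patch: Partial<AppState>) => {
      state = { ...state, ...patch };
      listeners.forEach((l) => l(state));   // each update triggers a re-render
    },
  };
}
```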

Takeaways for Building Robust Agent Systems

Separating the loop into four concerns—context construction, action execution, result verification, and error translation—allows the model to focus on decision making while the runtime provides real‑world feedback in a form the model can consume. Production‑grade terminal agents must continue processing after failures, exposing structured error information and maintaining session state, unlike demo‑level agents that simply abort or return opaque error messages.

Example: a failed test run flowing through the loop:

tool_call: Bash("npm test")
runtime:
  exit_code: 1
  stderr: "AuthService should reject expired token..."
loop feedback:
  type: "tool_result"
  translation: "exit_code + stderr → readable failure summary"
  content: "Test failed, failure cases and stack trace..."
next turn:
  model reads failure → opens auth.ts → edits condition → reruns tests
Republication Notice

This article has been distilled and summarized from source material, then republished for learning and reference. If you believe it infringes your rights, please contact admin@besthub.dev and we will review it promptly.
