How Claude Code Reads 10 Files Simultaneously – Full Dissection of Its Tool Concurrency Engine
The article analyzes Claude Code's dual execution engines—StreamingToolExecutor and batch orchestrator—detailing read/write separation, concurrency‑safety checks, Bash error cascade cancellation, progress‑message channels, abort‑controller hierarchy, and ordered result collection that together enable ten files to be read in parallel while preserving correctness and UI responsiveness.
When using Claude Code to analyze a legacy project, a typical request is to read all components in the src directory and search for deprecated APIs. Instead of processing the ten Read calls and five Grep calls sequentially, Claude Code dispatches all read‑only operations concurrently, completing them in seconds.
1. Two Execution Engines, Separate Responsibilities
Claude Code contains two tool-execution engines:
- StreamingToolExecutor – a streaming executor that runs each tool as soon as it arrives from the model, enabling real-time progress.
- toolOrchestration – a batch orchestrator that first partitions a known set of tool calls, then executes them batch-wise. It also supports a Context Modifier that can adjust the execution context before a batch runs.
The streaming executor optimizes for latency and responsiveness; the batch orchestrator optimizes for scheduling efficiency.
2. StreamingToolExecutor – Run While Streaming
The core idea is: “When a tool streams in, decide instantly whether it can run, and if so, run it.” Each incoming tool is wrapped in a TrackedTool object that moves through a four‑stage state machine:
```typescript
type ToolStatus = 'queued' | 'executing' | 'completed' | 'yielded';
```

- queued: the tool is enqueued, waiting for its execution conditions.
- executing: the tool is currently running.
- completed: execution finished, result ready.
- yielded: the result has been consumed by the caller.
The transition is strictly forward: queued → executing → completed → yielded.
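The forward-only lifecycle can be sketched as a small state machine. This is a minimal sketch; the class shape and the advanceTo helper are assumptions for illustration, not Claude Code's actual implementation:

```typescript
type ToolStatus = 'queued' | 'executing' | 'completed' | 'yielded';

// The lifecycle in order; transitions may only move one step forward.
const ORDER: ToolStatus[] = ['queued', 'executing', 'completed', 'yielded'];

class TrackedTool {
  status: ToolStatus = 'queued';

  // Returns true if the transition is exactly one step forward; rejects
  // backward moves and stage-skipping.
  advanceTo(next: ToolStatus): boolean {
    if (ORDER.indexOf(next) !== ORDER.indexOf(this.status) + 1) return false;
    this.status = next;
    return true;
  }
}

const tool = new TrackedTool();
console.assert(tool.advanceTo('executing'));  // one step forward: allowed
console.assert(!tool.advanceTo('queued'));    // backward: rejected
console.assert(tool.advanceTo('completed') && tool.advanceTo('yielded'));
```

A strictly forward machine means a result can never be "un-completed", which keeps the result collectors described later simple.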
Concurrency‑Safety Decision
The executor calls canExecuteTool(isConcurrencySafe) for every new tool:
```typescript
private canExecuteTool(isConcurrencySafe: boolean): boolean {
  const executingTools = this.tools.filter(t => t.status === 'executing');
  return (
    executingTools.length === 0 ||
    (isConcurrencySafe && executingTools.every(t => t.isConcurrencySafe))
  );
}
```

In plain language: a tool can run if no other tool is executing, or if all currently executing tools are marked as concurrency-safe and the new tool is also concurrency-safe. By default, tools are not safe (isConcurrencySafe: false), so they must explicitly declare safety. Read-only tools like Read, Glob, and Grep are marked safe; tools with side effects such as Edit, Write, and Bash are not.
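The decision logic can be exercised standalone. A minimal sketch of the same admission check as a pure function (the names here are illustrative, not the real internals):

```typescript
interface Executing { isConcurrencySafe: boolean }

// Mirrors the admission rule: admit if nothing is running, or if both the
// newcomer and everything in flight are concurrency-safe.
function canExecute(executing: Executing[], isConcurrencySafe: boolean): boolean {
  return (
    executing.length === 0 ||
    (isConcurrencySafe && executing.every(t => t.isConcurrencySafe))
  );
}

// Nothing running: anything may start, even an unsafe Write.
console.assert(canExecute([], false));
// Two safe Reads in flight: another safe Read may join them.
console.assert(canExecute([{ isConcurrencySafe: true }, { isConcurrencySafe: true }], true));
// A safe Read in flight: an unsafe Edit must wait.
console.assert(!canExecute([{ isConcurrencySafe: true }], false));
```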
Bash Error Cascade Cancellation
When a Bash tool fails, the executor triggers a sibling‑abort that cancels all other Bash‑related tools:
```typescript
// Only Bash errors cancel siblings. Bash commands often have implicit
// dependency chains (e.g. mkdir fails → subsequent commands pointless).
if (tool.block.name === BASH_TOOL_NAME) {
  this.hasErrored = true;
  this.erroredToolDescription = this.getToolDescription(tool);
  this.siblingAbortController.abort('sibling_error');
}
```

Read-only tools are independent, so a failure in one does not affect the others.
Progress Messages on a Separate Channel
TrackedTool stores pendingProgress separately from results. Progress updates are pushed immediately via progressAvailableResolve, allowing UI feedback long before the final result arrives.
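The separate-channel idea can be sketched as a small buffer with a wake-up resolver. This is an assumed shape built around the progressAvailableResolve name mentioned above; the real class is not shown in the article:

```typescript
class ProgressChannel {
  pendingProgress: string[] = [];
  private progressAvailableResolve: (() => void) | null = null;

  // A consumer parks here; the next pushProgress wakes it immediately.
  waitForProgress(): Promise<void> {
    return new Promise(resolve => { this.progressAvailableResolve = resolve; });
  }

  pushProgress(msg: string): void {
    this.pendingProgress.push(msg);     // buffered separately from the result
    this.progressAvailableResolve?.();  // wake any waiter right away
    this.progressAvailableResolve = null;
  }

  // Returns everything buffered so far and empties the queue.
  drainProgress(): string[] {
    return this.pendingProgress.splice(0);
  }
}

const ch = new ProgressChannel();
ch.pushProgress('read 3/10 files');
ch.pushProgress('read 7/10 files');
// The UI can drain and render these before any final result exists.
console.assert(ch.drainProgress().length === 2);
```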
Streaming Fallback – Discard All
If the model changes its mind mid‑stream, the executor calls discard() to mark the whole operation as discarded, causing getCompletedResults and getRemainingResults to ignore all results:
```typescript
discard(): void { this.discarded = true; }
```

3. Batch Orchestration – Partition and Schedule
The batch orchestrator’s partitionToolCalls groups consecutive concurrency‑safe tools into a single batch and isolates side‑effect tools into their own batches:
```typescript
function partitionToolCalls(toolUseMessages, toolUseContext): Batch[] {
  return toolUseMessages.reduce((acc, toolUse) => {
    const isConcurrencySafe = /* determine concurrency safety */;
    if (isConcurrencySafe && acc[acc.length - 1]?.isConcurrencySafe) {
      acc[acc.length - 1]!.blocks.push(toolUse); // merge into current read-only batch
    } else {
      acc.push({ isConcurrencySafe, blocks: [toolUse] }); // start new batch
    }
    return acc;
  }, []);
}
```

Example sequence: Read A, Read B, Grep C, Edit D, Read E, Write F yields four batches:
Batch 1 – Read A, Read B, Grep C (concurrent)
Batch 2 – Edit D (serial)
Batch 3 – Read E (concurrent, single)
Batch 4 – Write F (serial)
The split occurs because Edit D breaks the read‑only chain, forcing the following Read E into a new batch.
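A runnable sketch of the partitioning rule reproduces the four batches above. The tool names and the safety table are illustrative assumptions; only the reduce-based grouping mirrors the snippet shown earlier:

```typescript
interface Batch { isConcurrencySafe: boolean; blocks: string[] }

// Assumed safety table: read-only tools are safe, side-effect tools are not.
const SAFE = new Set(['Read', 'Glob', 'Grep']);

function partition(calls: string[]): Batch[] {
  return calls.reduce<Batch[]>((acc, call) => {
    const safe = SAFE.has(call.split(' ')[0]);
    const last = acc[acc.length - 1];
    if (safe && last?.isConcurrencySafe) {
      last.blocks.push(call); // extend the current read-only batch
    } else {
      acc.push({ isConcurrencySafe: safe, blocks: [call] }); // start a new batch
    }
    return acc;
  }, []);
}

const batches = partition(['Read A', 'Read B', 'Grep C', 'Edit D', 'Read E', 'Write F']);
console.assert(batches.length === 4);            // exactly the four batches above
console.assert(batches[0].blocks.length === 3);  // Read A, Read B, Grep C together
console.assert(!batches[1].isConcurrencySafe);   // Edit D runs alone, serially
```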
Maximum Concurrency
The upper limit is controlled by getMaxToolUseConcurrency(), which reads the environment variable CLAUDE_CODE_MAX_TOOL_USE_CONCURRENCY (default 10):
```typescript
function getMaxToolUseConcurrency(): number {
  return parseInt(process.env.CLAUDE_CODE_MAX_TOOL_USE_CONCURRENCY || '', 10) || 10;
}
```

runToolsConcurrently then uses this limit with a custom all primitive (similar to Promise.all but bounded).
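A bounded all can be sketched with a worker-pool pattern. The real primitive inside Claude Code is not shown in the article, so this is one plausible implementation under that assumption:

```typescript
// Runs at most `limit` tasks at once; resolves with results in call order,
// like Promise.all, regardless of completion order.
async function boundedAll<T>(tasks: Array<() => Promise<T>>, limit: number): Promise<T[]> {
  const results: T[] = new Array(tasks.length);
  let next = 0;

  // Each worker repeatedly claims the next unclaimed task, so the number of
  // in-flight tasks never exceeds the number of workers.
  async function worker(): Promise<void> {
    while (next < tasks.length) {
      const i = next++;
      results[i] = await tasks[i]();
    }
  }

  await Promise.all(Array.from({ length: Math.min(limit, tasks.length) }, worker));
  return results;
}

boundedAll([1, 2, 3].map(n => () => Promise.resolve(n * 2)), 2)
  .then(r => console.log(r)); // logs [ 2, 4, 6 ]
```

Taking tasks as thunks (`() => Promise<T>`) rather than promises is essential here: a promise starts running the moment it is created, so only deferred construction lets the pool actually cap concurrency.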
Context Modifier Queue
Read‑only tools that need to modify the execution context are queued; the modifications are applied only after the entire read‑only batch finishes, preventing interference among parallel tools.
4. Three‑Level AbortController Design
Claude Code employs a hierarchy of AbortController objects:
parent AbortController (toolUseContext.abortController)
└── siblingAbortController (Bash error cascade)
    └── per-tool AbortController (individual tool)

The three abort reasons are:
- sibling_error: triggered only by Bash failures; aborts all sibling tools.
- user_interrupted: manual cancellation (e.g., Ctrl+C); propagates to all running tools.
- streaming_fallback: the model changes its mind; every result is discarded.
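The hierarchy can be built with standard AbortController chaining, where aborts propagate downward only. The childOf helper is an assumption for illustration; the structure follows the tree above:

```typescript
// Creates a controller that aborts whenever its parent signal aborts,
// carrying the parent's reason downward. Aborting the child does not
// affect the parent.
function childOf(parent: AbortSignal): AbortController {
  const child = new AbortController();
  if (parent.aborted) child.abort(parent.reason);
  else parent.addEventListener('abort', () => child.abort(parent.reason), { once: true });
  return child;
}

const parent = new AbortController();   // user_interrupted lives here
const sibling = childOf(parent.signal); // sibling_error lives here
const perTool = childOf(sibling.signal); // one per running tool

sibling.abort('sibling_error');          // a Bash failure...
console.assert(perTool.signal.aborted);  // ...cancels the sibling's tools
console.assert(!parent.signal.aborted);  // ...but not the whole session
```

The one-way propagation is the point of the design: a Bash cascade kills its subtree, while a Ctrl+C at the root reaches everything.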
Interrupt Behavior: cancel vs. block
Tools can define interruptBehavior() returning either 'cancel' (interruptible) or 'block' (must finish before stopping). Operations that could leave inconsistent state—such as a partially written file—use 'block', similar to database WAL semantics.
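A sketch of how an interrupt handler might consult interruptBehavior(). The surrounding types and the interrupt function are assumptions; only the 'cancel'/'block' contract comes from the article:

```typescript
type InterruptBehavior = 'cancel' | 'block';

interface RunningTool {
  name: string;
  interruptBehavior(): InterruptBehavior;
  controller: AbortController;
}

// Cancels every interruptible tool immediately and returns the names of
// tools that must be allowed to finish first.
function interrupt(tools: RunningTool[]): string[] {
  const mustFinish: string[] = [];
  for (const tool of tools) {
    if (tool.interruptBehavior() === 'cancel') {
      tool.controller.abort('user_interrupted'); // safe to stop mid-flight
    } else {
      mustFinish.push(tool.name); // e.g. a half-written file must complete
    }
  }
  return mustFinish;
}

const running: RunningTool[] = [
  { name: 'Read', interruptBehavior: () => 'cancel', controller: new AbortController() },
  { name: 'Write', interruptBehavior: () => 'block', controller: new AbortController() },
];
console.assert(interrupt(running).join('') === 'Write'); // Write must finish
console.assert(running[0].controller.signal.aborted);    // Read was cancelled
```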
5. Result Collection – Balancing Order and Real‑Time Feedback
Concurrent execution can reorder completion. To preserve the user-visible order, Claude Code provides two generators:
- getCompletedResults() yields results in the original call order, skipping unfinished tools.
- getRemainingResults() waits for either a tool to finish or a progress message using Promise.race, ensuring UI updates as soon as possible.
```typescript
*getCompletedResults(): Generator<MessageUpdate, void> { ... }

async *getRemainingResults(): AsyncGenerator<MessageUpdate, void> {
  // Promise.race: tool completion vs. progress message
  await Promise.race([...executingPromises, progressPromise]);
}
```

Result priority is clear: progress messages are emitted first, then full results are yielded in call order, guaranteeing ordered output while keeping the UI responsive.
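The call-order guarantee of the first generator can be sketched synchronously. The Tracked shape is an assumption; the point is that iteration follows issue order, not completion order:

```typescript
interface Tracked { id: number; status: 'completed' | 'executing'; result?: string }

// Walk tools in the order they were issued, yielding finished results and
// skipping tools that are still running.
function* completedInCallOrder(tools: Tracked[]): Generator<string> {
  for (const t of tools) {
    if (t.status === 'completed' && t.result !== undefined) yield t.result;
  }
}

const tools: Tracked[] = [
  { id: 1, status: 'completed', result: 'A' },
  { id: 2, status: 'executing' },              // still running: skipped for now
  { id: 3, status: 'completed', result: 'C' }, // finished early, still yielded after #1
];
console.assert([...completedInCallOrder(tools)].join('') === 'AC');
```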
Conclusion
The deep dive reveals three core design decisions:
- Read/write separation: all tools default to non-concurrency-safe; only those explicitly marked safe participate in parallel execution, favoring correctness over raw speed.
- Bash cascade cancellation: only Bash errors trigger sibling abort, reflecting real-world command dependencies.
- Ordered output via generators: non-blocking progress channels and a call-order-preserving result collector solve the classic out-of-order problem of concurrent tool execution.
These patterns illustrate that AI‑agent tool concurrency is far more sophisticated than a naïve Promise.all and will become a decisive factor as agents grow more capable.
Shuge Unlimited