Designing AI Agents: Balancing Tools, Action Space, and Progressive Disclosure

This article shares practical insights on designing AI agents: action‑space choices, tool‑selection strategies, progressive disclosure techniques, and evolving task management, with an emphasis on observation, experimentation, and matching tool complexity to model capabilities.

High Availability Architecture

Introduction

Effective AI‑agent design requires balancing prompt caching, tool calling, and context management. A progressive‑disclosure mechanism loads configuration files, whether JSON or Markdown, only when they are needed, reducing token consumption and treating the file system as the agent's natural interface.

Action Space and Tool Choices

The hardest part of building an agent harness is defining its action space. Claude can act via tool calling, which may be Bash scripts, predefined skills, or the newer code‑execution capability. When many possible tools exist, the designer must decide between a single universal tool (e.g., code execution) and a set of specialized tools. The decision should be guided by observing the model's behavior, reading its output, and iterating experimentally.
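The trade‑off can be made concrete with tool specifications. The sketch below contrasts a single universal code‑execution tool with a pair of specialized tools; the names and schemas are illustrative, loosely modeled on the common `name`/`description`/`input_schema` tool‑calling format, not copied from any real harness.

```python
# Hypothetical tool specs: one universal tool vs. two specialized tools.
# All names and descriptions are illustrative assumptions.

universal_tool = {
    "name": "run_code",
    "description": "Execute Python in a sandbox and return stdout.",
    "input_schema": {
        "type": "object",
        "properties": {"code": {"type": "string"}},
        "required": ["code"],
    },
}

specialized_tools = [
    {
        "name": "read_file",
        "description": "Return the contents of the file at the given path.",
        "input_schema": {
            "type": "object",
            "properties": {"path": {"type": "string"}},
            "required": ["path"],
        },
    },
    {
        "name": "grep",
        "description": "Search files for a regex and return matching lines.",
        "input_schema": {
            "type": "object",
            "properties": {"pattern": {"type": "string"}},
            "required": ["pattern"],
        },
    },
]
```

A universal tool keeps the tool list short but pushes complexity into the model's generated code; specialized tools constrain each action but grow the list the model must reason over.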

Improving the AskUserQuestion Tool

Attempt #1 – Modify ExitPlanTool

Adding a parameter to ExitPlanTool to output a plan together with follow‑up questions caused ambiguity: Claude had to generate both a plan and questions in one call, leading to confusion about whether the tool needed to be invoked twice.

Attempt #2 – Change Output Format

Forcing Claude to emit questions in a slightly altered Markdown list (e.g., bracketed options) required minimal code changes but proved unstable. Claude frequently added extra sentences, omitted options, or switched to a completely different format.

Attempt #3 – Dedicated AskUserQuestion Tool

A dedicated tool was created that Claude can call at any time, especially in “Plan Mode”. When invoked, the tool displays a modal with the question and blocks the agent’s loop until the user answers. This enforces structured output, allows flexible composition via the Agent SDK, and is readily adopted by Claude.
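A minimal sketch of such a blocking question tool, assuming a synchronous agent loop. The real tool renders a modal; this hypothetical `ask_user_question` helper simply blocks on console input (the reader is injectable so the loop can be driven in tests):

```python
def ask_user_question(question: str, options: list[str], read=input) -> str:
    """Block the agent loop until the user picks one of the options.

    Sketch only: a real implementation would render a modal UI and
    return a structured answer to the calling agent.
    """
    prompt = question + "\n" + "\n".join(
        f"  {i + 1}. {opt}" for i, opt in enumerate(options)
    )
    while True:
        reply = read(prompt + "\nChoose a number: ").strip()
        # Re-ask until the reply is a valid option index.
        if reply.isdigit() and 1 <= int(reply) <= len(options):
            return options[int(reply) - 1]
```

Because the tool returns a single option string, the model always receives a structured, unambiguous answer rather than free‑form text it must parse.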

Evolving Task Management: Tasks vs. Todos

Early releases used a TodoWrite tool to maintain a simple todo list that kept the model focused. The list was refreshed every few rounds with system reminders. As model capabilities advanced (e.g., Opus 4.5), the todo reminders became restrictive. Newer models preferred a Task tool that supports dependencies, shared progress across sub‑agents, and mutable state. The shift illustrates that tools useful for one generation can become constraints for a more capable model.
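The difference between a flat todo list and dependency‑aware tasks can be sketched as follows; the `Task` fields and the `runnable` helper are assumptions for illustration, not the SDK's actual schema:

```python
from dataclasses import dataclass, field


@dataclass
class Task:
    """A task with explicit dependencies, unlike a flat todo entry."""
    id: str
    description: str
    depends_on: list[str] = field(default_factory=list)
    done: bool = False


def runnable(tasks: dict[str, Task]) -> list[Task]:
    """Return tasks whose dependencies are all complete and that are not done."""
    return [
        t for t in tasks.values()
        if not t.done and all(tasks[d].done for d in t.depends_on)
    ]
```

Dependencies let sub‑agents share one mutable plan: a task becomes eligible only when its prerequisites finish, which a flat todo list cannot express.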

Search Interface and Progressive Disclosure

A critical capability is a search tool. Initially, a RAG vector database was used, which required indexing and configuration. To let Claude search its own codebase, a Grep tool was introduced, enabling file‑system searches and context building.

Observations showed that as Claude becomes smarter, providing the right tools allows it to autonomously construct context. The concept of Progressive Disclosure was formalized: the agent can explore and gradually reveal relevant context instead of loading everything upfront.

Over a year, Claude evolved from barely constructing context to performing nested, multi‑layer file searches, making progressive disclosure a standard technique for adding functionality without inflating the toolset.
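As a sketch of progressive disclosure, the hypothetical helpers below grep the file system for a symbol and pull only the matching files into context, instead of indexing or loading everything upfront:

```python
import re
from pathlib import Path


def grep(pattern: str, root: str = ".") -> list[Path]:
    """Return Python files under root whose text matches the regex."""
    rx = re.compile(pattern)
    hits = []
    for path in Path(root).rglob("*.py"):
        try:
            if rx.search(path.read_text(encoding="utf-8")):
                hits.append(path)
        except OSError:
            continue  # skip unreadable files
    return hits


def build_context(symbol: str, root: str = ".", limit: int = 3) -> str:
    """Progressively disclose context: load only the first few matching files."""
    files = grep(rf"\b{re.escape(symbol)}\b", root)[:limit]
    return "\n\n".join(
        f"# {p}\n{p.read_text(encoding='utf-8')}" for p in files
    )
```

The agent can repeat this search‑then‑read cycle, descending into nested matches, so context grows only along paths the current task actually needs.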

Applying Progressive Disclosure in Claude Code

Claude Code ships with roughly 20 tools. Adding more tools raises the model’s cognitive load, so documentation links are exposed for on‑demand loading. When a user asks about Claude Code itself, a sub‑agent called Claude Code Guide is invoked, providing precise search instructions and answers without expanding the main toolset. This improves documentation‑related query handling while keeping the core action space lean.
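One way to picture this routing is a keyword gate in front of the main agent; the trigger list and agent names below are assumptions for illustration, not Claude Code's actual implementation:

```python
# Illustrative routing: documentation questions go to a doc-search
# sub-agent instead of adding more tools to the main agent.
# The keyword list is a made-up heuristic, not a real trigger set.
DOC_KEYWORDS = ("claude code", "slash command", "settings", "hooks")


def route(query: str) -> str:
    """Pick which agent should handle the query."""
    q = query.lower()
    if any(k in q for k in DOC_KEYWORDS):
        return "claude-code-guide"  # sub-agent with doc-search instructions
    return "main-agent"
```

The main agent's action space stays lean; the sub‑agent carries the extra search instructions only for queries that need them.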

Artistry Over Rigid Rules

Designing tools for an AI model is not a static checklist. It requires continuous experimentation, careful observation of model output, and adaptation to evolving capabilities. Developers should adopt a mindset of “observing like an agent” and iterate constantly.

Community Contribution Example

A reader created a skill named agent-design based on the article’s guidelines. The skill resides at ~/.claude/skills/agent-design/ with a SKILL.md file that captures the core philosophy, experiences, design guidelines, framework, and anti‑patterns.
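A plausible skeleton for that SKILL.md might look like the following; the frontmatter fields follow the common skill format, while the section names are assumptions based on the description above:

```markdown
---
name: agent-design
description: Guidelines for designing AI agent tools, action spaces, and
  progressive disclosure. Use when designing or reviewing an agent harness.
---

# Agent Design

## Core philosophy
Observe the model's behavior before adding tools; iterate experimentally.

## Design guidelines
- Prefer a lean action space; add tools only when observation demands it.
- Use progressive disclosure instead of loading all context upfront.

## Anti-patterns
- Overloading one tool with unrelated parameters.
- Forcing structured output through fragile Markdown conventions.
```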

To invoke the skill during development:

`/agent-design`

Further Reading

https://mp.weixin.qq.com/s?__biz=MzAwMDU1MTE1OQ==&mid=2653564891&idx=1&sn=4ac1795ced4474be96f1a62f585acfab&scene=21#wechat_redirect

https://mp.weixin.qq.com/s?__biz=MzAwMDU1MTE1OQ==&mid=2653564883&idx=1&sn=bea05c22b9989316fcd463afdf7953e9&scene=21#wechat_redirect

https://mp.weixin.qq.com/s?__biz=MzAwMDU1MTE1OQ==&mid=2653564878&idx=1&sn=6b7aeb0be4476f372bea09796822c7ad&scene=21#wechat_redirect

https://mp.weixin.qq.com/s?__biz=MzAwMDU1MTE1OQ==&mid=2653564871&idx=1&sn=fe5a5247dfb5dab958a033c2d1090476&scene=21#wechat_redirect

Written by High Availability Architecture (official account).
