Designing Products for Agents: Beyond APIs and MCPs

The article argues that building products for AI agents requires more than swapping UI pages for APIs or adding MCPs; it demands reorganizing product capabilities into actions that agents can understand, invoke, be constrained by, and audit, while addressing semantics, governance, and reliability.


TL;DR

Designing products for agents isn’t just turning pages into APIs or quickly adding an MCP; the practical approach is to break product capabilities into actions that agents can understand, invoke, be constrained by, and audit.

Salesforce Headless 360 illustrates this shift: the platform exposes over 100 tools and skills as APIs, MCPs, or CLI commands, but the real change is making the underlying capabilities callable by agents.

Seeing like an Agent

Agents read tool names, parameters, context snippets, return values, and error messages, unlike humans, who rely on visual cues. Therefore, tools must have single-purpose responsibilities, progressive disclosure, and repeatable output formats. The author cites Claude Code’s "AskUserQuestion" tool as a concrete example of a clear, single-purpose action.

First Step – Give Agents a Machine‑Readable Specification

Traditional UI design focuses on human concerns (clear pages, buttons, forms). For agents, the contract expands to include:

Tool name and description that allow correct selection.

Explicit parameter schema.

Clear boundaries on what fields agents may infer versus what must be provided.

Recoverable failure handling.

Return values suitable for downstream reasoning.

Embedded permission, audit, and billing considerations.

Traceability of why an agent invoked a tool.

Simply wrapping a UI action in an MCP without redesign leads to agents guessing and failing.
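The expanded contract above can be collected into one machine-readable spec. This is a sketch under assumed field names (`inference_policy`, `governance`, and so on are illustrative, not an MCP or OpenAPI standard):

```python
# Illustrative tool contract covering every point in the list above;
# the field names are assumptions for this sketch, not a standard.
CANCEL_ORDER_SPEC = {
    "name": "cancel_order",
    "description": "Cancel a single pending order. Fails if the order has shipped.",
    "parameters": {
        "type": "object",
        "properties": {
            "order_id": {"type": "string"},
            "reason": {"type": "string"},
        },
        "required": ["order_id"],
    },
    "inference_policy": {            # what may be inferred vs. must be provided
        "order_id": "must_be_provided",
        "reason": "may_infer_from_conversation",
    },
    "errors": {                      # recoverable failure contract
        "ORDER_SHIPPED": "Cannot cancel; offer the return flow instead.",
        "ORDER_NOT_FOUND": "Re-check the order_id with the user.",
    },
    "returns": {"type": "object", "properties": {"status": {"type": "string"}}},
    "governance": {                  # permission, audit, billing travel with the tool
        "required_scope": "orders:write",
        "audited": True,
        "billable": False,
    },
    "trace": {"log_invocation_rationale": True},
}

def validate_call(spec: dict, args: dict) -> list[str]:
    """Minimal precondition check a runtime could run before invoking."""
    required = spec["parameters"]["required"]
    return [f"missing required parameter: {p}" for p in required if p not in args]
```

With a spec like this, "agents guessing and failing" becomes a validation error the agent can recover from before any side effect occurs.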

Second Step – Don’t Stop at Exposing Tools

Agents need more than a tool entry point; they need the context to execute it. The author compares Notion’s MCP (which proactively provides a Markdown spec) with Slack’s MCP (which leaves agents to guess the format), showing that tool availability does not guarantee agent success.

CLI conventions such as --help, --json, --dry-run, and --yes illustrate stable, machine‑readable interfaces that can be repurposed for agents.
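Those conventions are cheap to adopt. A sketch using Python's `argparse` (which provides `--help` automatically); the `orders cancel` command itself is hypothetical:

```python
import argparse
import json

# Sketch of the CLI conventions named above. The command is invented
# for illustration; the flag semantics are the point.
def build_parser() -> argparse.ArgumentParser:
    p = argparse.ArgumentParser(prog="orders", description="Manage orders.")
    p.add_argument("action", choices=["cancel"])
    p.add_argument("order_id")
    p.add_argument("--json", action="store_true", help="machine-readable output")
    p.add_argument("--dry-run", action="store_true", help="show the effect, change nothing")
    p.add_argument("--yes", action="store_true", help="skip interactive confirmation")
    return p

def run(argv: list[str]) -> str:
    args = build_parser().parse_args(argv)
    result = {"action": args.action, "order_id": args.order_id,
              "dry_run": args.dry_run, "confirmed": args.yes}
    # --json gives an agent a stable, parseable contract; the default
    # output stays human-readable.
    if args.json:
        return json.dumps(result)
    return f"{args.action} {args.order_id} (dry-run={args.dry_run})"
```

An agent can probe with `--help`, rehearse with `--dry-run`, parse with `--json`, and only commit with `--yes`: the same loop a cautious human operator would run.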

Third Step – Determinism in Production

Agent Script combines flexible natural‑language instructions with deterministic business rules, offering a versioned, auditable flat file that defines if/else logic, state transitions, and action sequences. This addresses the tension between probabilistic model reasoning and enterprise requirements for explainability, replayability, and verification.
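A minimal sketch of the idea, not Agent Script's actual syntax: the deterministic part lives in a versioned, auditable table, and every replay of the same inputs under the same version yields the same outcome:

```python
# Sketch only: a versioned, auditable rule table in the spirit of the
# deterministic layer described above (not Agent Script's real format).
RULES_V3 = {
    "version": "3",
    "transitions": {
        # (state, event) -> (next_state, action)
        ("new", "identity_verified"): ("verified", "detect_intent"),
        ("verified", "intent=refund"): ("refunding", "call_refund_tool"),
        ("refunding", "tool_failed"): ("escalated", "handoff_to_human"),
    },
}

def step(rules: dict, state: str, event: str) -> tuple[str, str]:
    """Deterministic and replayable: the same (state, event) under the
    same rules version always yields the same (next_state, action)."""
    key = (state, event)
    if key not in rules["transitions"]:
        return ("escalated", "handoff_to_human")  # unknown paths fail closed
    return rules["transitions"][key]
```

The model remains free to interpret user intent in natural language, but which transition fires is a matter of record, so explainability and verification reduce to diffing and replaying a flat file.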

Fourth Step – Distinguish Agent Audiences

Two agent modes are identified:

Customer-facing agents: require static, compliance-heavy workflows (identity check, intent detection, tool call, handoff).

Employee-facing agents: can use dynamic task graphs, allowing exploration and richer tool coverage.

Both should share the same underlying platform layers (data, permissions, tooling, observability) to avoid fragmented governance.

Fifth Step – Integrate Into a Five‑Layer Architecture

The proposed stack:

Surface layer: multiple entry points (Slack, Teams, mobile, etc.).

Invocation layer: APIs, CLI, MCP, hosted servers; stability, discoverability, auth, rate-limiting.

Semantic layer: tool descriptions, schemas, policies, error contracts, examples – the key to reducing agent guesswork.

Business-logic layer: data, processes, historic records, organizational rules.

Governance layer: authentication, authorization, audit, testing, evaluation, observability, rollback, billing.

Neglecting the semantic or governance layers results in agents that can be called but not trusted to perform critical actions.
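The governance layer's job can be sketched as a wrapper that every invocation passes through, regardless of which surface made the call. The scope names and log shape below are assumptions for illustration:

```python
import time

# Sketch of a governance-layer wrapper: authorization, audit, and a
# structured error contract apply to every call, whether it arrived
# via API, CLI, or MCP. All names are illustrative.
AUDIT_LOG: list[dict] = []

def invoke(tool: str, args: dict, scopes: set[str], required_scope: str) -> dict:
    entry = {"tool": tool, "args": args, "ts": time.time()}
    if required_scope not in scopes:
        entry["outcome"] = "denied"
        AUDIT_LOG.append(entry)                          # denials are audited too
        return {"ok": False, "error": "PERMISSION_DENIED"}  # recoverable, structured
    entry["outcome"] = "allowed"
    AUDIT_LOG.append(entry)
    return {"ok": True, "result": f"{tool} executed"}    # placeholder execution
```

An agent that hits `PERMISSION_DENIED` gets a contract it can reason about, and the platform gets a complete call record: callable and trusted at the same time.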

Practical Checklist Before Shipping

Decompose high‑frequency capabilities into atomic actions with clear preconditions, failure handling, and side‑effects.

Expose stable CLI-style flags (--help, --json, --dry-run, --yes) for early agent experimentation.

Shift permission and audit checks from the UI to the platform so agents inherit the same security model.

Provide agents with a concise "rationale" field and a feedback tool to capture structured failure paths.

Implement offline testing, custom scoring, A/B testing, call‑chain observability, failure replay, and version rollback for production readiness.
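The "rationale" and feedback items from the checklist can be sketched together; the record shapes below are assumptions, not a prescribed schema:

```python
# Sketch of the checklist's rationale + feedback ideas: every call
# carries a one-line "why", and failures become structured records
# instead of free text. Field names are illustrative.
FEEDBACK: list[dict] = []

def call_with_rationale(tool: str, args: dict, rationale: str) -> dict:
    """Attach traceability to the invocation itself."""
    record = {"tool": tool, "args": args, "rationale": rationale}
    return record  # a real runtime would execute the tool and persist this

def report_failure(tool: str, error_code: str, attempted_args: dict) -> None:
    """Feedback tool: failure paths become queryable data for evaluation,
    replay, and scoring, rather than vanishing into chat transcripts."""
    FEEDBACK.append({"tool": tool, "error": error_code, "args": attempted_args})
```

Structured rationale and failure records are also what make the last checklist item practical: offline testing, failure replay, and call-chain observability all need this data to exist.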

Conclusion

Agents will not replace SaaS interfaces overnight, but they will increasingly automate low‑frequency, repetitive, cross‑system tasks. Products that expose their core capabilities in a machine‑readable, auditable, and governed manner will thrive in the Agent era, while those that merely add a new entry point without proper semantics and governance will falter.

Republication Notice

This article has been distilled and summarized from source material, then republished for learning and reference. If you believe it infringes your rights, please contact admin@besthub.dev and we will review it promptly.

Tags: CLI, Product Design, AI agents, API, agent architecture, headless
Written by

Architect

Professional architect sharing high‑quality architecture insights. Topics include high‑availability, high‑performance, high‑stability architectures, big data, machine learning, Java, system and distributed architecture, AI, and practical large‑scale architecture case studies. Open to ideas‑driven architects who enjoy sharing and learning.
