How Google’s Agent2Agent (A2A) Protocol Enables Seamless AI Agent Collaboration

Google’s newly released Agent2Agent (A2A) protocol provides a standardized framework for heterogeneous AI agents to discover, communicate, and collaborate. This article covers the project’s llms.txt file, core components, task lifecycle, streaming mechanisms, security model, and its complementary relationship with Anthropic’s MCP protocol.


On April 9, 2025, Google announced the Agent2Agent (A2A) protocol, a major initiative designed to address a core pain point in the AI agent ecosystem: how agents built by different vendors, on different frameworks, or with different technologies can communicate and cooperate effectively.

The official GitHub repository https://github.com/google/A2A has a clear code layout and includes a notable file named llms.txt. This file follows the emerging llms.txt specification, which acts like a machine‑readable, maintainable sitemap for large language models (LLMs), describing service capabilities, interface contracts, and metadata.

Roles of llms.txt in the A2A Project

Ability Declaration & Interface Specification: Provides a detailed description of A2A’s goals, core functions, supported interfaces, data structures (e.g., AgentCard, Task, Message, Artifact), and interaction patterns, serving as an authoritative reference for developers and automation tools.

Examples & Documentation Index: Points to JSON specifications, sample implementations, documentation, and demo applications to help developers get started quickly.

Metadata & Discovery: Supplies meta‑information that other systems can use for automatic discovery and capability matching.
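As a sketch of what such a file might look like, here is a hypothetical llms.txt following the emerging spec’s markdown conventions (an H1 title, a blockquote summary, and sections of links). The section names and link descriptions below are illustrative, not copied from the actual repository file:

```markdown
# Agent2Agent (A2A) Protocol

> An open protocol for discovery, communication, and collaboration
> between heterogeneous AI agents.

## Specification

- [JSON specification](https://github.com/google/A2A): core data
  structures such as AgentCard, Task, Message, and Artifact

## Samples

- [Python samples](https://github.com/google/A2A): reusable
  client/server components and framework integrations
```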

Core Problems Addressed by A2A

Heterogeneous Agent Interoperability: Unifies disparate agent interfaces from frameworks such as LangGraph, CrewAI, Google ADK, and Genkit.

Capability Discovery & Dynamic Adaptation: The AgentCard mechanism standardizes capability descriptions, enabling automatic discovery of what an agent can do, the inputs it requires, and its authentication needs.

Task Management & Multi‑turn Dialogue Standardization: Defines a task lifecycle (submitted, working, input‑required, completed, etc.) and a multi‑turn conversation model.

Multimodal Content & Artifact Exchange: Supports text, files, and structured JSON through TextPart, FilePart, and DataPart.

Streaming & Push Updates: Uses Server‑Sent Events (SSE) and webhook mechanisms for real‑time status and artifact delivery.

Security & Authentication Standardization: Provides a unified way to declare authentication requirements.
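The task lifecycle named above can be sketched as a small state machine. This is an illustrative model only; the exact transition rules below are an assumption based on the states the protocol lists, not the official library’s implementation:

```python
# Illustrative A2A task state machine (not the official library).
# States come from the A2A task lifecycle; the transition sets are
# an assumption for demonstration purposes.
ALLOWED_TRANSITIONS = {
    "submitted": {"working", "canceled"},
    "working": {"input-required", "completed", "failed", "canceled"},
    "input-required": {"working", "canceled"},
    # Terminal states: no outgoing transitions.
    "completed": set(),
    "failed": set(),
    "canceled": set(),
}

def transition(current: str, target: str) -> str:
    """Move a task to a new state, rejecting invalid transitions."""
    if target not in ALLOWED_TRANSITIONS.get(current, set()):
        raise ValueError(f"cannot move from {current!r} to {target!r}")
    return target

state = transition("submitted", "working")   # normal start
state = transition(state, "input-required")  # agent needs user input
state = transition(state, "working")         # client re-sends input
state = transition(state, "completed")       # terminal state
```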

Key Technical Components

AgentCard (Capability Discovery): A standard JSON file, typically served at /.well-known/agent.json, describing an agent’s abilities, endpoints, and authentication details.

Standard Message & Task Structures (JSON‑RPC 2.0): All communication follows JSON‑RPC 2.0, defining Task, Message, Part, and Artifact structures.

Multimodal Content Support: TextPart, FilePart, and DataPart enable flexible payloads.

Streaming (SSE): The tasks/sendSubscribe and tasks/resubscribe endpoints deliver live task status updates.

Push Notification (Webhook): Configured via tasks/pushNotification/set, letting agents proactively send updates to client URLs.

Unified Task Lifecycle: Clearly defined task states and transitions.

Authentication & Security: Declared in both the AgentCard and push‑notification configurations.

Common Library & Sample Implementations: Provides Python and JavaScript/TypeScript libraries and integration examples for ADK, CrewAI, LangGraph, Genkit, LlamaIndex, Marvin, and Semantic Kernel.
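To make the AgentCard idea concrete, here is a hypothetical card parsed with Python’s standard library. The field names mirror the concepts above (capabilities, authentication, streaming), but the exact schema is an assumption; the official JSON specification is the normative source:

```python
import json

# A hypothetical AgentCard, as it might be served at
# /.well-known/agent.json. Field names follow the concepts in the
# A2A docs; consult the official schema for the normative shape.
agent_card_json = """
{
  "name": "currency-agent",
  "description": "Converts amounts between currencies",
  "url": "https://agents.example.com/a2a",
  "capabilities": {"streaming": true, "pushNotifications": true},
  "authentication": {"schemes": ["bearer"]},
  "skills": [
    {"id": "convert", "name": "Currency conversion"}
  ]
}
"""

card = json.loads(agent_card_json)

# A client can use the card to decide how to talk to this agent:
if card["capabilities"]["streaming"]:
    endpoint_method = "tasks/sendSubscribe"  # live SSE updates
else:
    endpoint_method = "tasks/send"           # synchronous call

print(endpoint_method)  # tasks/sendSubscribe
```

This is exactly the kind of capability matching llms.txt and AgentCard are meant to enable: the client never needs vendor-specific knowledge, only the published card.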

Python Sample Implementation

The samples/python/common directory showcases reusable core components:

types.py: Defines all core data structures.

server/: Contains A2AServer (a Starlette‑based server entry point) and TaskManager (an abstract task manager).

client/: Holds A2AClient (a client‑side call wrapper) and A2ACardResolver (AgentCard discovery).

utils/: Utilities for caching, push‑notification authentication, etc.

The common directory abstracts these implementations to reduce duplication and ensure consistency across A2A‑compatible applications.
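The server-side split can be illustrated with a stripped-down task-manager abstraction. This is a sketch of the pattern only; the class and method names below are hypothetical and do not reproduce the actual API in samples/python/common:

```python
from abc import ABC, abstractmethod

class TaskManager(ABC):
    """Sketch of the abstract task-manager role: the A2A server
    delegates each incoming task here (names are illustrative)."""

    @abstractmethod
    def on_send_task(self, task_id: str, message: str) -> dict:
        """Handle a tasks/send call and return the final task state."""

class EchoTaskManager(TaskManager):
    """Toy implementation: completes immediately, echoing the
    input back as a single text artifact."""

    def on_send_task(self, task_id: str, message: str) -> dict:
        return {
            "id": task_id,
            "status": {"state": "completed"},
            "artifacts": [{"parts": [{"type": "text", "text": message}]}],
        }

result = EchoTaskManager().on_send_task("task-1", "hello")
print(result["status"]["state"])  # completed
```

Keeping the transport (server) separate from the task logic (manager) is what lets the same server code front any framework-specific agent.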

Standard A2A Workflow

Agent Discovery: The client fetches the target agent’s AgentCard from /.well-known/agent.json to learn its capabilities.

Task Initiation: The client calls tasks/send (synchronous) or tasks/sendSubscribe (asynchronous streaming) to start a task.

Task Processing & State Transitions:

The state moves from submitted to working.

If user input is needed, the state becomes input‑required and the client re‑calls tasks/send with the additional input.

SSE pushes TaskStatusUpdateEvent and TaskArtifactUpdateEvent to the client.

Final states: completed, failed, or canceled.

Query & Management: The client can query status (tasks/get), cancel (tasks/cancel), or resubscribe (tasks/resubscribe).

Push Notification (Optional): If a webhook is configured, the agent can push updates proactively.

Artifacts: Upon completion, the task returns results in various content types.

This standardized flow ensures predictable and reliable interactions between heterogeneous agents.
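On the wire, task initiation is an ordinary JSON‑RPC 2.0 call. The sketch below builds such a request with the standard library; the params shape is a plausible reading of the concepts above (task id, message with parts), not a verbatim copy of the official JSON specification:

```python
import json
import uuid

def build_send_task_request(task_id: str, text: str) -> str:
    """Build a JSON-RPC 2.0 request for the A2A tasks/send method.
    The params structure is illustrative; check the official JSON
    specification for the normative shape."""
    request = {
        "jsonrpc": "2.0",
        "id": str(uuid.uuid4()),   # JSON-RPC request id
        "method": "tasks/send",
        "params": {
            "id": task_id,         # A2A task id
            "message": {
                "role": "user",
                "parts": [{"type": "text", "text": text}],
            },
        },
    }
    return json.dumps(request)

payload = build_send_task_request("task-42", "Convert 100 USD to EUR")
decoded = json.loads(payload)
print(decoded["method"])  # tasks/send
```

Because every A2A call is a JSON‑RPC envelope, clients can reuse generic JSON‑RPC tooling and only vary the method name (tasks/send, tasks/get, tasks/cancel) and params.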

Relationship with MCP (Model Context Protocol)

Anthropic’s MCP, released in November 2024, standardizes how LLMs connect to external data sources and tools, essentially acting as a “tool specification” for models. A2A, by contrast, focuses on agent‑to‑agent collaboration, providing an application‑layer protocol for dynamic, conversational cooperation.

These protocols complement each other: MCP equips agents with the ability to fetch and manipulate data, while A2A enables those agents to coordinate and converse with one another to solve complex tasks.

Integration Point

Google recommends modeling an A2A agent as an MCP resource via its AgentCard. This dual‑layer approach lets a single agent framework both call external tools through MCP and communicate with other agents through A2A, creating a more powerful and flexible ecosystem.

Overall, A2A addresses interoperability at the agent collaboration layer, while MCP handles model‑to‑tool connectivity. Together they aim to drive a more interconnected, capable AI agent ecosystem.

[Figure: A2A architecture diagram]
[Figure: A2A code structure diagram]
[Figure: A2A workflow diagram]
[Figure: A2A and MCP integration diagram]
Tags: AI agents, MCP, standardization, Google, protocol, interoperability, llms.txt
Written by Architecture and Beyond

Focused on AIGC SaaS technical architecture and tech team management, sharing insights on architecture, development efficiency, team leadership, startup technology choices, large‑scale website design, and high‑performance, highly‑available, scalable solutions.
