From Function Calling to A2A: How AI Agents Evolve and Interact

This article analyzes the progressive evolution of AI tool‑integration mechanisms—Function Calling, MCP, and A2A—explaining their core concepts, engineering considerations, use‑case suitability, limitations, and how they complement each other to enable scalable multi‑agent workflows.


0. Introduction

Large language models (LLMs) connect to external tools through three mechanisms that form a progressive stack: Function Calling, the Model Context Protocol (MCP), and Agent‑to‑Agent (A2A).

1. Evolution Overview

2023 Q2 – OpenAI Function Calling : GPT‑4 can invoke an API within a single turn.

2024 Q4 – Anthropic MCP : Unified protocol decouples models from tools, making integration cost linear.

2025 Q2 – Google A2A : Multiple agents form pipelines to complete long‑chain tasks.

2. Function Calling – Direct Model‑to‑Tool Interaction

Shortest path to connect an LLM with a single function.

Key Steps

Intent Recognition : LLM decides from natural language whether external data is needed.

Function Selection : Based on a JSON Schema supplied in the prompt.

Parameter Assembly : Pure JSON; idempotency must be ensured by business logic.

Result Return : LLM translates JSON response back into natural language.
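The four steps above can be sketched end to end. Everything here is illustrative — `get_weather`, its schema, and the hard‑coded tool call stand in for a real model's output and a real backend API:

```python
import json

# Step 2 input: a tool description in the JSON Schema shape most vendors accept.
GET_WEATHER_SCHEMA = {
    "name": "get_weather",
    "description": "Return current weather for a city.",
    "parameters": {
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
    },
}

def get_weather(city: str) -> dict:
    # Stand-in for the real service call; idempotent by construction.
    return {"city": city, "temp_c": 21}

TOOLS = {"get_weather": get_weather}

def dispatch(tool_call: dict) -> str:
    """Steps 2-4: select the function, assemble parameters, return JSON."""
    fn = TOOLS[tool_call["name"]]
    args = json.loads(tool_call["arguments"])  # models emit arguments as a JSON string
    return json.dumps(fn(**args))

# After step 1 (intent recognition), the model would emit something like:
result = dispatch({"name": "get_weather", "arguments": '{"city": "Berlin"}'})
print(result)
```

The application then feeds `result` back to the model, which verbalizes it (step 4).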

Applicable Scenarios

Single model, few tools, rapid MVP.

Service calls are mainly GET/POST, with at most one call per chain (no multi‑step orchestration).

Limitations

N×M Adaptation : every model–tool pair needs its own schema, so switching models or adding tools means rewriting them.

No Native Chaining : Multi‑step calls need explicit orchestration.

Interface Fragmentation : Different vendors use different syntax.

3. MCP – Standardized Model‑Tool Interface

MCP abstracts interaction between any model and any tool, turning the “model × tool” matrix into a linear “model + tool” cost.

Core Components

Host : User‑side entry point (e.g., Claude Desktop, IDE plugins). Focus: experience aggregation.

Client : Long‑connection management (official/third‑party SDKs). Focus: connection reuse, flow control.

Server : Tool wrapper (vector retrieval, database bridge). Focus: unified RPC & access control.

Data Source : Real API or file (local or cloud). Focus: security isolation.
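A minimal sketch of the Server role, assuming nothing beyond JSON‑RPC 2.0 itself. The `tools/call` method name mirrors MCP's convention, but `search_docs`, the corpus, and the envelope handling are illustrative — this is not the official SDK:

```python
import json

# Toy Server: wraps one tool behind a JSON-RPC 2.0 envelope.
def search_docs(query: str) -> list[str]:
    corpus = ["MCP spec", "A2A spec", "Function calling guide"]
    return [doc for doc in corpus if query.lower() in doc.lower()]

TOOLS = {"search_docs": search_docs}

def handle_rpc(raw: str) -> str:
    """Unified RPC entry point: every tool is reached the same way."""
    req = json.loads(raw)
    if req.get("method") != "tools/call":
        return json.dumps({"jsonrpc": "2.0", "id": req.get("id"),
                           "error": {"code": -32601, "message": "method not found"}})
    params = req["params"]
    result = TOOLS[params["name"]](**params["arguments"])
    return json.dumps({"jsonrpc": "2.0", "id": req["id"], "result": result})

# A Client holding the long-lived connection would send:
resp = handle_rpc(json.dumps({
    "jsonrpc": "2.0", "id": 1,
    "method": "tools/call",
    "params": {"name": "search_docs", "arguments": {"query": "spec"}},
}))
print(resp)
```

Because the envelope is uniform, any Client that speaks the protocol can reuse this Server unchanged — the "one‑time integration" advantage below.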

Core Advantages

One‑time integration, multi‑client reuse: write the Server once, all Clients can use it.

Local‑first: sensitive data stays on the client side, satisfying compliance.

Native multi‑step: Servers can call other Servers internally, enabling chain calls.

Ecosystem accumulation: many open‑source Servers are available for plug‑and‑play.

4. A2A – Collaborative Multi‑Agent Teams

Key Terminology

A2A Client : Task initiator.

A2A Server : Task scheduler and status broadcaster.

Task State :

SUBMITTED → WORKING → (INPUT_REQUIRED, if the agent needs more input) → COMPLETED / FAILED / CANCELED
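The state machine can be written as an enum plus a transition table. The state names follow the A2A spec's task states; the transition table itself is an illustrative reading, not normative:

```python
from enum import Enum

class TaskState(Enum):
    SUBMITTED = "submitted"
    WORKING = "working"
    INPUT_REQUIRED = "input-required"
    COMPLETED = "completed"
    FAILED = "failed"
    CANCELED = "canceled"

# Allowed next states (terminal states have no outgoing edges).
TRANSITIONS = {
    TaskState.SUBMITTED: {TaskState.WORKING, TaskState.CANCELED},
    TaskState.WORKING: {TaskState.INPUT_REQUIRED, TaskState.COMPLETED,
                        TaskState.FAILED, TaskState.CANCELED},
    TaskState.INPUT_REQUIRED: {TaskState.WORKING, TaskState.CANCELED},
}

def advance(current: TaskState, nxt: TaskState) -> TaskState:
    if nxt not in TRANSITIONS.get(current, set()):
        raise ValueError(f"illegal transition {current} -> {nxt}")
    return nxt

# A task that pauses once for user input, then finishes:
state = TaskState.SUBMITTED
for step in (TaskState.WORKING, TaskState.INPUT_REQUIRED,
             TaskState.WORKING, TaskState.COMPLETED):
    state = advance(state, step)
print(state)
```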

Lifecycle

Submit Task : HTTP POST with task type and payload.

Immediate Receipt : Returns task_id and SUBMITTED status.

Streaming Updates : SSE/gRPC‑stream pushes progress.

Final Output : Callback or polling retrieves result or error stack.
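The four lifecycle stages can be simulated in‑process. `ToyA2AServer`, its method names, and its payload shapes are all hypothetical — this shows the submit/receipt/stream/fetch pattern, not the A2A wire format:

```python
import uuid

class ToyA2AServer:
    """In-process stand-in for an A2A Server (scheduler + status broadcaster)."""

    def __init__(self):
        self.tasks = {}

    def submit(self, task_type: str, payload: dict) -> dict:
        # Stage 1-2: accept the task and return an immediate receipt.
        task_id = str(uuid.uuid4())
        self.tasks[task_id] = {"status": "submitted", "result": None}
        return {"task_id": task_id, "status": "submitted"}

    def run(self, task_id: str):
        # Stage 3: yield progress events, as SSE or a gRPC stream would push them.
        task = self.tasks[task_id]
        for pct in (25, 50, 100):
            task["status"] = "working"
            yield {"task_id": task_id, "progress": pct}
        task["status"] = "completed"
        task["result"] = "done"

    def get(self, task_id: str) -> dict:
        # Stage 4: polling fallback for the final result.
        return self.tasks[task_id]

server = ToyA2AServer()
receipt = server.submit("summarize", {"text": "..."})
events = list(server.run(receipt["task_id"]))
final = server.get(receipt["task_id"])
print(final)
```

In a real deployment, `submit` is an HTTP POST, `run` is the server's own worker loop, and the client consumes the event stream instead of driving it.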

Engineering Value

Horizontal scaling: adding an Agent adds a node without changing the central scheduler.

Long‑chain transparency: state machine + heartbeat simplifies SLA monitoring.

Capability collaboration: search, translation, summarization agents can be chained like a pipeline.
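Once each agent exposes a uniform interface, the pipeline idea reduces to function composition. The three agents below are hypothetical stand‑ins for real A2A nodes:

```python
# Each stand-in takes text and returns text, so stages compose freely.
def search_agent(query: str) -> str:
    return f"top hit for '{query}'"

def translate_agent(text: str) -> str:
    return f"[en] {text}"

def summary_agent(text: str) -> str:
    return f"summary: {text}"

def run_pipeline(query: str, agents) -> str:
    result = query
    for agent in agents:  # each stage would be an independent A2A node
        result = agent(result)
    return result

out = run_pipeline("A2A spec", [search_agent, translate_agent, summary_agent])
print(out)
```

Adding a new capability means appending one more agent to the list — no change to the stages already in place, which is the horizontal‑scaling point above.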

5. Comparative Matrix

Key dimensions across the three mechanisms:

Focus : Function Calling – model ↔ single tool; MCP – model ↔ multiple tools; A2A – agent ↔ agent.

Communication mode : Function Calling – single RPC; MCP – duplex JSON‑RPC 2.0 (stdio or HTTP/SSE transports); A2A – HTTP + SSE / gRPC‑stream.

Extension cost : Function Calling – M×N; MCP – M+N; A2A – K (number of agents).

Chaining : Function Calling – application‑level orchestration; MCP – server can recursively call other servers; A2A – native task DAG.

Typical role : Function Calling – the model's “hands” (model uses a tool directly); MCP – the “connector” (standardized interface); A2A – the “coordinator” (agents divide up the work).
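The extension‑cost row can be made concrete with a small worked example (the counts are illustrative):

```python
# With M models and N tools, bespoke adapters grow multiplicatively,
# while a shared protocol grows additively.
M, N = 4, 10                        # illustrative counts
function_calling_adapters = M * N   # one adapter per (model, tool) pair
mcp_adapters = M + N                # one client per model, one server per tool
print(function_calling_adapters, mcp_adapters)
```

At 4 models and 10 tools that is 40 bespoke adapters versus 14 protocol endpoints, and the gap widens as either side grows.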

Relationship Summary

Function Calling → MCP : first enable tool usage, then standardize the interface.

MCP → A2A : MCP solves *how* to use tools; A2A solves *who* performs the work.

A2A ↔ Function Calling : agents can still invoke local functions via Function Calling.

6. Conclusion

The three protocols are layered rather than competing. Function Calling provides the minimal viable capability, MCP removes interface fragmentation and unlocks ecosystem value, and A2A organizes capabilities into composable, scalable agent teams for complex tasks. Designing systems to be plug‑in and composable ensures resilience amid rapid AI advancements.


Tags: AI agents · MCP · Function Calling · A2A · Multi-agent architecture · LLM tool integration
Written by

Architect

A professional architect sharing high‑quality architecture insights, covering high‑availability, high‑performance, and high‑stability architectures, big data, machine learning, Java, distributed systems, AI, and practical large‑scale architecture case studies.
