Agent-to-agent protocols: the next open layer

Official logo of the Agent2Agent project, hosted in the open repository of the a2aproject organization on GitHub. Donated by Google to the Linux Foundation in 2025, A2A is an open protocol for communication between intelligent agents from different vendors, designed so that conversational assistants, enterprise systems with autonomous reasoning, and multi-agent workflows can discover each other, exchange tasks and coordinate without depending on a specific closed platform.

In 2024 and 2025, Anthropic's Model Context Protocol (MCP) consolidated as the de facto standard for connecting an agent to tools and data sources. MCP elegantly solves the problem of every model talking to every service through one-off integrations: a server exposes its capabilities once and any compatible client consumes them. But there is a parallel problem MCP does not address by design: how do two distinct agents communicate with each other? To fill that gap, Google presented the Agent2Agent protocol (A2A) in April 2025 and donated it to the Linux Foundation as an open project in June of the same year. By late 2025 it is worth calmly examining what it tries to solve, how it relates to MCP, and what remains before agent interoperability becomes an operational reality.

What problem A2A tries to solve

MCP solves a concrete layer: an agent (client) queries services (MCP servers) that expose tools, resources and prompts. The communication is asymmetric: the server responds to client requests but doesn’t initiate autonomous actions. That works very well when the agent is the central intelligence and tools are passive. It doesn’t work when you want two agents, each with its own reasoning capability, to collaborate on a complex task, splitting work and negotiating results.

Consider a practical example. You have a company’s sales agent that understands the catalog, promotions and inventory. You have a finance agent that knows discount policies and credit limits for each client. When a client asks for a complex quote with a special discount, you want the sales agent to talk to the finance agent, ask what margin is available for this client, and receive an answer with justification. That isn’t a tool call returning JSON; it’s a conversation between two systems with their own judgment that need to understand each other.

A2A tries to standardize exactly that conversation. The protocol defines how one agent discovers others (via “agent cards” published at known endpoints), how it initiates a task (with a natural-language or structured description), how both sides track task state (events, partial results, completion), and how each agent exposes its capabilities without revealing how it implements them internally. Opacity is a core design principle: an A2A agent needn’t expose its internal architecture, only its observable capability.
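The discovery step above can be sketched in a few lines. The shapes below are simplified and the field names (`name`, `url`, `skills`, `id`) are indicative of what an agent card carries, not an authoritative rendering of the A2A schema; the finance-agent card and its URL are hypothetical.

```python
import json
from dataclasses import dataclass


@dataclass
class AgentCard:
    """Minimal sketch of a published agent card (fields are illustrative)."""
    name: str
    url: str
    skills: list

    @classmethod
    def from_json(cls, raw: str) -> "AgentCard":
        d = json.loads(raw)
        return cls(name=d["name"], url=d["url"], skills=d.get("skills", []))


def can_help(card: AgentCard, needed_skill: str) -> bool:
    # Discovery: decide from the card alone whether collaboration is relevant,
    # without knowing anything about the remote agent's internals (opacity).
    return any(s.get("id") == needed_skill for s in card.skills)


# A card as it might be served at a well-known endpoint of a finance agent.
RAW_CARD = json.dumps({
    "name": "finance-agent",
    "url": "https://finance.example.com/a2a",
    "skills": [{"id": "discount-approval",
                "description": "Evaluates discount requests against policy"}],
})

card = AgentCard.from_json(RAW_CARD)
print(can_help(card, "discount-approval"))  # True
```

The point of the sketch is the opacity principle: the sales agent decides whether to open a task using only the advertised capability, never the remote implementation.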

The relationship with MCP

A common question is whether A2A competes with MCP. The short answer is no: they operate at different layers and are complementary. MCP solves the connection between an agent and its tools (model-to-tool). A2A solves the connection between two agents (agent-to-agent). A well-designed system can use both: each agent connects to its tools via MCP, and agents communicate with each other via A2A.

That separation isn’t trivial. The two problems look similar at first glance but have different semantics. An MCP tool is a function called with parameters that returns data; the calling agent owns the result. An A2A agent is a system with its own reasoning chain and decision-making; responsibility is distributed. The A2A protocol includes notions like negotiation, trust and authorship that MCP doesn’t need.

That separation also reflects an empirical market observation. In 2024 we saw that tools across agents standardized reasonably well with MCP, but every time two products from different companies tried to collaborate as agents, custom integration appeared with REST APIs, proprietary webhooks and incompatible event schemas. A2A tries to cut that proliferation before it crystallizes into dozens of closed protocols.

Donation to the Linux Foundation

The decision to donate A2A to the Linux Foundation in June 2025 is strategically relevant. Google could have kept it as an internal project but chose neutral governance from the start: protocols that catch on as standards are usually the ones perceived as neutral, and a Google-only protocol would face an adoption ceiling at Microsoft, AWS and Anthropic. As a result, the official repository lives under the a2aproject organization on GitHub, with explicit sponsorship from Linux Foundation AI & Data, and the first months have seen contributions from Microsoft, SAP, Salesforce, Cisco and Intuit, a reasonable signal of multi-vendor interest.

What the specification includes

The current A2A specification defines several basic components. Agent cards are JSON files an agent publishes at a known URL (typically /.well-known/agent.json) describing its capabilities, communication endpoints, supported authentication mechanisms and interaction examples. Any compatible agent can read the card and decide whether collaboration is relevant.
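A fuller card might look like the following. This is an illustrative example, not a copy of the normative schema: the exact field names and required set are defined by the A2A specification, and the `sales-agent` endpoint is invented for the sketch.

```python
import json

# Illustrative agent card, roughly matching the content the text describes:
# capabilities, endpoint, supported authentication, and interaction examples.
CARD = {
    "name": "sales-agent",
    "description": "Answers catalog, pricing and inventory questions.",
    "url": "https://sales.example.com/a2a",
    "capabilities": {"streaming": True, "pushNotifications": False},
    "authentication": {"schemes": ["oauth2"]},
    "skills": [
        {"id": "quote", "name": "Quoting",
         "examples": ["Quote 100 units of SKU-42 with volume discount"]}
    ],
}

# Assumed minimal required set for this sketch; the spec is authoritative.
REQUIRED = ("name", "url", "skills")


def validate_card(card: dict) -> list:
    """Return the required fields missing from a card (empty list if valid)."""
    return [k for k in REQUIRED if k not in card]


print(validate_card(CARD))  # []
print(json.dumps(CARD["skills"][0]["id"]))
```

Validating cards at discovery time is cheap and catches the most common integration failure early: an agent that publishes a card but omits the endpoint or skill list a client needs to decide anything.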

Tasks are the unit of work one agent initiates in another. A task has a description, a state (pending, in-progress, paused awaiting information, completed, canceled or failed), a bidirectional message queue, and a specification of artifacts to be produced. State is observable via server-sent streaming events so the client agent can display real-time progress without active polling.
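The task states listed above imply a lifecycle that client agents must track. The transition table below is an assumption for illustration: the spec defines the observable states, but this exact graph of legal transitions is my reading, not normative text.

```python
from enum import Enum


class TaskState(Enum):
    PENDING = "pending"
    IN_PROGRESS = "in-progress"
    INPUT_REQUIRED = "input-required"   # paused awaiting information
    COMPLETED = "completed"
    CANCELED = "canceled"
    FAILED = "failed"


# Assumed legal transitions; terminal states have no outgoing edges.
TRANSITIONS = {
    TaskState.PENDING: {TaskState.IN_PROGRESS, TaskState.CANCELED},
    TaskState.IN_PROGRESS: {TaskState.INPUT_REQUIRED, TaskState.COMPLETED,
                            TaskState.CANCELED, TaskState.FAILED},
    TaskState.INPUT_REQUIRED: {TaskState.IN_PROGRESS, TaskState.CANCELED},
    TaskState.COMPLETED: set(),
    TaskState.CANCELED: set(),
    TaskState.FAILED: set(),
}


def advance(current: TaskState, nxt: TaskState) -> TaskState:
    """Apply a state change reported by a streamed event, rejecting illegal ones."""
    if nxt not in TRANSITIONS[current]:
        raise ValueError(f"illegal transition {current.value} -> {nxt.value}")
    return nxt
```

A client consuming the server-sent event stream would fold each event through `advance`, so a buggy or malicious remote agent cannot, for example, resurrect a completed task.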

Artifacts are the structured output: text, JSON, binary files or URLs to external resources. The protocol is format-agnostic as long as content type is declared, which lets agents producing very different things use the same API.
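Format agnosticism plus a declared content type means the consumer dispatches on the media type rather than guessing. A minimal sketch, with an assumed artifact shape (the real wire format is defined by the spec):

```python
import base64
import json
from dataclasses import dataclass


@dataclass
class Artifact:
    """Format-agnostic payload; only the declared media type says how to read it."""
    media_type: str
    data: str  # text, serialized JSON, base64-encoded bytes, or a URL


def decode(artifact: Artifact):
    # Dispatch purely on the declared type, as the protocol intends.
    if artifact.media_type == "application/json":
        return json.loads(artifact.data)
    if artifact.media_type.startswith("text/"):
        return artifact.data
    if artifact.media_type == "application/octet-stream":
        return base64.b64decode(artifact.data)
    return artifact.data  # e.g. a URL pointing at an external resource


quote = Artifact("application/json", '{"total": 950.0, "discount": 0.05}')
print(decode(quote)["total"])  # 950.0
```

This is why very different producers can share one API: the finance agent returns JSON, a reporting agent returns a PDF as bytes, and the client handles both through the same artifact envelope.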

Where the real limits lie

The specification covers the technical channel well but leaves important questions open. The first is security and trust. How do I know the agent I’m talking to is who it claims to be? How do I authenticate the end user across a chain of two or three agents? A2A defines anchor points for OAuth, OIDC or mTLS but leaves the choice to the integrator, meaning two implementations can be incompatible in practice even if both follow A2A.

The second is capability semantics. An A2A card says "I can do X" but the description of X is free text. Two agents can describe the same capability in different words, or different capabilities in similar words. That is a classic semantic-interoperability problem A2A deliberately postpones, offering natural language and structured descriptions as guidance, which in practice requires discipline from developers.

The third is execution guarantees. If I ask another agent to execute a critical task and it fails, who is responsible? A2A defines the communication protocol but not a responsibility model. That’s appropriate for a technical standard but leaves important work for legal contracts and service-level agreements when dealing with corporate agents acting across companies.

Ecosystem state by late 2025

By late 2025, A2A is a young protocol with real traction. There are official SDKs in Python, TypeScript, Java, Go and C#. Frameworks such as LangChain, CrewAI and AutoGen ship native A2A support. Several enterprise platforms (Salesforce Einstein, SAP Joule, ServiceNow) have announced compatibility in upcoming releases. Anthropic's support is lukewarm: Claude does not integrate it natively yet, probably because MCP already covers much of the space Anthropic cares about and A2A competes indirectly with some extensions the company is exploring.

Real adoption, measured as deployed agents communicating via A2A in production, is still very low. That isn’t unusual for a twelve-month-old protocol. The natural comparison is with MCP, which in its first twelve months also looked more like promise than product and now drives serious integrations. A2A can follow a similar path if it resolves the limits mentioned and if large providers maintain their commitment.

My reading

My assessment after tracking the specification since its initial publication and tinkering with its first SDKs is that A2A fills a real, necessary gap. The question isn’t whether a standard agent-to-agent protocol will exist but which one, and A2A starts from a better position than alternative candidates thanks to the combination of decent technical design, neutral governance and multi-vendor backing.

That said, 2026 will be the decisive year. If Microsoft, AWS and Anthropic adopt A2A in their primary products, it consolidates. If any of them proposes their own alternative, the space fragments and we return to custom integrations for years. Recent protocol history (MCP won, Bedrock Agents and Semantic Kernel stayed in niches) suggests that whoever has open governance and a public specification process wins. A2A meets both conditions, making it a reasonable bet for teams that need to build multi-agent systems today. The pragmatic recommendation is to design your agents’ internal interfaces with A2A in mind, even if you don’t use it yet, to reduce future friction when the time comes to integrate with external systems that already speak it. Waiting to see who wins has real opportunity cost.
