In December 2025 I wrote about the Agent2Agent protocol, which Google had donated to the Linux Foundation mid-year and which looked set to become MCP’s natural counterpart for communication between different agents. Six weeks later, with version 1 formally published, several reference implementations operational, and the first real deployments in enterprise products, it’s time to update that read. What does v1 include, what does it mean in practice for anyone building with agents, and is it already worth betting on?
What version 1 formalizes
Version 1 of the A2A protocol, formally published by the Linux Foundation in January 2026, consolidates several design decisions that had been iterating fast throughout 2025. The technical core is an HTTP-based model with extensions for event streaming, a strict JSON schema for agent cards describing capabilities, and a standardized set of verbs to initiate tasks, receive partial progress, deliver results, and cancel ongoing conversations. Nothing revolutionary in the transport base; what’s revolutionary is the agreement.
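To make the verb set concrete, here is a sketch of what those requests look like as JSON-RPC envelopes over HTTP. The method names (`tasks/send`, `tasks/get`, `tasks/cancel`) and field names are assumptions for illustration, not a copy of the published schema:

```python
# Illustrative sketch of the v1 verb set as JSON-RPC payloads.
# Method and field names are assumptions, not the normative schema.
import json
import uuid

def rpc(method: str, params: dict) -> dict:
    """Build a JSON-RPC 2.0 request envelope."""
    return {"jsonrpc": "2.0", "id": str(uuid.uuid4()), "method": method, "params": params}

def send_task(text: str) -> dict:
    """Initiate a task with a single text part."""
    return rpc("tasks/send", {"message": {"role": "user", "parts": [{"type": "text", "text": text}]}})

def get_task(task_id: str) -> dict:
    """Poll a task for partial progress or the final result."""
    return rpc("tasks/get", {"id": task_id})

def cancel_task(task_id: str) -> dict:
    """Cancel an ongoing conversation."""
    return rpc("tasks/cancel", {"id": task_id})

req = send_task("Summarize the Q4 report")
print(json.dumps(req, indent=2))
```

The point is less the envelope than the agreement: every implementation speaks the same small verb set instead of a bespoke API per vendor.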
The most relevant piece of v1 is the stabilization of the agent-card format. A card describes what the agent can do, what kind of input it expects, what its output format is, and what metadata a client needs to decide whether this agent is a candidate for a task. This turns agent discovery into a standard problem: a client agent can call a registry or a known endpoint, read cards, and decide who to send what. Without a standard card there’s no real interoperability.
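A hypothetical card, plus the kind of minimal validation a registry client would run, might look like this. The field names (`name`, `url`, `capabilities`, `skills`) follow the description above but are an assumption, not the normative v1 schema:

```python
# Hypothetical agent card and a minimal validator.
# Field names are illustrative, not the normative v1 schema.
REQUIRED_FIELDS = {"name", "description", "url", "capabilities", "skills"}

card = {
    "name": "invoice-agent",
    "description": "Extracts and validates invoice data",
    "url": "https://agents.example.com/invoice",
    "capabilities": {"streaming": True},
    "skills": [
        {"id": "extract", "inputModes": ["application/pdf"], "outputModes": ["application/json"]}
    ],
}

def validate_card(c: dict) -> list[str]:
    """Return the missing required fields (empty list means valid)."""
    return sorted(REQUIRED_FIELDS - c.keys())

print(validate_card(card))            # valid card -> []
print(validate_card({"name": "x"}))   # incomplete card -> missing fields
```

A client deciding who to send what reduces to reading cards like this and matching skills and input modes against the task at hand.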
The second stabilized piece is the stateful-task model. When an agent asks another to do something, the task has an identifier, state, accumulated event list, and optional final result. This model lets a conversation between agents be durable, resumable after disconnections, observable for audit, and cancelable if circumstances change. Without this, every implementation invented its own state machine.
The third piece is the authentication and authorization agreement. V1 clearly defines how an agent proves identity to another, how permissions to access specific capabilities are negotiated, and how each call is logged for traceability. It leans on OAuth 2.1 and asymmetric signatures compatible with standards like JWT, which lets you reuse existing identity infrastructure instead of inventing a parallel mechanism.
Implementations available in January 2026
Six months after the Linux Foundation donation, reference implementations have matured enough for real use. The official Python SDK, initially maintained by Google and now by the LF working group, covers full v1 and is in production in several Alphabet internal systems. The TypeScript SDK, younger, has functional parity but fewer production hours. Microsoft contributed a C# implementation integrated with its Copilot and Azure AI SDKs, which works well in that environment but isn’t the standard path for anyone not living in the Microsoft stack.
The open community has contributed Go, Rust, and Java implementations with varying levels of maturity. Go is the most advanced and is used within Anthropic’s ecosystem for interoperability between Claude agents and agents from other providers. Rust and Java remain in validation phase and I wouldn’t yet recommend them for production, but they cover the basic cases and grow fast.
As for real deployments, the most visible examples are Google Agentspace integrating third-party agents via A2A, Salesforce Agentforce with the same model to extend its platform with partner agents, and several automation tools like n8n and Zapier publishing A2A connectors so agents built inside them can converse with external agents without custom integration. It’s not ubiquitous yet, but it has left demo territory.
How it fits with MCP
MCP and A2A are complementary and that clarity is one of the best pieces of news from v1. MCP solves the agent-tool relationship: a client agent talks to MCP servers exposing static capabilities (a database, an API, a file system). Communication is asymmetric and the server has no judgment of its own. A2A solves the agent-agent relationship: two systems, each with its own intelligence, collaborate on a task with negotiation and shared state.
In practice, a typical 2026 system combines both. A main agent, with access to several tools via MCP, discovers that to close a task it needs to consult another agent with domain knowledge. It starts an A2A conversation with that agent; that one, in turn, may be using its own MCP tools internally. The two protocols don’t overlap: each covers its level.
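The two-level split can be sketched in a few lines. Everything here is architectural illustration (the class names, the wiring, the stand-in callables), not the API of either SDK:

```python
# Architectural sketch of the MCP/A2A split: tools on one level,
# peer agents on the other. All names and wiring are invented for
# illustration; real SDK APIs differ.
class PeerAgent:
    """A remote agent reachable over A2A: it has judgment of its own."""
    def __init__(self, name, answer_fn):
        self.name = name
        self.answer = answer_fn  # stand-in for a full A2A task round-trip

class MainAgent:
    def __init__(self, tools: dict, peers: dict):
        self.tools = tools   # MCP side: static capabilities, no judgment
        self.peers = peers   # A2A side: peers with negotiated, stateful tasks

    def handle(self, customer_id: str) -> str:
        record = self.tools["crm_lookup"](customer_id)   # MCP tool call
        verdict = self.peers["risk"].answer(record)      # A2A delegation
        return f"{record['name']}: {verdict}"

agent = MainAgent(
    tools={"crm_lookup": lambda cid: {"name": "ACME", "balance": -1200}},
    peers={"risk": PeerAgent("risk", lambda r: "review" if r["balance"] < 0 else "ok")},
)
print(agent.handle("c-42"))
```

Note that the risk peer could itself be running its own MCP tools behind that callable; neither side needs to know, which is exactly why the two protocols don’t overlap.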
The key operational distinction is that MCP is deployed thinking of the client you want to serve (you offer capabilities, the client decides what to use) and A2A is deployed thinking of collaboration with peers (both sides are active, both have judgment). Confusing the two cases when designing an integration usually produces ugly solutions.
Where v1 doesn’t yet reach
V1 leaves out several areas that v2 will have to address. The most visible is identity management between agents when they operate with pseudonyms or temporary identities. The current model assumes each agent has stable identity registered in some trust system; for cases where agents appear and disappear dynamically (for example, temporary agents created for a specific task), v1 works but requires some additional engineering.
The second area with partial coverage is communication among more than two agents in the same task. V1 defines point-to-point conversations well; real-time conversations among three or more agents remain the responsibility of the orchestrator that coordinates them, with the protocol acting only as transport for the pairwise channels. For typical swarm or multi-agent consensus cases, the protocol is insufficient and a specific coordination layer is still needed.
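Concretely, the coordination gap looks like this: the protocol carries each pairwise channel, and the consensus logic lives entirely in the orchestrator. The agents below are plain callables standing in for A2A peers, a deliberate simplification:

```python
# Sketch of the v1 coordination gap: pairwise channels over the
# protocol, consensus resolved locally by the orchestrator.
# Agents are plain callables standing in for A2A peers.
from collections import Counter

def orchestrate(question: str, agents: dict) -> str:
    """Fan the question out over pairwise channels, then vote locally."""
    votes = {name: ask(question) for name, ask in agents.items()}  # one A2A task each
    winner, _ = Counter(votes.values()).most_common(1)[0]
    return winner

agents = {
    "a": lambda q: "approve",
    "b": lambda q: "approve",
    "c": lambda q: "reject",
}
print(orchestrate("ship the release?", agents))
```

Everything inside `orchestrate` after the fan-out is exactly the coordination layer v1 leaves to you: the protocol never sees the vote.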
The third least-developed area is handling of long or periodic conversations. An A2A conversation in v1 is naturally short: it starts, completes, and closes. If you want a sustained relationship between two agents over months, with shared memory and accumulated context, the protocol doesn’t give you specific tools. You have to build memory at another level, usually in the application itself.
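What “building memory at another level” means in practice: the application persists something per peer and replays it as context when a new conversation starts. The store below is a hypothetical sketch, not anything the protocol or SDKs provide:

```python
# Sketch of application-level memory across short A2A conversations:
# the protocol closes each task, so the application keeps per-peer
# summaries and injects them into the next conversation. Hypothetical
# structure, not part of the protocol or any SDK.
class PeerMemory:
    def __init__(self):
        self._summaries: dict[str, list[str]] = {}

    def remember(self, peer: str, summary: str) -> None:
        """Persist a summary of a completed conversation with a peer."""
        self._summaries.setdefault(peer, []).append(summary)

    def context_for(self, peer: str) -> list[str]:
        """Prior-conversation summaries to inject into a new task."""
        return list(self._summaries.get(peer, []))

mem = PeerMemory()
mem.remember("pricing-agent", "agreed on EUR as default currency")
mem.remember("pricing-agent", "volume discount applies over 100 units")

# Months later, a new conversation starts: the accumulated context
# comes from the application, not from the protocol.
print(mem.context_for("pricing-agent"))
```

The design choice is simply that the memory boundary sits in your application, keyed however you identify peers, which is why a v2 answer here would mostly standardize naming rather than invent new machinery.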
Adopt now or wait
My read for anyone designing systems with agents in January 2026 is that v1 is ready for production in bounded cases, and waiting doesn’t bring much. Reference implementations cover the main languages, interoperability between SDKs has been tested, the protocol is formally published, and Linux Foundation governance guarantees orderly evolution. Foreseeable v2 changes will be extensions, not breaks.
The case where I recommend adopting without hesitation is when you’re already building a system with multiple agents and currently have custom integrations between them. Replacing those integrations with A2A reduces debt, standardizes observability, and opens the door to external agents collaborating with yours. It’s a net-positive refactor.
The case where waiting still makes sense is when you have a single agent talking to tools (use MCP, not A2A) or when you’re in a high-regulation environment where every new protocol requires formal approval. In that second case it makes sense to wait for v1.1 or v2 while adoption matures; the cost of going first doesn’t always pay off.
My reading
A2A v1 is the first open layer for agent communication with real critical mass behind it and healthy governance in the Linux Foundation. It’s not ubiquitous yet but it’s clearly on the path. Combined with MCP, it forms the basic interoperability stack the industry needed: tools on one side, agents on the other, with open standards on both planes. This is what was needed for the agent ecosystem to leave silo mode and start to look like the open Internet rather than incompatible services competing for vendor lock-in.
The useful comparison is with HTTP in the nineties. Nobody building sophisticated networked systems waited for a standard protocol; they built on what was there. But when HTTP consolidated, everything else organized around it. A2A v1 is at that moment. Adopting now means betting well on the sector’s predictable future, and the cost of doing so is low if you’re already building with agents. There are few strategic decisions this easy to make in this field at the start of 2026.