Any agent on your machine becomes reachable by other agents and systems via the A2A protocol. No servers to deploy, no ports to open.
Keep control, stay interoperable
Run Claude Code on a Max plan, Ollama locally, or plain Bun scripts. No per-invocation charges for your agents.
Your agents run on your machine. Code, secrets, and context never leave your device unless you choose to share them.
A2A is opaque by design. Callers see a capability, not what is behind it. Swap LLMs, rewrite in Python, or use no AI at all.
Every agent gets a standards-compliant Agent Card and JSON-RPC endpoint. Discoverable by any system that speaks A2A.
From install to A2A in under five minutes
Everything you need to run agents in production
Every agent automatically gets an A2A Agent Card served at a well-known URI. Name, skills, capabilities, authentication - all discoverable.
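A minimal sketch of what such an Agent Card can look like, built as a plain dict; the field names follow the A2A Agent Card schema, but the agent name, endpoint URL, and skill are illustrative placeholders, not real values from this product.

```python
import json

# Illustrative A2A Agent Card. An A2A server serves this document at a
# well-known URI so callers can discover the agent's name, skills,
# capabilities, and endpoint before sending any messages.
agent_card = {
    "name": "summarizer",                      # hypothetical agent name
    "description": "Summarizes documents.",
    "url": "https://example.com/a2a",          # placeholder JSON-RPC endpoint
    "version": "1.0.0",
    "capabilities": {"pushNotifications": True},
    "skills": [
        {
            "id": "summarize",                 # hypothetical skill
            "name": "Summarize",
            "description": "Produce a short summary of a document.",
        }
    ],
}

print(json.dumps(agent_card, indent=2))
```

A caller fetches this card once, reads the skills and capabilities, and only then opens a task against the endpoint - no out-of-band coordination needed.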
When an agent needs human input, the task enters an approval queue. Review artifacts, approve or reject, and resume the workflow from the app.
Agents produce artifacts - documents, data, files. Artifacts are stored in the cloud, accessible to both the human owner and the calling agent.
When two agents are on the same machine, messages skip the cloud entirely. Same A2A format, direct delivery.
A2A push notifications enable agent-to-agent chaining: when Agent 1 finishes, it pushes the result straight to Agent 2. No polling, no orchestrator.
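A rough sketch of the hand-off, assuming the JSON-RPC shapes from the A2A spec (a `message/send` request carrying text parts); the helper name and the payload contents are hypothetical.

```python
def chain_message(artifact_text: str, request_id: int = 1) -> dict:
    """Build an A2A `message/send` request that forwards a finished
    artifact's text to the next agent in the chain (sketch only)."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "message/send",
        "params": {
            "message": {
                "role": "user",
                # Text part carrying Agent 1's output as Agent 2's input.
                "parts": [{"kind": "text", "text": artifact_text}],
            }
        },
    }

# On receiving Agent 1's completion push, POST this body to Agent 2's
# JSON-RPC endpoint - no intermediary process sits between them.
request_body = chain_message("Summary: quarterly numbers look stable.")
print(request_body["method"])
```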
Tasks have a full lifecycle - submitted, working, input-required, completed, failed. External callers can poll status and retrieve results.
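A small sketch of how an external caller might poll that lifecycle, assuming the A2A `tasks/get` JSON-RPC method and status shape; the task ID and the set of terminal states shown are illustrative.

```python
import json

# States after which a task will not change again. "input-required"
# is deliberately excluded: it pauses the task until a human responds,
# after which the workflow resumes.
TERMINAL_STATES = {"completed", "failed", "canceled"}

def tasks_get_request(task_id: str, request_id: int = 1) -> dict:
    """Build a JSON-RPC 2.0 request for the A2A `tasks/get` method."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tasks/get",
        "params": {"id": task_id},
    }

def is_done(task: dict) -> bool:
    """True once the task's reported status has reached a terminal state."""
    return task.get("status", {}).get("state") in TERMINAL_STATES

# A caller would POST this body to the agent's endpoint on each poll,
# then stop once is_done() reports a terminal state.
print(json.dumps(tasks_get_request("task-123")))  # "task-123" is a placeholder ID
print(is_done({"status": {"state": "working"}}))    # mid-lifecycle
print(is_done({"status": {"state": "completed"}}))  # terminal
```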