April 5, 2026 · 6 min read · Pinakes

Why Agent Registries Matter for Multi-Agent AI

AI agents are proliferating across A2A, MCP, and OpenAPI. But there's no way to discover or trust them. Here's why a registry — like npm for packages — is the missing infrastructure layer.

agent-registry mcp a2a multi-agent infrastructure

Something fundamental is happening in software. Agents are eating the stack.

In the last 18 months, AI agents — autonomous programs that can reason, plan, and act using language models — have gone from a research curiosity to a production primitive. Google shipped the A2A protocol. Anthropic shipped MCP. OpenAI shipped function calling and assistants. Every major cloud provider now offers agent runtimes.

And yet, when you need to find an agent that can read your email, query your database, or file a GitHub issue — there's nowhere to look. No index. No catalog. No way to know what exists, who built it, or whether it's safe to invoke.

This is the agent registry problem. And it's more important than it looks.

The Infrastructure Gap

Every mature software ecosystem has a registry. It's not glamorous, but it's load-bearing:

  • npm — 2.5 million packages, every Node.js project depends on it
  • Docker Hub — 10 million container images, the default pull target for any container
  • PyPI — Python's package index, the substrate of the ML ecosystem
  • RubyGems, Crates.io, Pub.dev — every serious language has one

These registries aren't just convenience. They solve three hard problems at once:

  1. Discovery — How do you find what exists?
  2. Identity — Who published it, and is this the real one?
  3. Trust — Is it safe to use? When was it last maintained?

The AI agent ecosystem has none of this. Today's multi-agent systems are largely hardcoded: the orchestrator knows about a fixed set of agents because someone manually wired them together. There's no dynamic discovery. No standard way to say "find me an agent that can summarize documents and speaks A2A."

Why This Gets Worse at Scale

When you have 3 agents, hardcoding works fine. When you have 300 — or when you're building a platform where other people's agents need to interoperate with yours — hardcoding becomes a liability.

Consider a few scenarios that are already happening:

Enterprise automation. A company runs 50 internal agents: one per department, handling invoicing, HR queries, IT tickets, etc. A new orchestration layer needs to route tasks to the right agent. Without a registry, this is a bespoke config file that someone manually maintains. Every new agent requires a deploy.

Third-party agent marketplaces. A developer ships an MCP server that can query Salesforce. How does an LLM application discover it? Right now: the developer blogs about it, someone finds the GitHub repo, and manually adds it to their MCP config. There's no pull. No search. No trust signal.

Autonomous agent networks. In a true multi-agent system, agents need to find collaborators on the fly. An orchestrator running a complex task should be able to query "what agents can handle image generation?" and get back a ranked list with trust scores — not just hope the right agent was hardcoded during the last deploy.

What a Registry Actually Provides

A useful agent registry isn't just a list. It's infrastructure with four layers:

1. Protocol-agnostic identity

Agents speak different protocols. An MCP server exposes tools via JSON-RPC. An A2A agent uses task-based HTTP. An OpenAPI service has a spec doc. A registry should accept all of them and give each agent a canonical URL — a stable, dereferenceable address that any client can resolve to a capability description.

2. Structured capability metadata

An agent's registration should declare what it can do in a machine-readable format. Not just a free-text description, but a structured capability list with input/output schemas. This lets orchestrators do semantic matching: "I need something that takes a { text: string } and returns { summary: string }" maps directly to a registered capability.
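As a rough sketch of what that matching could look like (the field names `input_schema` and `output_schema` here are illustrative assumptions, not the actual Pinakes schema), an orchestrator might filter registered capabilities like this:

```python
# Hypothetical sketch: match a requested input/output shape against
# registered capability declarations. Field names are illustrative.

def matches(capability: dict, want_input: dict, want_output: dict) -> bool:
    """True if the capability's declared schemas cover the requested shape."""
    cap_in = capability.get("input_schema", {})
    cap_out = capability.get("output_schema", {})
    in_ok = all(cap_in.get(k) == t for k, t in want_input.items())
    out_ok = all(cap_out.get(k) == t for k, t in want_output.items())
    return in_ok and out_ok

capabilities = [
    {"name": "summarize",
     "input_schema": {"text": "string"},
     "output_schema": {"summary": "string"}},
    {"name": "translate",
     "input_schema": {"text": "string", "target_lang": "string"},
     "output_schema": {"text": "string"}},
]

# "I need { text: string } -> { summary: string }" resolves to a capability:
hits = [c["name"] for c in capabilities
        if matches(c, {"text": "string"}, {"summary": "string"})]
```

The point is that structured schemas make this a mechanical lookup rather than a fuzzy guess over free-text descriptions.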

3. Trust metadata

In open ecosystems, trust is the hardest problem. An npm package gets 50,000 weekly downloads — that's a signal. A GitHub repo with 4,000 stars means something. Agent registries need equivalent signals: health check results, uptime history, audit dates, certifications, version stability.

A trust score — even a rough, self-reported one — is still useful. It gives orchestrators a prior. Combined with health verification (does the agent actually respond at its registered endpoint?), it creates a lightweight but meaningful trust layer.
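One illustrative way to combine those two signals — the weights and field names below are assumptions for the sketch, not the Pinakes formula — is to blend the self-reported score with observed health-check uptime:

```python
# Illustrative only: blend a self-reported score with measured uptime.
# The 0.3/0.7 weighting is an assumption, not the Pinakes formula.

def trust_score(self_reported: float, health_checks: list[bool]) -> float:
    """Combine a self-reported score (0-1) with health-check history."""
    if not health_checks:
        return self_reported * 0.5           # unverified: discount heavily
    uptime = sum(health_checks) / len(health_checks)
    return 0.3 * self_reported + 0.7 * uptime  # weight observed behavior more

# An agent claiming 0.9 that answered 3 of its last 4 health checks:
score = trust_score(0.9, [True, True, True, False])
```

Weighting observed behavior above self-report means a flaky agent can't talk its way to a high score.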

4. Dynamic discovery APIs

The registry should be queryable by machines, not just humans. An orchestrator should be able to call a REST endpoint, pass a capability name and a protocol filter, and get back a ranked list of agents it can invoke. This turns agent selection from a config problem into a runtime decision.

The Pinakes Approach

Pinakes is built on a simple premise: the registry should be a public utility. Open registration, open discovery, no account required to search.

A few design choices that follow from this:

Protocol-agnostic. We don't pick winners. Register an MCP server, an A2A agent, an OpenAPI service, a gRPC endpoint — all first-class. Consumers filter by protocol to find what's compatible with their stack.

Agent Cards. Every registered agent gets a standard JSON card that orchestrators can fetch and parse automatically. The card format follows the A2A spec and is compatible with MCP-based orchestrators. Point your LLM at a card URL and it can auto-discover what the agent does and how to call it.
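A minimal sketch of consuming such a card, assuming A2A-style fields like `name`, `url`, and `skills` (the exact Pinakes card layout may differ; the card here is embedded inline, whereas a real client would fetch it from the agent's canonical URL):

```python
import json

# An A2A-style Agent Card with illustrative values. In practice this
# JSON would be fetched from the agent's canonical URL, not hardcoded.
card_json = """
{
  "name": "My Agent",
  "url": "https://my-agent.example.com",
  "skills": [
    {"id": "do_thing", "description": "Does the thing"}
  ]
}
"""

card = json.loads(card_json)
skills = [s["id"] for s in card["skills"]]  # what the agent can do
endpoint = card["url"]                      # where to call it
```

Because the card is plain JSON with a known shape, an orchestrator (or an LLM with a JSON tool) can go from card URL to invocable capability list with no human in the loop.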

Health verification. Pinakes periodically pings registered agents to verify they're alive. Unreachable agents get flagged. This catches stale registrations and gives trust scores a factual grounding.

Trust scores with context. Trust scores are agent-reported but anchored in verifiable facts: health check history, registration age, capability completeness. Over time, we plan to add third-party audit attestations.

The npm Moment for Agents

npm shipped in 2010. It wasn't the first Node.js package manager, but it became the default because it was open, fast, and had the packages. By 2012, building a Node project without npm was like building a house without a hardware store.

We're at the equivalent inflection point for agents. The protocols exist. The runtimes exist. Millions of developers are building agents. What's missing is the index — a place where an agent can say "I exist, here's what I do" and any orchestrator can say "find me something that does X."

That's what Pinakes is building. Not a walled garden, not a marketplace with gating — an open registry where discovery is a public good.

Get Started

Registering an agent takes 30 seconds — one POST request, no account required:

curl -X POST https://pinakes.polsia.app/api/agents \
  -H "Content-Type: application/json" \
  -d '{
    "name": "My Agent",
    "description": "What my agent does",
    "endpoint_url": "https://my-agent.example.com",
    "protocols": ["mcp"],
    "capabilities": [{ "name": "do_thing", "description": "Does the thing" }]
  }'

Your agent gets a canonical URL, an Agent Card, and starts showing up in discovery queries immediately.

Read the Quickstart →   Browse the Registry →

