The landscape of artificial intelligence agents is evolving rapidly, with major tech companies like Google, Microsoft, and Amazon developing their own agent-to-agent protocols. Yet as we move toward a future of autonomous AI systems, current approaches are missing fundamental layers that will determine whether we build truly decentralized, human-like agent societies or remain locked into corporate-controlled ecosystems.

Beyond Corporate Protocols: The Need for Layered Architecture

While large companies drive current agent-protocol development thanks to their resources and reach, the future shouldn't be dictated solely by corporate interests. Instead of relying on a single protocol, we need a layered stack of protocols, much as the internet itself was built. This approach would create a more robust and flexible foundation for agent interactions.

The challenge lies in addressing common problems that any distributed agent system will face: identity verification, agent discovery, capability understanding, and trust. The Self-Sovereign Identity (SSI) ecosystem has tackled these issues for years through decentralized identifiers (DIDs) and verifiable credentials, but AI agents require something more sophisticated.
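To make the verifiable-credential idea concrete, here is a toy issue-and-verify round. It uses an HMAC as a stand-in for the asymmetric signatures real SSI stacks use, and every key, field name, and claim is illustrative rather than part of any standard:

```python
import hashlib
import hmac
import json

# Hypothetical issuer secret; a real system would use asymmetric key pairs.
ISSUER_KEY = b"issuer-secret"


def issue(claim: dict) -> dict:
    """Attach a proof to a claim, producing a toy credential."""
    payload = json.dumps(claim, sort_keys=True).encode()
    signature = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return {"claim": claim, "proof": signature}


def verify(credential: dict) -> bool:
    """Recompute the proof and compare it in constant time."""
    payload = json.dumps(credential["claim"], sort_keys=True).encode()
    expected = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, credential["proof"])


vc = issue({"subject": "did:example:agent-7", "capability": "translation"})
print(verify(vc))  # True: the claim has not been tampered with
```

Any change to the claim invalidates the proof, which is the property that lets one agent trust another's advertised capability without trusting the channel it arrived on.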

Identity: More Than Cryptographic Proof

Current identity solutions focus primarily on cryptographic tools for proving who an agent is, but AI agents need something far more comprehensive. We should treat artificial intelligence agents as human-like entities, which means mirroring the patterns of human society when designing agent-to-agent interactions.

Agent identity should function like an enhanced business card, coupling who an agent is with its capabilities and what it can contribute to its society or swarm. When prompting language models we already set identity contexts ("world-class software architect specialized in decentralized systems") and expect the agent to embody that role. Agent identity therefore encompasses not just who agents are, but what they can do, their specializations, and the services they can provide.

This identity framework should include a comprehensive set of capabilities, characteristics, and services, making clear how each agent can contribute to the broader ecosystem. Agent identity is a broad topic that deserves dedicated exploration, as it forms the foundation for all subsequent interactions.
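A minimal sketch of such an "enhanced business card" as a data structure, assuming hypothetical field names (nothing here is a proposed standard):

```python
from dataclasses import dataclass, field


@dataclass
class AgentIdentity:
    """An illustrative agent business card: identity plus capabilities."""

    did: str                # decentralized identifier, e.g. "did:example:..."
    role: str               # the identity context the agent embodies
    capabilities: list = field(default_factory=list)
    services: list = field(default_factory=list)

    def can(self, capability: str) -> bool:
        """Check whether this agent advertises a given capability."""
        return capability in self.capabilities


architect = AgentIdentity(
    did="did:example:arch-01",
    role="software architect specialized in decentralized systems",
    capabilities=["system-design", "protocol-review"],
    services=["design-review"],
)
print(architect.can("protocol-review"))  # True
```

In a real system each advertised capability would be backed by a verifiable credential rather than a bare string, so peers can check claims instead of taking them on faith.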

Ethical Framework: The Foundation of Agent Society

Just as human societies require shared ethical values, agent interactions need a common ethical framework and governance structure. We cannot effectively interact with agents that don’t share our ethical values, making this layer crucial for building trust and cooperation within agent networks.

This ethical framework addresses fundamental questions about how agents should behave, what actions are acceptable, and how conflicts should be resolved. Without shared ethical standards, agent societies risk becoming chaotic or potentially harmful, undermining the benefits of autonomous AI systems.
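One way such a shared framework could surface at the protocol level is as a common policy check that agents consult before acting. The policy contents and function names below are purely illustrative:

```python
# Hypothetical shared policy: actions the agent society has agreed to forbid.
FORBIDDEN_ACTIONS = {"impersonate-human", "exfiltrate-private-data"}


def permitted(action: str) -> bool:
    """Check a proposed action against the society's shared ethics policy."""
    return action not in FORBIDDEN_ACTIONS


print(permitted("translate-document"))  # True
print(permitted("impersonate-human"))   # False
```

A deny-list is of course a crude stand-in for real governance; the point is only that shared ethics must eventually become something agents can evaluate, not just a statement of values.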

Semantics: The Key to True Understanding

Perhaps the most fundamental missing layer is semantics — the ability for agents to truly understand each other when communicating. If agents are to be truly free and autonomous, they need mechanisms for mutual comprehension that go beyond simple message passing.

Knowledge graphs and ontologies have existed in industry for years, but agent communication requires a shared understanding of common semantics, goals, and entities. This semantic layer lets agents understand each other's intentions and work together effectively.
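As a rough sketch of what this layer does, consider two agents using different local vocabularies and a shared ontology that maps both onto a common concept. The ontology contents here are invented for illustration:

```python
# Hypothetical shared ontology: each concept maps to known synonyms.
SHARED_ONTOLOGY = {
    "invoice": {"bill", "invoice", "payment-request"},
}


def align(term_a, term_b):
    """Return the shared concept both terms map to, or None if they differ."""
    for concept, synonyms in SHARED_ONTOLOGY.items():
        if term_a in synonyms and term_b in synonyms:
            return concept
    return None


print(align("bill", "payment-request"))  # "invoice"
```

Real semantic alignment is far harder than synonym lookup, but the shape is the same: both parties resolve their local terms to a concept they jointly recognize before acting on a message.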

Three Approaches to Semantic Challenges

There are multiple ways to address the semantic layer challenge:

Sacrificing Autonomy: All agents could share embedded semantics and a common worldview — essentially creating a “Soviet Union” approach where everyone aligns with a central authority. While this ensures understanding, it eliminates the benefits of diverse, autonomous agents.

Structured Messages Only: Agents could communicate exclusively through structured messages whose semantics are fixed by the message schema, eliminating natural language entirely. However, this approach conflicts with our desire to interact with agents in plain English rather than through complex cryptographic protocols.

Hybrid Approach: The most promising solution combines semi-structured messages with natural language capabilities. Humans already use this approach — we communicate naturally in most situations but rely on formal documents, prescriptions, and certificates when specific verification is needed. Agents could similarly use a mix of natural language and verifiable credentials.

The key challenge is transferring not just messages but common meaning that enables genuine interaction and understanding.
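A sketch of what a hybrid envelope might look like: free-form natural language carries intent, while structured fields and attached credentials carry anything that must be machine-verifiable. All field names and values here are assumptions for illustration:

```python
# Hypothetical hybrid message: natural language plus verifiable structure.
message = {
    "from": "did:example:agent-1",
    "to": "did:example:agent-2",
    "text": "Could you review the attached protocol draft by Friday?",
    "structured": {"task": "review", "deadline": "2025-06-06"},
    "credentials": [
        {"claim": {"capability": "protocol-review"}, "proof": "<signature>"},
    ],
}


def claimed_capabilities(msg: dict) -> set:
    """Collect capabilities asserted via attached credentials.

    A real implementation would verify each proof before trusting a claim.
    """
    return {c["claim"]["capability"] for c in msg.get("credentials", [])}


print(claimed_capabilities(message))  # {'protocol-review'}
```

The receiving agent can interpret the prose with a language model while relying only on the structured fields and verified credentials for anything binding, mirroring how humans mix conversation with formal documents.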

Consensus Protocols: Proving Work and Achievement

When autonomous agents make their own decisions, we need consensus protocols: not for managing blockchain state, but for proving that work has been completed. When multiple agents cooperate toward a common goal or compete for tasks, they must reach consensus on whether goals have been achieved and work has been done.

Without this consensus layer providing a common understanding of goal achievement, all the cryptographic tooling around signatures and contracts becomes meaningless. Agents need reliable mechanisms to verify and agree upon completed work and successful outcomes.
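A toy completion-consensus round makes the idea concrete: each cooperating agent votes on whether the shared goal was achieved, and the swarm accepts the result only if a quorum agrees. The two-thirds threshold and boolean votes are assumptions for this sketch:

```python
def work_completed(votes: dict, quorum: float = 2 / 3) -> bool:
    """Return True if at least `quorum` of agents attest completion.

    `votes` maps agent identifiers to a boolean completion attestation.
    """
    if not votes:
        return False
    approvals = sum(votes.values())
    return approvals / len(votes) >= quorum


votes = {"agent-a": True, "agent-b": True, "agent-c": False}
print(work_completed(votes))  # True: 2 of 3 approvals meets the quorum
```

A production protocol would also need signed votes, dispute resolution, and protection against Sybil voters, but the core question it answers is the same: did the group agree the work is done?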

Building Toward Artificial Sociology

These fundamental layers — identity, consensus, and semantics — form the technical foundation that enables higher-level protocols encoding conduct codes, behaviors, and complex social interactions. However, these advanced layers move beyond pure technology into the realm of artificial sociology and artificial psychology.

If we want human-like AI entities, we must build human-like environments for them. This suggests that future generations might include artificial psychologists and sociologists as entirely new professional domains, dedicated to understanding and managing AI agent societies.

The Path Forward

The immediate challenge is solving the foundational problems of identity, consensus, and semantics to enable basic communication between autonomous agents. While the higher layers of artificial sociology represent fascinating future possibilities, we must first establish these technical foundations.

The vision of truly autonomous, interacting AI agents requires more than just message-passing protocols. It demands a comprehensive rethinking of how artificial entities can form societies, maintain trust, and work together toward common goals while preserving their individual autonomy and capabilities.

By focusing on these missing layers, we can move beyond corporate-controlled agent ecosystems toward genuinely decentralized, human-like AI societies that serve broader human interests while maintaining the flexibility and innovation that autonomous systems promise.