Through Alicia Juarrero's lens of constraints and hierarchical organization, we discover that holocracy represents something profound: a working implementation of the constraint dynamics that enable complex systems to maintain identity while navigating change. For AI agents operating in dynamic, uncertain environments, these same dynamics provide the missing foundation for genuine intelligence.
The Paradox at the Heart of Intelligence
Picture a flock of starlings wheeling across the evening sky, thousands of birds moving as one fluid shape, contracting and expanding, splitting and merging, yet never colliding. No bird commands the others. No central brain choreographs the dance. Each bird follows simple rules in response to its neighbors, yet the flock achieves coherence that seems to transcend its parts. This is the paradox every intelligent system must solve: how to be both autonomous and coordinated, both independent and integrated, both responsive to context and true to identity.
Traditional hierarchical organizations fail at this paradox. The CEO issues directives that cascade down through middle managers to front-line workers, but by the time instructions reach those who actually interface with customers or products, context has shifted. The orders no longer fit reality. Workers either follow outdated commands and fail, or ignore their superiors and succeed—but undermine the organization's coherence. The hierarchy trades autonomy for coordination, and gets neither.
Classical AI architectures mirror this failure. Monolithic systems achieve coherence through rigid programming but cannot adapt to novel situations. Swarm approaches achieve local autonomy but struggle to maintain global purpose. Hierarchical planners break down when reality refuses to match their models. Every approach sacrifices either autonomy or coherence, as if the two were fundamentally incompatible.
But the starlings suggest otherwise. So do ecosystems, immune systems, nervous systems, and every other complex adaptive system that nature has refined over billions of years. These systems solve the autonomy-coherence paradox through something traditional hierarchies lack: they organize through constraints rather than commands.
This is the profound insight at the heart of Alicia Juarrero's Context Changes Everything. Constraints are not limitations imposed from outside. They are relationships of dependence that shape what's possible without dictating what's actual. A catalyst enables chemical reactions without participating in them. A riverbank channels water without pushing it. A musical key constrains which notes belong together without forcing any particular melody. Constraints create coherence by reshaping possibility spaces, not by issuing orders.
When we view organizations and AI systems through this lens, something remarkable emerges. Holocracy—Brian Robertson's organizational system that distributes authority through roles rather than concentrating it in managers—turns out to be something more than an interesting management fad. It's a practical implementation of constraint-based hierarchical organization. And constraint-based hierarchical organization, Juarrero demonstrates, is how complex systems actually work.
The question this paper explores is not whether AI agents should adopt holocratic principles. The question is why we ever thought they could work any other way. Once you understand constraints, holarchies, and the rate-differentiated levels of organization that characterize every robust adaptive system, holocracy for AI agents becomes not an interesting option but an obvious necessity. The starlings have known this all along.
Juarrero's Framework: A Different Way of Seeing
When Forces Aren't Enough
Imagine you're watching a master chef prepare a sauce. She adds a pinch of salt, and suddenly flavors that were hiding emerge, the sauce comes alive, everything coheres. The salt didn't push the other ingredients around. It didn't add energy to the system. It changed what was possible. It enabled flavors to interact that couldn't before. This is what Juarrero means by a constraint.
For three centuries, Western science has explained the world through forces—billiard balls colliding, gravity pulling apples, electrons repelling each other. Forces transfer energy. They make things happen through what philosophers call efficient causation: one event directly causing another through mechanical interaction. This framework works beautifully for simple systems. But when you try to explain how a flock of starlings moves, or how an ecosystem maintains itself, or how a society coordinates millions of individual actions into recognizable patterns, forces aren't enough.
Forces can push things around, but they can't create new possibilities. They can't coordinate independent elements into coherent wholes. They can't shape behavior without dictating it. For that, you need something different. You need constraints.
Juarrero distinguishes three types, each with its own character and role. Enabling constraints are like the chef's salt—they make new interactions possible that couldn't happen before. A catalyst speeds up a chemical reaction without being consumed. A bridge enables crossing that the river prevented. A shared language enables conversations that isolation prohibited. These constraints create pathways, reduce barriers, open channels. They coordinate previously independent elements into new configurations without forcing any particular outcome.
Governing constraints work differently. They regulate and modulate existing processes without participating in them directly. Think of a thermostat. It doesn't heat your house—the furnace does that. But it controls when and how much the furnace runs, maintaining temperature within a comfortable range. Or consider regulatory genes, which Juarrero discusses extensively. They don't build proteins themselves—that's the job of structural genes. But they control the timing, location, and amount of protein production, orchestrating a symphony of molecular interactions without playing an instrument.
Constitutive constraints are different again. They define what something is by creating interlocking interdependencies that hold a system's identity together. The rules of chess don't move any pieces, but they constitute what chess is. Break the rules and you're playing a different game. Similarly, the feedback loops and closure of processes in a living cell constitute its identity as that particular type of cell. Change those fundamental constraints and you have a different kind of system entirely.
What makes all three types of constraints different from forces is this: they shape possibility spaces without adding energy. They're rate-independent, as Juarrero says—they influence outcomes without intervening energetically in the processes they affect. The timing of a playground swing kick doesn't add energy to the kick, but it determines whether you go higher or drag to a stop. The rules of a market don't buy or sell anything, but they shape every transaction that occurs within them.
This distinction changes everything. Because if control can be exercised through constraints rather than forces, then you can coordinate complex systems without requiring central command. You can maintain coherence without sacrificing autonomy. You can enable adaptation without losing identity. This is how nature does it. And this, Juarrero argues, is what we've been missing.
The Janus-Faced Nature of Holons
Arthur Koestler, the Hungarian-British writer, noticed something peculiar about organized systems. Everything that seems like a whole is simultaneously a part of something larger. A cell is a whole unto itself, complete with membrane, nucleus, and internal organization. But it's also part of a tissue. The tissue is a whole, with its own structure and function. But it's part of an organ. The organ is a whole, but it's part of an organism. The organism is a whole, but it's part of an ecosystem.
Koestler called these dual-natured entities holons—from the Greek holos (whole) and the suffix -on (particle or part). Like the Roman god Janus with his two faces, every holon looks inward toward the components that comprise it and outward toward the larger context in which it's embedded. This Janus-faced quality isn't incidental. It's essential to how complex systems work.
Think of yourself in an organization. In your team, you're a whole person with initiative, judgment, and autonomy. But from the department's perspective, you're a component filling a specific role. The department is a whole with its own identity and purpose, but from the organization's perspective, it's one unit among many. The organization is a whole with its own culture and mission, but from the market's perspective, it's one player in a larger ecosystem.
What Koestler recognized, and what Juarrero emphasizes, is that holons exhibit what he called "self-assertive tendencies." They maintain their identity. They resist dissolution. A cell doesn't simply merge back into random chemistry when conditions get tough—it fights to maintain its boundary, its internal organization, its characteristic patterns. An organization doesn't instantly dissolve when the market shifts—it adapts, reorganizes, finds new ways to sustain its essential purpose.
But where does this self-assertiveness come from? Not from some mysterious vitalist force. It emerges from the structure of constraints that constitute the holon. Feedback loops that correct deviations. Closure of processes that reinforce patterns. Interlocking interdependencies that create metastable configurations resistant to perturbation. The enhanced metastability that comes from these constraint structures is what generates the coherence we perceive as self-assertiveness.
When holons relate to each other, they form holarchies—systems of nested interdependence where each level maintains its own integrity while participating in larger wholes. This is different from traditional hierarchies in crucial ways. Traditional hierarchies imagine a linear chain of command from top to bottom, with control flowing exclusively downward through force or authority. Holarchies recognize that influences flow in multiple directions: bottom-up as components shape the wholes they comprise, top-down as contexts constrain components, and laterally as entities at the same level interact. Warren McCulloch called this latter pattern heterarchy—organization without absolute tops and bottoms.
The key insight is that these interlevel relationships aren't energetic exchanges. They're relationships of constraint. A heart doesn't command individual cells through force. Rather, the circulation patterns the heart maintains create a context—a constraint regime—within which cells operate. The cells, in turn, through their collective metabolism and signaling, create the conditions that maintain the heart. Neither level controls the other through force. Both constrain each other through the interdependencies they create.
When Slower Governs Faster
Here's something that seems backwards at first: in hierarchical systems organized by constraints, the slower processes control the faster ones. Your heartbeat is faster than your breathing, but your breathing rhythm influences your heart rate variability. Individual neurons fire rapidly, but slower brain rhythms modulate their firing patterns. Market transactions happen constantly, but quarterly earnings cycles and annual planning processes shape trading behavior.
Stanley Salthe, one of the hierarchy theorists Juarrero draws from, explains why this must be so. If a fast process tried to control a slower one, it would complete its cycle before feedback from the slower process arrived. Imagine trying to steer a supertanker by making rapid adjustments to the rudder. Each adjustment completes before the ship begins to turn. The controls are ineffective because the rates are mismatched.
But when slower processes constrain faster ones, something different happens. The fast process cycles many times within one cycle of the slower process. The slower process appears constant from the fast process's perspective—it provides the stable context within which rapid events occur. Think of how genes (stable over an organism's lifetime) constrain metabolism (cycling continuously). Or how organizational culture (evolving slowly) shapes daily decisions (made constantly).
These rate differences do more than enable control. They mark boundaries between hierarchical levels. Sharp discontinuities in process rates reveal where one level of organization ends and another begins. Within a level, processes synchronize—they influence each other rapidly through tight coupling. Between levels, influences are slower and operate through different mechanisms. The gap in rates creates space for each level to maintain its own identity while remaining integrated with adjacent levels.
This is why tissues can smooth over the individual variability of their component cells. The cells turn over rapidly—some die, others are born, all are constantly active. But the tissue persists. It operates at a slower rate, so from the tissue's perspective, individual cellular events average out into a continuous background. The tissue sees itself as a stable whole, not a collection of transient parts. Unless something goes wrong and cells start cycling faster than the tissue can track—then you get cancer, where fast-growing cells slip out of the tissue's regulatory control.
The interfaces between levels—the boundaries where different rates meet—are where the magic happens. These aren't passive barriers like the walls of nested boxes. They're active filters, gates, and translators. Think of your eardrum. Sound waves in air (fast, high-frequency) need to couple with fluid waves in the cochlea (slower, different medium). The eardrum doesn't just pass vibrations through. It impedance-matches the two media, adjusting amplitudes and frequencies so that information can cross the boundary without losing coherence.
Similarly, in organizations, the interface between strategic planning (slow, coarse-grained) and daily operations (fast, fine-grained) requires active management. Strategies don't directly control actions. They set parameters, establish priorities, define boundaries. Tactical processes translate these constraints into specific decisions. Interface roles—middle managers in traditional organizations, lead links in holocracy—mediate between timescales, ensuring both levels can maintain their characteristic rates while remaining coupled.
The Context That Cannot Be Ignored
Perhaps Juarrero's most profound challenge to traditional thinking concerns how we define things. Western philosophy and science have long aspired to what Thomas Nagel called "the view from nowhere"—objective truth independent of perspective, definitions based on intrinsic properties that hold regardless of context. Particles have mass whether or not they're embedded in a molecule. Elements have atomic numbers that don't change with circumstance. These are extensional definitions—you can point to the thing itself and measure its properties directly.
This approach works for simple, context-independent entities. But for anything organized by constraints, it fails completely. Consider the role of "mother" in a family. You can't define mother by pointing to physical properties. The same woman might be a mother in relation to her children, a daughter in relation to her parents, a sister in relation to her siblings, and a wife in relation to her spouse. Her identity shifts with context. She's defined intensionally—by her role and relationships within a particular constraint regime.
Or consider money. A dollar bill is intrinsically just paper and ink. What makes it money is its position in a vast constraint network of laws, institutions, expectations, and behaviors. Change the context—take that dollar to a society with different currency, or to a future where that government no longer exists—and the same physical object has utterly different properties. Its nature is relational, not intrinsic.
This matters profoundly for understanding complex systems. When constraints shape a system's organization, each element must be defined by its neighborhood and moment—by where it sits in the constraint network and when it exists in the system's temporal evolution. Properties emerge from position and relationship, not from composition alone. This is what Juarrero means when she says hierarchically organized systems require intensional rather than extensional definition.
Think about what this implies. It means qualitative differences between levels of organization are real and irreducible. A tissue isn't just a collection of cells—it's a new kind of entity with properties that emerge from how those cells are constrained together. A society isn't just a collection of people—it's a constraint regime that creates possibilities and patterns that isolated individuals cannot produce. You can't reduce the higher level to the lower one because the higher level's properties are constituted by relationships that don't exist at the lower level.
It also means context isn't optional background information we can safely ignore when convenient. Context creates the constraints that constitute what things are. Change the context and you change the entity itself, not just its circumstances. Domestic dogs and wolves are the same species genetically, but the constraint regime of domestication transformed their behavior, morphology, and even cognition. Remove those constraints—let domestic pigs go feral—and within generations they revert to wild boar characteristics. The constraints were real, and their effects were real, even though no intrinsic properties changed.
For AI and organizational design, this insight is revolutionary. It means you can't specify intelligence or capability through fixed internal properties alone. You must design the constraint regime—the context—within which agents operate. You must define roles intensionally, by their relationships and position within the larger system. You must recognize that the "same" agent will be qualitatively different in different constraint contexts. There is no view from nowhere. Context changes everything.
Three Levels, Always
Look closely at any coherent system organized by constraints and you'll find it's never just one level. It's always at least three, arranged in a particular way. This trilevel structure isn't arbitrary—it's necessary for constraint-based organization to work.
At the center sits what hierarchy theorists call the focal level—the level you're currently examining or operating at. For a biologist studying an organism, the focal level might be individual organs. For an ecologist studying a forest, it might be particular species. For a business analyst studying a company, it might be departments. This is where your attention is focused.
But the focal level never exists alone. Below it sits the component level—the parts that comprise the focal level. For organs, that's tissues and cells. For species, it's individual organisms. For departments, it's teams and people. These components interact rapidly among themselves. They exhibit tight coupling, frequent synchronization, similar timescales. They form the active substrate from which the focal level emerges.
Above the focal level sits the embedding context—the larger whole within which the focal level is situated. For organs, that's the whole organism. For species, it's the ecosystem. For departments, it's the entire organization. The embedding context operates more slowly than the focal level, with longer feedback loops and coarser grain. It provides the stable background against which focal-level dynamics play out.
Why must it always be three? Because constraint-based control requires asymmetry in both directions. The focal level must be faster than its embedding context so that rapid fluctuations at the focal level average out from the context's perspective, allowing the context to maintain stable coherence. But the focal level must be slower than its components so that it can act as a governing constraint that appears constant to fast component processes, shaping what they do without micromanaging every detail.
This trilevel structure creates space for both emergence and control. Components interact to generate the focal level (bottom-up emergence). The embedding context constrains how the focal level behaves (top-down control). The focal level maintains its own integrity between these influences (self-assertive coherence). Remove any of the three levels and the whole architecture collapses.
Consider how this plays out in practice. Individual cells metabolize rapidly, constantly transforming molecules. Tissues integrate these cellular activities, operating at a slower rate. The organism provides the context, slower still, within which tissue dynamics unfold. The organism doesn't directly control individual molecules—that would be impossible given the rate mismatch. Instead, it maintains conditions (temperature, pH, nutrient availability) that constrain which metabolic pathways are viable. Tissues translate these whole-organism constraints into signals that modulate cellular behavior. Cells respond to these signals while maintaining their own autonomous metabolism.
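The rate logic can be illustrated with a toy simulation. This is only a sketch: the tenfold rate separations, variable names, and update rules are illustrative assumptions, not anything drawn from Juarrero's text.

```python
# Three nested loops with roughly a tenfold rate separation per level.
# The slower level never touches fast events directly; it only updates
# the parameters (constraints) that the faster level reads as context.
organism = {"nutrient_level": 1.0}   # slowest: maintains conditions
tissue = {"signal": 0.0}             # middle: translates conditions into signals
cell_log = []                        # fastest: metabolizes every tick

for t in range(1000):
    if t % 100 == 0:                              # organism-rate update
        organism["nutrient_level"] = 1.0 + 0.05 * (t // 100)
    if t % 10 == 0:                               # tissue-rate update
        tissue["signal"] = organism["nutrient_level"] - 1.0
    # cell-rate update: shaped by, never commanded by, the tissue signal
    cell_log.append(1.0 + tissue["signal"])

# From the organism's view, the 1000 cell events average into a smooth
# background; from a cell's view, the tissue signal is effectively constant.
```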
Each level is both constrained and constraining. Each maintains its own characteristic dynamics. Each is defined intensionally by its position in this trilevel structure. This is how holarchies work. This is how nature builds complex adaptive systems that are robust yet flexible, coherent yet responsive, autonomous yet coordinated.
And this, as we'll see, is exactly what holocracy implements—and exactly what AI agents need.
Holocracy: Constraint Theory Made Operational
An Organization Without Bosses
Walk into a holocratic organization and you might not notice anything unusual at first. People sit at desks, attend meetings, work on projects. But start paying attention to how decisions get made and you'll notice something strange. When someone has a question about authority—who can decide this? who's responsible for that?—the answer isn't a person's name. It's a role.
"The Marketing Coordinator handles that," someone might say. You ask who the Marketing Coordinator is. "Oh, that's Sarah right now. But she's also filling the Content Lead role and the Social Media Strategist role. And Tom used to be Marketing Coordinator until last month's governance meeting when we split out the Email Campaign Manager role and he moved into that one."
There are no managers. No one has direct authority over anyone else. Yet somehow work gets done, decisions get made, and the organization holds together. How?
The answer: holocracy is a practical implementation of constraint-based hierarchical organization. Where traditional organizations try to maintain coherence through command (managers telling employees what to do), holocracy maintains it through carefully designed constraints. Roles define domains of authority. Circles create contexts for coordination. Governance processes enable structural evolution. Tactical meetings synchronize operations. The constitution provides overarching rules that preserve organizational identity while allowing everything else to change.
It's a living example of what Juarrero describes theoretically. And once you see it through her framework, every element of holocracy reveals itself as an implementation of constraint dynamics.
Roles as Enabling Constraints
Consider what a role actually is in holocracy. It's not a job description listing tasks someone must perform. It's a bundle of three elements: a purpose (why this role exists), a domain (areas where this role has exclusive authority), and accountabilities (ongoing activities the role is expected to maintain).
The Marketing Coordinator role might have a purpose like "compelling brand presence in target markets," domains including "company blog" and "brand messaging standards," and accountabilities like "publishing weekly content" and "maintaining editorial calendar." Notice what's missing: no specification of how to do any of this. No mandated tactics. No required procedures.
This structure acts as an enabling constraint. It creates a channel—a possibility space—within which the person filling the role can act autonomously. The purpose orients action without dictating it, like a gradient in a possibility landscape that slopes toward certain outcomes without forcing any particular path. The domains grant authority that enables action, like opening a gate that was previously closed. The accountabilities set expectations that reduce uncertainty about what others can depend on, like establishing a rhythm that others can synchronize with.
But the role doesn't command. It doesn't transfer energy. It doesn't force compliance. It enables. The person filling the role brings their own judgment, creativity, and responsiveness to context. Two different people in the same role will enact it differently, yet both satisfy its constraints. This is multiply realizable functionality—exactly what Juarrero identifies as characteristic of constraint-based systems.
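To make the structure concrete, here is a minimal Python sketch of a role as a constraint bundle. The `Role` dataclass, the `satisfies` check, and the example fillers are illustrative assumptions rather than anything specified by the holocracy constitution.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Role:
    """A role is a bundle of constraints, not a task list."""
    purpose: str                       # why the role exists (orients action)
    domains: tuple[str, ...]           # areas of exclusive authority
    accountabilities: tuple[str, ...]  # ongoing expectations others rely on

marketing_coordinator = Role(
    purpose="compelling brand presence in target markets",
    domains=("company blog", "brand messaging standards"),
    accountabilities=("publishing weekly content", "maintaining editorial calendar"),
)

def satisfies(role: Role, observed_activities: set[str]) -> bool:
    """A filler satisfies the role if every accountability is maintained;
    how each one is maintained is left entirely to the filler."""
    return all(a in observed_activities for a in role.accountabilities)

# Two fillers, two different enactments, both valid (multiple realizability):
sarah = {"publishing weekly content", "maintaining editorial calendar", "video series"}
tom = {"publishing weekly content", "maintaining editorial calendar", "guest essays"}
assert satisfies(marketing_coordinator, sarah) and satisfies(marketing_coordinator, tom)
```

Note that `satisfies` inspects only whether accountabilities are maintained, never how. That gap between what is constrained and what is left open is where the filler's autonomy lives.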
And here's the crucial part: roles coordinate with each other through their purposes and domains, not through managerial oversight. When the Content Lead (purpose: "engaging content that serves audience needs") interacts with the SEO Specialist (purpose: "maximum discoverability in search"), their domains define clear boundaries. The Content Lead has authority over "editorial direction and voice," while the SEO Specialist has authority over "keyword strategy and technical optimization." Neither controls the other. Both constrain each other through their domain boundaries, creating a coordination dynamic that emerges from mutual constraint satisfaction rather than hierarchical command.
This is enabling constraint in action. Independent agents (people) coordinate into coherent patterns (effective marketing) through carefully designed possibility spaces (roles) without requiring top-down control.
Governance as Rate-Independent Regulation
Now watch what happens when something isn't working. Maybe the Marketing Coordinator role has become too broad—one person can't effectively maintain all its accountabilities. Or maybe a new need has emerged that doesn't clearly fall within any existing role's purpose. These are what holocracy calls "tensions"—felt gaps between current reality and sensed potential.
In a traditional organization, you'd escalate to a manager who would decide how to reorganize. But holocracy has no managers. Instead, it has a governance process—a structured meeting format where roles (not people) propose changes to organizational structure.
Here's how it works: Anyone feeling a tension can bring a proposal to the next governance meeting. "I propose we create a new Email Campaign Manager role, split from Marketing Coordinator, with the purpose 'effective email marketing campaigns that drive engagement.'" Other roles in the circle can ask clarifying questions, share reactions, raise objections if the proposal would cause harm or move the organization backward. If no objections survive scrutiny, the proposal is adopted. Immediately. The role now exists.
This governance process is a governing constraint in exactly Juarrero's sense. It regulates organizational structure without dictating what that structure should be. It operates at a different timescale than daily work—governance meetings happen weekly or monthly, while tactical work happens constantly. From the perspective of fast operational dynamics, governance appears nearly constant, a stable background that shifts only occasionally. But when it does shift, it changes the constraint regime within which operations unfold.
Critically, governance doesn't intervene energetically in the work. No one forces the new Email Campaign Manager to take specific actions. Instead, governance modifies the possibility space by creating a new channel (the role), establishing new boundaries (its domain), and setting new expectations (its accountabilities). The work itself continues to flow according to the purposes and authorities of the roles involved, but now that flow has different constraints shaping it.
This is rate-independent control—regulation without energetic participation. It's the organizational equivalent of regulatory genes that control protein expression not by building proteins themselves but by modulating when, where, and how much structural genes operate. Governance doesn't do the work; it shapes the constraints within which work gets done.
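A sketch of this move in code may help. Representing organizational structure as a dictionary of roles and objections as predicate functions is an assumption for illustration; the actual process involves structured rounds of questions, reactions, and objection testing.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Role:
    purpose: str
    domains: tuple[str, ...]
    accountabilities: tuple[str, ...]

def governance_meeting(structure: dict, proposal, objections) -> dict:
    """Adopt a proposal unless a validated objection survives scrutiny.
    Governance reshapes the constraint regime; it never performs the work."""
    for objection in objections:
        if objection(structure, proposal):  # would this cause harm or move us backward?
            return structure                # rejected: the regime is unchanged
    return proposal(structure)              # adopted immediately

# Tension: Marketing Coordinator is too broad. Proposal: split out a new role.
def split_out_email_role(structure: dict) -> dict:
    updated = dict(structure)
    updated["Email Campaign Manager"] = Role(
        purpose="effective email marketing campaigns that drive engagement",
        domains=("email campaigns",),
        accountabilities=("running and measuring email campaigns",),
    )
    return updated

structure = governance_meeting({}, split_out_email_role, objections=[])
assert "Email Campaign Manager" in structure
```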
Circles as Holons
As organizations grow, roles multiply. Soon you have twenty, thirty, fifty roles, all interacting. The coordination complexity becomes overwhelming. How does holocracy handle this?
Through circles—nested groups of roles organized around a shared purpose. The Marketing Circle might contain the Marketing Coordinator, Content Lead, SEO Specialist, Email Campaign Manager, and several other roles. This circle functions as a holon in Koestler's exact sense.
Face inward, and the circle is a whole unto itself. It has its own purpose ("effective marketing presence and customer engagement"), its own governance process for evolving its internal structure, its own tactical meetings for coordinating operations. The roles within the circle interact frequently—they're tightly coupled, operating at similar timescales, constantly synchronizing. From inside the circle, it's a complete world.
Face outward, and the circle is a part of something larger. It sits within the broader company, alongside other circles like Product Development, Customer Success, and Operations. The company as a whole has its own purpose ("creating value for customers and stakeholders"), and the Marketing Circle is one component contributing to that larger purpose.
This dual nature creates the trilevel structure Juarrero identifies as essential. Below the Marketing Circle sits its component roles. At the focal level sits the circle itself with its own dynamics and identity. Above it sits the company context within which the circle operates. Each level maintains its own characteristic rate and grain size. Individual roles adjust daily. The circle's structure evolves monthly through governance. The company's overall strategy shifts quarterly or annually.
And here's what makes circles brilliant: they have interfaces that actively manage information flow between levels. The Lead Link role brings purpose and priorities from the company level down to the circle, representing the needs of the larger whole. The Rep Link role brings tensions and information from the circle up to the company level, ensuring the component voice is heard. These link roles are precisely the "active dynamic gates that recode, standardize, harmonize, and adjust" that Juarrero describes as necessary for hierarchical organization.
The Lead Link doesn't command the circle—that would be traditional hierarchy. Instead, the Lead Link assigns people to roles, prioritizes among competing tensions, and allocates resources. These are all constraint-modifying activities. They shape what's possible and probable without dictating specific actions. Similarly, the Rep Link doesn't report upward in a traditional sense. Instead, they bring proposals to modify higher-level constraints when lower-level tensions can't be resolved locally.
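The holonic structure and its two link roles might be sketched as follows; the `Circle` class and its method names are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class Circle:
    """A holon: a whole to the roles inside it, a part of its super-circle."""
    name: str
    purpose: str
    sub_circles: list["Circle"] = field(default_factory=list)
    priorities: list[str] = field(default_factory=list)    # set via Lead Link
    tensions: list[tuple] = field(default_factory=list)    # raised via Rep Link

    def lead_link_set_priorities(self, sub: "Circle", priorities: list[str]):
        # Downward interface: translate this circle's needs into constraints
        # (priorities) for the sub-circle, not commands about specific work.
        sub.priorities = list(priorities)

    def rep_link_escalate(self, sub: "Circle", tension: str):
        # Upward interface: carry a locally unresolvable tension to the level
        # whose constraints must change for it to be resolved.
        self.tensions.append((sub.name, tension))

company = Circle("Company", "creating value for customers and stakeholders")
marketing = Circle("Marketing", "effective marketing presence and customer engagement")
company.sub_circles.append(marketing)

company.lead_link_set_priorities(marketing, ["support the Q3 product launch"])
company.rep_link_escalate(marketing, "brand guidelines conflict with product naming")
```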
The Constitution as Constitutive Constraint
Beneath all of this—or perhaps above it, depending on your perspective—sits the holocracy constitution. This document defines the core rules that make holocracy what it is. How roles are defined. How governance meetings proceed. What authorities are distributed and how. What you must do, what you cannot do, and what you're free to decide.
Unlike corporate bylaws that focus on legal structure, the constitution is purely about constraint architecture. It establishes the fundamental interdependencies that constitute the system. Change these rules and you're no longer doing holocracy—you're doing something else. This is constitutive constraint—the interlocking relationships that define identity.
What's remarkable is how much the constitution allows to vary. Roles can be created, modified, dissolved. Circles can be split, merged, reorganized. Policies can be added, revised, removed. People can move between roles, take on multiple roles, leave roles unfilled. The structure is extraordinarily fluid. Yet through all these changes, the organization remains recognizably itself because the constitutional constraints persist.
This is exactly the "self-assertive tendency" Koestler observed in holons. The organization maintains identity across structural changes because that identity isn't defined by specific roles or people—it's defined by the constraint regime the constitution establishes. The enhanced metastability comes from interlocking feedback loops: roles depend on circles, circles depend on governance, governance depends on the constitution, which defines how roles work. Each element constrains and is constrained by the others. The whole becomes robust through mutual interdependence.
When you map holocracy onto Juarrero's framework, the correspondence is uncanny. It's not that holocracy was designed based on constraint theory—Brian Robertson developed it from practical organizational experience. Rather, both discovered the same deep structure. This is how complex adaptive systems actually organize themselves when you remove top-down command and let constraint-based coordination emerge.
And this, as we're about to see, is exactly what AI agents need.
The Crisis of Control in Artificial Intelligence
When Autonomy and Coherence Collide
Picture an autonomous vehicle navigating a busy city. It needs to maintain its destination and route (coherence) while responding to unpredictable pedestrians, changing traffic patterns, and sudden obstacles (autonomy). Now imagine a thousand such vehicles, all part of a coordinated fleet. Each must be autonomous enough to handle its local context, yet coherent enough to optimize overall fleet efficiency. Traditional approaches force a tragic choice: centralize control and lose local responsiveness, or distribute control and lose global coordination.
This isn't just an autonomous vehicle problem. It's the fundamental challenge every AI agent system faces. A customer service AI must maintain consistent brand voice and policy compliance while adapting to each customer's unique situation. A trading algorithm must coordinate with other algorithms to avoid market instability while independently capitalizing on fleeting opportunities. A robotic swarm must maintain formation while individual robots navigate their own local obstacles.
The pattern repeats across every domain: AI agents need to be both autonomous (capable of independent judgment and action in varied, unpredictable contexts) and coherent (maintaining consistent purpose, identity, and coordination with other agents). For three decades, AI researchers have tried to solve this through three primary approaches, and all three fail in revealing ways.
Monolithic systems—traditional expert systems and early neural networks—achieve coherence by encoding all knowledge and behavior in a single, unified architecture. They maintain consistency beautifully. But they cannot adapt. Drop them in a novel situation and they freeze or fail catastrophically. Their coherence is brittle, purchased at the price of autonomy.
Swarm approaches go the opposite direction. Give each agent simple rules and let coordination emerge from local interactions. This works remarkably well for certain problems—ant colony optimization, particle swarm optimization, swarm robotics. Individual agents are highly autonomous. But the systems struggle to maintain coherent purpose. They optimize toward whatever gradients they detect, with no built-in capacity to question whether they're optimizing the right thing. Their autonomy comes at the price of directed coherence.
Hierarchical planners try to split the difference. Build a planning layer that sets goals and strategies, then hand specific tasks down to execution layers that carry them out. This works in stable, predictable environments. But it fails in the real world because of rate mismatch. By the time a plan propagates from the strategic layer through tactical layers to execution, the context has shifted. The plan no longer fits reality. The system either rigidly executes an obsolete plan (losing autonomy) or abandons the plan to react locally (losing coherence).
Every approach fails for the same underlying reason: they try to maintain coherence through command or direct control rather than through constraints. They miss what Juarrero makes explicit—that robust, adaptive coherence emerges from carefully structured constraint regimes operating at multiple rates, not from central planning or rule-following.
This is where holocracy becomes not just interesting but essential. It demonstrates how to achieve genuine autonomy-coherence integration through constraint architecture. And the principles that make it work in human organizations are precisely what AI agent systems need.
Enabling Constraints That Don't Micromanage
Think about what happens when you program an AI agent traditionally. You specify inputs, outputs, and the algorithm that maps one to the other. For simple tasks in controlled environments, this works. But for complex tasks in dynamic environments, you face an explosion of cases to handle. Every possible situation needs its own rule or training example. The code or the training dataset becomes massive, yet still fails to cover everything that might occur.
Now think about how a holocratic role enables action instead. The Marketing Coordinator role doesn't specify "if the blog post is about product launch, use template A; if it's about customer success, use template B; if it's Thursday, post at 2 PM..." It simply says: purpose is "compelling brand presence," domain includes "company blog," accountability includes "publishing weekly content." The person filling the role brings their own judgment about what compelling means, what content serves the current moment, how to balance multiple considerations.
This is constraint-based enablement rather than rule-based control. The role creates a channel—a possibility space where certain types of action make sense and are authorized. Within that space, the agent exercises autonomy. Different people enact the same role differently, yet all satisfy its constraints. The role is multiply realizable.
AI agents need exactly this. Instead of programming specific behaviors, define agent roles through purpose, domain, and accountabilities. The purpose acts as an attractor in a possibility landscape, shaping which actions the agent gravitates toward without forcing any particular choice. The domain establishes boundaries where the agent has authority to act without requiring permission—opening gates that would otherwise be closed. The accountabilities create expectations that other agents can synchronize with, reducing coordination complexity without eliminating local initiative.
Consider autonomous vehicles again. Rather than programming every possible driving scenario, define roles: the Route Optimizer has purpose "efficient transport of passengers," domain "route selection and modification," accountabilities "arrival time prediction and optimization." The Safety Monitor has purpose "preventing accidents and harm," domain "emergency intervention," accountabilities "maintaining safety margins and avoiding collisions." The Energy Manager has purpose "optimal resource usage," domain "speed and acceleration profiles," accountabilities "maximizing range and minimizing charging time."
These roles create channels for agent activity while preserving autonomy. The same Route Optimizer role could be filled by different AI systems—a rule-based planner, a learning algorithm, a hybrid approach—as long as each satisfies the role's constraints. Different vehicles in different contexts will optimize routes differently, yet all remain recognizable as enacting the Route Optimizer role. The multiply realizable structure prevents over-specification while maintaining coordination.
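One way to express such a role in code is sketched below. The `AgentRole` structure, the role contents, and the two planner classes are illustrative assumptions; the point is that either implementation fills the same role.

```python
from dataclasses import dataclass
from typing import Protocol

@dataclass(frozen=True)
class AgentRole:
    purpose: str
    domains: tuple[str, ...]
    accountabilities: tuple[str, ...]

ROUTE_OPTIMIZER = AgentRole(
    purpose="efficient transport of passengers",
    domains=("route selection and modification",),
    accountabilities=("arrival time prediction and optimization",),
)

class RoutePlanner(Protocol):
    """The interface the role's accountabilities require. Any implementation
    that satisfies it can fill the role."""
    def plan(self, origin: str, destination: str) -> list[str]: ...

class RuleBasedPlanner:
    def plan(self, origin: str, destination: str) -> list[str]:
        return [origin, "main arterial", destination]      # fixed heuristic

class LearnedPlanner:
    def plan(self, origin: str, destination: str) -> list[str]:
        return [origin, "learned shortcut", destination]   # e.g., policy output

# Either implementation fills the same role; the constraint bundle, not the
# algorithm, is what stays fixed.
for planner in (RuleBasedPlanner(), LearnedPlanner()):
    assert planner.plan("depot", "airport")[0] == "depot"
```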
The key insight: enable through constraint, don't command through specification. Create possibility spaces, don't enumerate possibilities. Shape gradients, don't dictate paths. This is how natural systems achieve both autonomy and coordination. This is what holocracy implements. This is what AI agents require.
Governance That Evolves Understanding
But enabling constraints alone aren't enough. What happens when the constraint regime itself needs to change? When the environment shifts, when new capabilities emerge, when persistent tensions reveal that current roles no longer fit reality?
Traditional AI handles this poorly. Monolithic systems require complete retraining or reprogramming—a restart from scratch. Swarm systems have no mechanism to modify their rules except through meta-optimization, which is itself just another algorithm that might not fit the new context. Hierarchical planners can adjust plans but not planning strategies.
Watch what holocracy does instead. When someone experiences a tension—a gap between current reality and sensed potential—they don't escalate to a boss for permission to fix it. They bring a proposal to the governance process. "The Route Optimizer role needs a new accountability: 'coordinating with other vehicles for convoy efficiency.' The current system optimizes each vehicle independently, but we could reduce energy consumption by 15% if vehicles traveling similar routes drafted each other."
The governance meeting considers this proposal not from a position of central planning but through a process of objection testing. Will this change cause harm? Will it move us backward? If no objection survives scrutiny, the role evolves. Immediately. The constraint regime has updated based on experienced reality.
This is meta-level learning—learning about how to structure learning itself. AI agents desperately need this capability. Current approaches modify parameters within a fixed architecture, but they can't easily modify the architecture itself. Neural networks adjust weights but not layer structure (without complete retraining). Reinforcement learning tunes policies but not reward functions (without risking perverse incentives). The architecture remains static even as the problem space evolves.
Holocratic governance provides a model for dynamic architecture modification. Operate with fast operational learning at the parameter level: adjust routes, optimize speeds, fine-tune predictions. Operate with slower governance learning at the structural level: modify roles, create new accountabilities, restructure circles. Operate with rare constitutional learning at the fundamental level: change core constraints that define what kind of system this is.
These rate-differentiated levels of learning create stability without rigidity. Operational learning happens constantly, adapting to immediate context. Governance learning happens periodically, responding to persistent patterns that operational adjustments can't handle. Constitutional learning happens rarely, only when fundamental assumptions prove wrong. The system can evolve without dissolving because different kinds of change happen at different timescales.
Critically, governance in holocracy doesn't intervene energetically in operations. The governance meeting doesn't tell the Route Optimizer which specific routes to choose. It modifies the constraint regime—the role definition—within which route optimization occurs. This is rate-independent control in Juarrero's exact sense. The slower governance level shapes the possibility space for the faster operational level without micromanaging its dynamics.
For AI agents, this means implementing tension detection mechanisms that identify when current constraints aren't working, proposal generation processes that suggest structural modifications, and validation procedures that test whether changes actually improve performance. The governance engine operates at a coarser grain and slower rate than operational learning, but it has the authority to reshape the constraint regime that operations inhabit.
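A minimal sketch of such a governance engine follows, assuming a simple representation: performance gaps tracked per accountability, a persistence threshold before a tension registers, and caller-supplied `propose` and `validate` functions. All of these names and values are hypothetical.

```python
import statistics

def detect_tensions(performance_gaps: dict, threshold: float = 0.15) -> list[str]:
    """Flag accountabilities whose measured performance persistently misses
    target. Requiring a persistent gap (not a single miss) keeps governance
    running at a slower rate than operations."""
    return [
        accountability
        for accountability, gaps in performance_gaps.items()
        if len(gaps) >= 5 and statistics.mean(gaps[-5:]) > threshold
    ]

def governance_cycle(roles: dict, performance_gaps: dict, propose, validate) -> dict:
    """Slow loop: reshape the constraint regime, never the actions themselves."""
    for tension in detect_tensions(performance_gaps):
        proposal = propose(tension, roles)            # e.g., add an accountability
        if proposal is not None and validate(proposal, roles):  # objection test
            roles = proposal(roles)
    return roles

# Usage: run governance_cycle weekly while operational learning runs every tick.
gaps = {"arrival time prediction and optimization": [0.2, 0.3, 0.25, 0.2, 0.22]}
assert detect_tensions(gaps) == ["arrival time prediction and optimization"]
```

Requiring a persistent gap rather than a single miss is one simple way to hold the governance loop at a slower rate than the operational loop it regulates.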
This solves the adaptation-stability paradox. Traditional approaches must choose between rigid structures that can't adapt and fluid structures that can't maintain identity. Rate-differentiated governance allows both: stable constraints that provide identity combined with structured processes for constraint evolution.
Identity Through Constraint Satisfaction
There's something deeper happening here, something that touches on what it means for an AI agent to have identity at all.
In traditional AI, identity is extensional—defined by fixed properties. This neural network has these layers, these weights, these activation functions. That's what it is. Change the weights through learning and it's now a different network in some sense, even if we still call it by the same name. There's no real continuity of identity, just versioning of parameters.
But in constraint-based systems, identity is intensional—defined by role and relationship within a constraint network. The Marketing Coordinator is whoever fills that role. The person changes, yet the role persists. Different people enact it differently, yet it remains recognizably the same role because its purpose, domain, and accountabilities continue. Identity inheres in the constraint structure, not in the particular material or agent that currently enacts it.
This has profound implications for AI agents. Rather than defining an agent by its architecture or weights, define it by the role it fills in a larger constraint network. The Route Optimizer role defines an agent's identity, not the specific algorithm that currently fills it. You could swap a rule-based system for a neural network, and as long as the new implementation satisfies the role's constraints, it's the same agent in the meaningful sense—it occupies the same position in the constraint network, serves the same purpose, interfaces the same way with other roles.
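The point fits in a few lines: identity lives in the role binding, not in the object currently bound to it. A toy sketch, with hypothetical names:

```python
class ConstraintNetwork:
    """Agents are positions in the network; implementations are swappable."""

    def __init__(self):
        self.bindings = {}  # role name -> current implementation

    def fill(self, role_name: str, implementation):
        self.bindings[role_name] = implementation  # hot-swap preserves identity

    def act(self, role_name: str, *args):
        return self.bindings[role_name](*args)

network = ConstraintNetwork()
network.fill("Route Optimizer", lambda o, d: [o, "main arterial", d])
route_v1 = network.act("Route Optimizer", "depot", "airport")

# Swap a rule-based filler for a learned one: the Route Optimizer persists.
network.fill("Route Optimizer", lambda o, d: [o, "learned shortcut", d])
route_v2 = network.act("Route Optimizer", "depot", "airport")
assert route_v1 != route_v2  # different enactments, same agent identity
```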
This enables something impossible in extensional systems: continuity of identity through radical internal change. The organization persists even as every person in it is replaced. The autonomous vehicle fleet maintains coherence even as individual vehicles upgrade their software, adopt new algorithms, or get replaced entirely. Identity emerges from constraint satisfaction, not material continuity.
Moreover, intensional identity is inherently context-dependent. The same software system might be a Route Optimizer in one context and a Safety Monitor in another, depending on which role it's filling in which constraint network. Properties like "risk-averse" or "efficiency-focused" aren't intrinsic to the agent—they emerge from its position in the constraint regime. Change the context and you change the agent's effective properties, even without modifying its code.
This is exactly how Juarrero says complex systems must be understood. Each position in a hierarchical network is defined by its neighborhood and moment—by where it sits in the constraint structure and when it exists in the system's temporal evolution. AI agents defined intensionally inhabit this kind of contextual space. Their capabilities and characteristics emerge from how they're constrained, not from what they're made of.
The practical consequence: stop trying to build generally intelligent agents with fixed capabilities. Instead, build role-filling agents whose effective capabilities emerge from the constraint regimes they inhabit. The same base system becomes different kinds of agents depending on the roles it fills, the circles it participates in, the purposes it serves. Intelligence becomes contextual and multiply realizable rather than fixed and intrinsic.
Multi-Agent Coordination Without Central Control
Now scale up. You don't have one AI agent—you have thousands. They need to coordinate across a distributed system. Some operate on edge devices with limited communication. Some have local goals that might conflict with global optimization. Some specialize in narrow tasks while others integrate across domains. How do you maintain coherent system behavior without a central controller?
Traditional approaches require either a master coordinator (creating a bottleneck and single point of failure) or hope that local optimization somehow aligns with global goals (which it generally doesn't). Holocracy offers a third way: distributed coordination through nested constraint regimes.
Think about circles in holocracy. Each circle operates semi-autonomously, with its own governance process and tactical coordination. Yet circles nest within larger circles, creating a fractal structure of constraint contexts. The Marketing Circle operates independently, but it sits within the broader company, receiving purposes and priorities through its Lead Link and sending tensions up through its Rep Link. When the Marketing Circle creates an Email Campaign Manager role, it doesn't need approval from the CEO. The circle has authority within its domain. Yet this local autonomy somehow maintains global coherence.
The mechanism: shared purposes cascade down while tensions cascade up, creating bidirectional constraint satisfaction. Higher-level circles establish purposes that constrain lower-level circles without dictating their operations. Lower-level circles detect tensions that inform higher-level governance without requiring direct intervention. Each level filters and translates information for adjacent levels, maintaining both autonomy and integration.
For multi-agent AI systems, this structure is liberating. Organize agents into circles with shared purposes. The Fleet Optimization Circle might contain all the vehicles operating in a particular geographic region. Within that circle, individual vehicle roles coordinate directly through domain boundaries and tactical protocols. The circle's Lead Link brings priorities from the city-wide Transportation Network Circle—"minimize congestion during rush hour" or "prioritize hospital access routes during emergency." The circle's Rep Link sends tensions upward—"current traffic signal timing creates dangerous intersection conflicts" or "charging station availability constraints prevent optimal routing."
Each circle governs its own structure. The Fleet Optimization Circle can create new roles, modify existing ones, establish policies, without asking permission from higher levels—as long as they operate within their domain and serve their purpose. Different circles can experiment with different coordination mechanisms. Innovation happens locally but patterns that work can spread through governance proposals that modify higher-level constraints.
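The two passes can be sketched recursively. The dictionary representation of circles and the field names are assumptions for illustration.

```python
def cascade_priorities(circle: dict, priorities: list[dict]) -> None:
    """Downward pass: each circle filters its super-circle's priorities into
    constraints relevant to each sub-circle; it does not issue commands."""
    circle["priorities"] = priorities
    for sub in circle["sub_circles"]:
        relevant = [p for p in priorities if sub["name"] in p["applies_to"]]
        cascade_priorities(sub, relevant)

def collect_tensions(circle: dict) -> list[str]:
    """Upward pass: only tensions a circle cannot resolve locally bubble up."""
    unresolved = [t["text"] for t in circle["tensions"] if not t["resolvable_here"]]
    for sub in circle["sub_circles"]:
        unresolved.extend(collect_tensions(sub))
    return unresolved

fleet = {"name": "Fleet Optimization", "sub_circles": [], "tensions":
         [{"text": "signal timing creates intersection conflicts", "resolvable_here": False}]}
network = {"name": "Transportation Network", "sub_circles": [fleet],
           "tensions": [], "priorities": []}

cascade_priorities(network, [{"text": "minimize rush-hour congestion",
                              "applies_to": ["Fleet Optimization"]}])
print(collect_tensions(network))  # -> ["signal timing creates intersection conflicts"]
```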
This solves the scalability problem that plagues centralized approaches. As the system grows, you don't need to increase the central coordinator's capacity. Instead, you create new circles, each operating at the scale where its characteristic rate and grain size remain effective. The system scales through nesting, not through enlarging a central brain.
It also solves the single point of failure problem. No individual circle, role, or agent is essential. If a vehicle fails, its roles can be reassigned or absorbed by other vehicles. If an entire circle becomes dysfunctional, it can be dissolved and its responsibilities redistributed. The constraint regime holds the system together, not any particular material component.
Most importantly, it enables genuine emergence without losing coherence. Circles can discover novel coordination patterns that weren't explicitly programmed. Because governance happens locally, innovations can be tested without risking system-wide failure. If they work, the patterns spread through other circles adopting similar structures. The system evolves from the bottom up while maintaining top-down coherence through shared purposes and constitutional constraints.
This is holarchical organization in action. Each level is both autonomous and constrained, both independent and integrated. The system maintains identity through constraint satisfaction while continuously adapting structure to context. It's how ecosystems coordinate billions of organisms without a coordinator. It's how markets coordinate millions of transactions without a central planner. It's how brains coordinate billions of neurons without a homunculus.
And it's exactly what AI agent systems need to scale beyond toy problems into genuine complexity.
Critical Advantages for AI Agents
Resilience Through Modularity
Holocratic constraint architecture provides exceptional resilience:
- Graceful degradation: Loss of individual agents doesn't cascade
- Rapid recovery: Roles can be reassigned to available agents (see the sketch after this list)
- Degenerate pathways: Multiple routes to satisfy the same function
- Adaptive reorganization: The system restructures under stress
This implements Herbert Simon's "watchmaker" principle: because stable subassemblies persist, interruptions don't force a restart from scratch.
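A minimal sketch of the reassignment mechanism, assuming a simple role-to-agent mapping and round-robin redistribution (both are illustrative choices):

```python
def reassign_roles(assignments: dict, failed_agent: str, agents: list[str]) -> dict:
    """Graceful degradation: when an agent fails, its roles are rebound to
    surviving agents instead of restarting the system from scratch."""
    survivors = [a for a in agents if a != failed_agent]
    orphaned = [role for role, agent in assignments.items() if agent == failed_agent]
    for i, role in enumerate(orphaned):
        assignments[role] = survivors[i % len(survivors)]  # simple round-robin
    return assignments

assignments = {"Route Optimizer": "vehicle-7", "Safety Monitor": "vehicle-7",
               "Energy Manager": "vehicle-3"}
reassign_roles(assignments, "vehicle-7", ["vehicle-3", "vehicle-7", "vehicle-9"])
assert assignments["Route Optimizer"] != "vehicle-7"
```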
Scalability Through Hierarchy
Rate-differentiated hierarchy enables scaling:
- Fast local coordination: Agents within circles synchronize rapidly
- Slower global coordination: Higher-level circles integrate gradually
- Decoupled rates: Local changes don't perturb global structure
- Emergent organization: Structure self-organizes without central planning
This solves the coordination problem that plagues flat multi-agent systems.
Adaptability Through Governance
Separation of operational and governance processes enables genuine learning:
- Operational flexibility: Agents act autonomously within constraints
- Structural evolution: The constraint regime adapts based on persistent patterns
- Meta-learning: The system learns how to learn more effectively
- Constitutional stability: Core identity preserved despite surface changes
This implements the kind of developmental learning natural organisms exhibit.
Interpretability Through Constraint Transparency
Holocratic organization makes AI behavior more interpretable:
- Explicit constraints: Roles and policies are documented, not implicit in weights
- Traceable decisions: Actions explained by the active constraint regime
- Governance history: Changes to constraints recorded with rationales
- Role accountability: Clear assignment of responsibility for outcomes
This addresses one of the most critical challenges in current AI systems.
Safety Through Boundary Management
Constraint-based control enhances safety:
- Domain boundaries: Agents can only act within defined authority (sketched below)
- Governance process: Changes require validation before deployment
- Interface filtering: Signals are interpreted through safety-aware constraints
- Constitutional limits: Hard boundaries that cannot be modified operationally
This implements the kind of "regulatory control" Juarrero discusses in inflammation and homeostasis.
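A sketch of the domain-boundary check, assuming roles carry tuples of domain strings and every action declares the domain it touches (both representations are illustrative):

```python
def authorized(action_domain: str, agent_roles: list[str], role_domains: dict) -> bool:
    """Boundary check: an action is permitted only if some role the agent
    fills holds authority over the action's domain. Constitutional limits
    would live in a separate layer that this check cannot modify."""
    return any(action_domain in role_domains.get(role, ()) for role in agent_roles)

ROLE_DOMAINS = {
    "Route Optimizer": ("route selection and modification",),
    "Safety Monitor": ("emergency intervention",),
}

assert authorized("emergency intervention", ["Safety Monitor"], ROLE_DOMAINS)
assert not authorized("emergency intervention", ["Route Optimizer"], ROLE_DOMAINS)
```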