A Reading · Research Companion

Ontological foundations of distributed cognitive systems.

Most attempts at multi-agent intelligence ask the wrong question. They ask what should the system do when they ought to ask what kinds of things exist within it. A reading of the framework — three layers, four modes, and a thesis that ontological clarity is the precondition for everything else.

Author · Ivan Novak
Paper Date · 29 November 2025
Reading Length · ≈ 14 minutes
The Short Answer

Most attempts at multi-agent intelligence ask the wrong question. They ask what should the system do when they ought to ask what kinds of things exist within it. The framework proposes a three-layer ontology — Data, Logic, Action — that defines what an agent is before it defines what the agent does.

The same structure shows up independently in cognitive science, organizational theory, and the gross anatomy of the brain. The convergence is the argument. If three independent investigative traditions arrive at the same tripartite arrangement, the parsimonious explanation is that the arrangement reflects something about the requirements of any system that must perceive, reason, and act under bounded resources.

The architectural payoff: bounded epistemology becomes structural rather than aspirational, security properties follow as theorems rather than features, and scale comes from replicating a small unit — the Autonomous Team Unit — rather than enlarging a single one.

The seductive failure of universal competence.

The default story about artificial intelligence is the story of a single mind that knows everything. One model. One context. One agent that can be asked anything and will return an answer. It is the cleanest possible mental picture: no coordination, no protocols, no division of labor. It is also, the paper argues, structurally wrong.

Biological cognition is not arranged this way. The brain is not a homogeneous slab of intelligence; it is a federation of specialized subsystems coordinated through well-defined interfaces — executive control in the prefrontal cortex, procedural execution in the basal ganglia, episodic memory in the hippocampus, abstract synthesis in the association cortex. Effective human organizations are not arranged this way either. They distribute cognitive labor across roles: strategists set direction, analysts synthesize, operators execute. Both biology and organizations have, by every available measure, converged on distribution as the structural default.

The monolithic approach fails for predictable reasons. Cognitive overload — a single agent attempting to maintain strategic context, domain expertise, and implementation detail simultaneously exceeds practical context limits and degrades performance across all dimensions. Epistemic confusion — without clear boundaries, agents make claims and decisions outside their domain of competence, producing the characteristic failure mode of confident error. Inflexibility — when capabilities are intertwined, modifying one risks disrupting others. Opacity — when a single agent performs all functions, failures are difficult to localize. Did perception fail, or reasoning, or execution? With a monolith, the question is unanswerable.

The framework's central move is to refuse the monolith and ask a different question: what kinds of agents exist, how do they reason, and what are their boundaries? The answer turns out to be the same answer arrived at by three independent intellectual traditions — a convergence the paper treats not as coincidence but as evidence that something fundamental has been correctly identified.

What an agent is, before what it does.

Traditional taxonomies classify agents by function: a planner, a coder, a reviewer. The paper rejects this approach as a category error. Function is downstream of being. An agent's behavior is a consequence of three more fundamental properties — what it perceives and remembers, how it reasons and what authority it holds, what it can affect in the environment. Define those three properties precisely, and behavior follows as a natural consequence. Define them imprecisely, and no amount of behavioral instruction will save you.

The framework names these three properties Data, Logic, and Action. Together they form what the paper calls the three-layer ontology — a complete taxonomy for defining any agent in a distributed cognitive system. The taxonomy is generative: the entire system architecture, including its security properties, can be derived from how each agent's three layers are specified.

The Three-Layer Ontology

Each agent is a particular intersection of these three layers — and each layer answers a different question about what the agent is. A type-level sketch in code follows the list.

  1. Data Layer

    The landscape of perception & memory.

    What does it perceive? What does it produce? What does it remember?

    Every cognitive agent exists within a perceptual boundary. It has access to certain information and is blind to the rest. The Data Layer makes that boundary explicit — and reveals that not all agents need the same kind of memory.

    Inputs: raw data, curated knowledge, environmental state
    Outputs: artifacts, reports, state modifications
    State: persistent · stateless · cached
  2. Logic Layer

    The engine of reasoning & authority.

    How does it reason? What decisions can it make?

    If the Data Layer defines perception, the Logic Layer defines the mind. It specifies the agent's reasoning style and — crucially — the scope of decisions it is permitted to bind. The framework treats authority as an ontological property, not a permission flag.

    Mode: executive · operational · advisory · observational
    Reasoning: strategic · tactical · synthetic · empirical
    Authority: binding · execution · advisory · reporting
  3. Action Layer

    Capability and the boundaries that contain it.

    What can it do? Where can it act? What is it forbidden to do?

    The Action Layer translates cognitive intent into environmental effect, governed by the principle of least privilege. Capability is not a feature added to an agent; it is a consequence of the agent's ontological role. Boundaries are not restrictions — they are consequences of identity.

    Capabilities: what the agent can do
    Permissions: where it is allowed to act
    Boundaries: what is forbidden, by ontology
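
A minimal sketch of the taxonomy as a type definition — the TypeScript rendering and every identifier in it are this reading's, not the paper's implementation:

```typescript
// Illustrative only: names are invented for this reading, not taken from the paper.

type StateKind = "persistent" | "stateless" | "cached";
type Mode = "executive" | "operational" | "advisory" | "observational";
type Reasoning = "strategic" | "tactical" | "synthetic" | "empirical";
type Authority = "binding" | "execution" | "advisory" | "reporting";

interface DataLayer {
  inputs: string[];   // what the agent perceives
  outputs: string[];  // what it produces
  state: StateKind;   // what it remembers, and how
}

interface LogicLayer {
  mode: Mode;           // the type of cognition it is licensed to perform
  reasoning: Reasoning; // how it thinks
  authority: Authority; // the scope of decisions it may bind
}

interface ActionLayer {
  capabilities: string[]; // what it can do
  permissions: string[];  // where it is allowed to act
  boundaries: string[];   // what is forbidden, by ontology
}

// An agent is a particular intersection of the three layers; behavior follows.
interface AgentOntology {
  name: string;
  data: DataLayer;
  logic: LogicLayer;
  action: ActionLayer;
}

const operImplementation: AgentOntology = {
  name: "OPER-Implementation",
  data: {
    inputs: ["task specification", "working environment state"],
    outputs: ["artifacts"],
    state: "stateless",
  },
  logic: { mode: "operational", reasoning: "tactical", authority: "execution" },
  action: {
    capabilities: ["write artifacts", "run operations"],
    permissions: ["Action Plane, within task scope"],
    boundaries: ["no writes to architectural guidance", "no orchestration"],
  },
};
```

Note what is absent: no behavioral instructions. The definition states what the agent is, and what it may do is read off from that.
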
Define what agents are, not what they do.
Ontological Foundations · §3.4

The separation of cognitive powers.

A common failure in multi-agent systems is undifferentiated authority — the condition in which any agent can, in principle, make any decision. The result is conflicting directives, strategic decisions made tactically, and no clear escalation path when anything breaks. The paper's response is to introduce four semantic logic modes that map the analogy of separated powers onto cognition itself. An agent's mode is not a label for what it does; it is the type of cognition it is licensed to perform.

  1. EXEC · Executive
    Binding decisions only.
    Authority: Creates tasks, allocates resources, escalates externally.
    Reasoning: Abstract, long-term, goal-maintaining.
    Cognitive parallel: Prefrontal cortex executive function.
  2. OPER · Operational
    Direct execution.
    Authority: Modifies environment within task scope; produces deliverables.
    Reasoning: Concrete, tactical, specification-following.
    Cognitive parallel: Motor cortex and basal ganglia.
  3. ADV · Advisory
    Guidance without authority.
    Authority: Recommendations only — non-binding by design.
    Reasoning: Analytical, synthetic, principle-forming.
    Cognitive parallel: Association cortex.
  4. OBS · Observational
    Analysis without modification.
    Authority: Reports findings; cannot act on them.
    Reasoning: Empirical, descriptive, evidence-collecting.
    Cognitive parallel: Sensory cortex and hippocampus.

The hierarchy this creates is natural rather than imposed. Executive agents direct without implementing. Operational agents implement without directing. Advisory agents inform without binding. Observational agents perceive without choosing. Each mode operates within a bounded cognitive scope, and the system's coherence emerges from the discipline of those scopes rather than from any central coordinator's heroics. A correctly typed system is, in this sense, self-organizing — not because it lacks structure, but because its structure is built into the agents themselves.
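
A sketch of mode-as-license enforced at runtime — the four modes and their authorities are the paper's; the guard function and its names are invented here:

```typescript
// Each mode licenses exactly one type of cognition; everything else is denied.
// Illustrative names — the paper specifies the modes, not this API.

type Mode = "executive" | "operational" | "advisory" | "observational";
type ActKind = "bind-decision" | "modify-environment" | "recommend" | "report";

const licensed: Record<Mode, ActKind> = {
  executive: "bind-decision",        // EXEC: binding decisions only
  operational: "modify-environment", // OPER: direct execution
  advisory: "recommend",             // ADV: guidance without authority
  observational: "report",           // OBS: analysis without modification
};

function assertLicensed(mode: Mode, act: ActKind): void {
  if (licensed[mode] !== act) {
    // A mode violation is a structural error, not a judgment call.
    throw new Error(`${mode} agent is not licensed to ${act}`);
  }
}

assertLicensed("executive", "bind-decision"); // ok: binding is executive cognition
try {
  assertLicensed("advisory", "bind-decision"); // denied: advice is non-binding by design
} catch (e) {
  console.error((e as Error).message);
}
```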

Three traditions, one underlying structure.

If the three-layer ontology were merely a useful organizational schema, it would be one diagram among many. The paper's stronger claim is that the same structure shows up — independently — in cognitive science, in organizational theory, and in the gross anatomy of the brain. These are not three frameworks dressed in different vocabulary. They are three perspectives on a single structural truth about how intelligence must be arranged when it has to do real work in a real world.

The convergence is the argument. If three independent investigative traditions, each operating within its own conceptual idiom, converge on the same tripartite arrangement, the most parsimonious explanation is that the arrangement is not contingent. It reflects something about the requirements of any system that must perceive, reason, and act under bounded resources.

Cognitive Science

The classical pipeline.

  1. Perception
  2. Cognition
  3. Action

The textbook description of how minds operate. Information enters through senses, is processed and integrated, then emerges as behavior. The framework instantiates this loop at two scales: across the system, and within each agent — a fractal architecture that reproduces the pattern at every level.

Organizational Theory

The functional triad.

  1. Perceptors
  2. Cogitators
  3. Actors

The structure every effective organization converges on, regardless of domain. Market researchers, business intelligence, scouts. Strategists, architects, product managers. Engineers, operators, implementers. The mapping to OBS, ADV, and OPER agents is not analogy — it is the same arrangement instantiated in different substrate.

Neuroscience

The neural geography.

  1. Sensory cortex · Hippocampus
  2. Association cortex · PFC
  3. Motor cortex · Basal ganglia

The brain is not arranged as a homogeneous substrate. It distributes cognition across specialized regions with characteristic functions and characteristic limits. The architectural pattern in the framework is older than the framework — it has had several hundred million years to be optimized.

What the three idioms agree on, in their different vocabularies, is that intelligence under real-world constraint cannot be implemented by a single, undifferentiated capacity. It must be split. The split is not arbitrary: it follows the natural fault lines between perception, reasoning, and action — between gathering, deliberating, and committing. Distributed cognitive systems, the paper argues, are not departures from how intelligence works. They are conformances with how intelligence has always worked, finally given articulate architectural form.

An agent that knows everything knows nothing reliably.

Bounded epistemology is the principle that an agent should know the limits of its knowledge and operate strictly within its epistemic domain. It is the architectural translation of intellectual humility, and the paper insists it must be enforced rather than encouraged. Humility imposed by guideline is humility that disappears under load. Humility imposed by ontology is humility that the system cannot escape even when something else would be expedient.

The failure modes when boundaries are violated are characteristic and predictable. An OPER-Implementation agent that decides to write architectural guidance leaks implementation detail into principle, polluting the abstraction layer. An ADV-Architecture agent that decides to produce artifacts collapses the distinction between what should be done and what is done, removing the system's ability to verify either against the other. An EXEC-Orchestrator that writes operational state corrupts coordination logic with execution noise. In each case, the failure is not that the agent did something wrong — it is that the agent did something belonging to a different epistemic role.

The organizational parallel is direct. Executives who write code do not, on the whole, run effective companies. Engineers who set product direction tend to produce technically interesting products that no one wants. Architects who execute their own designs forfeit the abstraction that lets them see the system whole. These are not personal failings. They are structural consequences of crossing epistemic lines that a healthy organization keeps drawn.

Epistemic humility is architecturally enforced through ontological boundaries, not merely encouraged through guidelines.
Ontological Foundations · §8.4

When ontology meets infrastructure.

The truest test of any ontology is how it survives translation into running code. The paper insists on a particular sequence: ontology precedes architecture. First, define what entities are. Then determine where they run. Then enforce the boundaries through infrastructure. Done in this order, the result is an architecture that cannot violate ontological constraints because the physical environment prevents it. Done in any other order, the result is an architecture in which ontology is at best a documentation convention.

The Three-Plane Isolation Model is the embodiment. It separates execution into three planes — Control, Knowledge, and Action — each running in its own infrastructure, each with strict write boundaries that match its ontological role. The orchestrator runs in the Control Plane and writes nothing. Advisory and observational agents run in the Knowledge Plane and write only to long-term memory. Operational agents run in the Action Plane and write only to artifacts. The boundaries are maintained not because agents are well-behaved but because the infrastructure does not permit otherwise.

The security properties follow as theorems rather than as features. An operational agent cannot accidentally rewrite architectural guidance because it has no write access to the Knowledge Plane. An orchestrator cannot accidentally corrupt execution state because it has no write access at all. Least privilege at the level of permissions becomes least privilege at the level of kind — not "agent X cannot access file Y" but "agents of type X cannot perform action Y, ever, by construction."
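
What "theorems rather than features" could look like in code, sketched with invented handle types — the forbidden operation is not checked and rejected; it simply does not exist on the type the agent holds. This is also the enforced, rather than encouraged, humility of the previous section made structural:

```typescript
// Hypothetical plane handles: the paper specifies the planes, not this API.
// Least privilege at the level of kind — the violation is unrepresentable.

interface ControlPlaneHandle {
  launchEnvironment(name: string): void;
  monitorStatus(): string;
  // No write methods exist at all: the orchestrator writes nothing.
}

interface KnowledgePlaneHandle {
  writeKnowledge(principle: string): void; // long-term memory only
  readKnowledge(): string[];
  // No writeArtifact: advisory agents cannot collapse "should be" into "is".
}

interface ActionPlaneHandle {
  writeArtifact(path: string, content: string): void; // artifacts only
  runOperation(command: string): void;
  // No writeKnowledge: an operational agent cannot rewrite the guidance
  // it executes under.
}

// The "theorem": this body cannot be completed without a compile error.
function operRewritesGuidance(plane: ActionPlaneHandle): void {
  // plane.writeKnowledge("new principle");
  // ^ does not compile — ActionPlaneHandle has no such method.
}
```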

  1. Control Plane (orchestration · executive)
    Agents: EXEC-Orchestrator · System CLI
    Can: Launch environments, monitor status, coordinate work.
    Cannot: Write any system state. Execute operations. Modify knowledge.

  2. Knowledge Plane (analysis · advisory · observational)
    Agents: OBS-Reconnaissance · ADV-Architecture · ADV-Requirements · OBS-Validation · ENV-Infrastructure
    Can: Write principles, analyze patterns, produce institutional knowledge.
    Cannot: Write artifacts. Modify the working environment directly.

  3. Action Plane (implementation · operational)
    Agents: OPER-Implementation · OPER-Verification
    Can: Write artifacts, run operations, modify the environment within scope.
    Cannot: Write architectural guidance. Rebuild infrastructure. Orchestrate.
The model has a second, less obvious payoff: state separation enables resilience. When an operational agent's environment needs rebuilding — a not-uncommon event — the Action Plane can be torn down and reconstructed without losing anything in the Knowledge Plane. Architectural decisions persist. Accumulated context persists. Task history persists. The agent in the new environment has full continuity precisely because its working memory and its long-term memory live in different physical places. This is the architectural analogue of memory consolidation in biological cognition, and it is not metaphorical — it is implemented.
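
A sketch of the state-separation claim with invented names — working memory and long-term memory in different places, so a rebuild loses only what is meant to be lost:

```typescript
// Illustrative only: two stores standing in for the two planes.

class KnowledgePlane {
  private principles: string[] = [];
  record(principle: string): void { this.principles.push(principle); }
  all(): string[] { return [...this.principles]; }
}

class ActionPlane {
  workingFiles = new Map<string, string>(); // working memory: disposable
}

const knowledge = new KnowledgePlane(); // survives rebuilds
let action = new ActionPlane();

knowledge.record("Artifacts are verified before shipping.");
action.workingFiles.set("scratch.txt", "half-finished work");

// The not-uncommon event: the operational environment is torn down and rebuilt.
action = new ActionPlane();

console.log(action.workingFiles.size); // 0 — working memory is gone, by design
console.log(knowledge.all());          // the decision persists: full continuity
```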

A first-class concern, not an afterthought.

If the system's specialized agents are its cognitive functions, what mechanism gives the system self-awareness? The paper's answer is to treat metacognition as a first-class architectural concern — a deliberate component, not an emergent property. The Metacognitive Assembler is the name given to this component. It operates one level above the agents themselves, dynamically preparing their context and strategy before they are invoked. It asks, on every invocation: given this task, what does this agent need to know, and how should it approach the problem?

The crucial property is that this metacognition is explicit. It is logged. It is observable. It is debuggable. Where human metacognition is famously opaque — a felt sense of confidence or unease that resists articulation — system metacognition leaves a paper trail. When something fails, the assembler's reasoning can be replayed: which context was retrieved, why, on what evidence of relevance, with what gaps acknowledged. Failure becomes diagnosable in a way that human "gut feeling" can never be diagnosed.
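
A sketch of what a replayable assembly log might look like — the record shape and the toy relevance heuristic are inventions of this reading, not the Assembler's actual mechanics:

```typescript
// Metacognition as a data structure: every decision leaves a record.
// The relevance heuristic below is deliberately trivial — a stand-in.

interface AssemblyRecord {
  task: string;
  retrieved: string[];        // which context was selected
  rationale: string;          // on what evidence of relevance
  acknowledgedGaps: string[]; // what the assembler knew it did not know
  timestamp: string;
}

const assemblyLog: AssemblyRecord[] = [];

function assembleContext(task: string, available: string[]): string[] {
  const taskWords = new Set(task.toLowerCase().split(/\s+/));
  const retrieved = available.filter((item) =>
    item.toLowerCase().split(/\s+/).some((word) => taskWords.has(word))
  );
  assemblyLog.push({
    task,
    retrieved,
    rationale: "lexical overlap with task description (toy heuristic)",
    acknowledgedGaps: retrieved.length === 0 ? ["no relevant context found"] : [],
    timestamp: new Date().toISOString(),
  });
  return retrieved;
}

assembleContext("refactor payment module", [
  "the payment module owns all currency math",
  "deploys happen on Tuesdays",
]);

// When something fails, replay the decision instead of guessing:
console.log(JSON.stringify(assemblyLog, null, 2));
```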

Metacognitive awareness, in the framework, exists at three levels. The agent recognizes its own knowledge gaps and requests guidance. The orchestrator recognizes when the system is stuck — when the same task has iterated three times without progress — and escalates. The Metacognitive Assembler itself recognizes when context is insufficient and logs the insufficiency for later improvement. Each level can fail gracefully because each level can detect its own failure. The system knows when it doesn't know. This is, the paper insists, a far more achievable goal than the system always knowing — and a far more honest one.

Replicate the unit, do not enlarge it.

A single-team architecture has a ceiling. One orchestrator coordinating one set of specialists handles focused projects in single domains, but the cognitive limits are real. When work streams need to run in parallel, when context requirements outgrow a single orchestrator's capacity, when sequential execution becomes the bottleneck — the team has reached its natural size. The conventional response is to make the team bigger. The framework's response is to make a second team.

The Autonomous Team Unit, or ATU, is the unit of replication. An ATU is a complete execution unit — Team Lead, Implementation, Verification, Validation — that pulls work from a backlog, executes it sequentially internally, and ships verified artifacts. ATUs are functionally substitutable. Scale comes from running more of them in parallel, not from growing any individual one. The model is Amazon's two-pizza teams; the cognitive parallel is the cortical column, the brain's own answer to the same problem of how to scale a coherent processing unit when more processing is needed. A sketch in code follows the unit summary below.

ATU · One Complete Unit
  • EXEC · Team Lead
  • OPER · Implementation
  • OPER · Verification
  • OBS · Validation
Properties

Self-contained — every role required to deliver verified artifacts.

Sequential internally — single-task focus within the unit.

Autonomous — pulls work from a backlog; does not wait for assignment.

Substitutable — ATUs are functionally equivalent.

Scale by adding ATUs, not by growing them.
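
A sketch of the replication model in invented code — roles, internal sequencing, and backlog-pull as the paper describes them (single-threaded here, where real ATUs would run in parallel processes):

```typescript
// Illustrative only. Scale comes from adding units, never from growing one.

interface Task { id: number; spec: string; }

class Backlog {
  constructor(private tasks: Task[]) {}
  pull(): Task | undefined { return this.tasks.shift(); } // autonomous: no assignment step
}

class AutonomousTeamUnit {
  constructor(private name: string) {}

  // Sequential internally: single-task focus, every role in order.
  run(backlog: Backlog): void {
    for (let task = backlog.pull(); task; task = backlog.pull()) {
      this.lead(task);      // EXEC — binds the plan
      this.implement(task); // OPER — produces the artifact
      this.verify(task);    // OPER — checks it against the spec
      this.validate(task);  // OBS  — reports on the result
    }
  }

  private lead(t: Task): void      { console.log(`${this.name} EXEC plans #${t.id}`); }
  private implement(t: Task): void { console.log(`${this.name} OPER builds #${t.id}`); }
  private verify(t: Task): void    { console.log(`${this.name} OPER verifies #${t.id}`); }
  private validate(t: Task): void  { console.log(`${this.name} OBS validates #${t.id}`); }
}

const backlog = new Backlog([{ id: 1, spec: "a" }, { id: 2, spec: "b" }]);

// Substitutable units: adding capacity means adding another ATU.
[new AutonomousTeamUnit("ATU-1"), new AutonomousTeamUnit("ATU-2")]
  .forEach((atu) => atu.run(backlog));
```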

What changes when an organization runs many ATUs is not the unit but the layer above it. A multi-team organization needs strategic alignment across teams, decomposition of work into ATU-sized pieces, and governance to prevent architectural drift as parallel work proceeds. The paper proposes a four-layer organizational architecture for this — Strategic, Tactical, Execution, Governance — that should be familiar to anyone who has worked inside a competently run scaled organization. The architecture is not invented; it is recognized.

The maturity progression is not optional ambition. It is the path:

  1. Stage 1
    Individual contributor

    A monolithic agent. No specialization. Limited by the cognitive capacity of one mind. The solo founder. The starting point most teams attempt to skip past.

  2. Stage 2
    Functional team

    A single ATU. Specialized roles, shared context, sequential execution coordinated by an Executive. The sufficient configuration for most projects, most of the time.

  3. Stage 3
    Multi-team organization

    Multiple ATUs running in parallel under strategic, tactical, and governance layers. A roughly tenfold increase in coordination complexity — justified only when actual bottlenecks demand it.

  4. Stage 4
    Federated organization

    Multiple Stage-3 instances coordinating across organizational boundaries. The far horizon. The paper marks it as future work, not present capability.

Each step is a deliberate response to a coordination problem the previous stage cannot solve. The progression is not arbitrary, and the decision rule is simple: only progress when the simpler model has been proven insufficient by actual usage data. Premature scaling is one of the named anti-patterns. "We have a big project, let's start with five parallel ATUs" is a sentence the paper specifically identifies as a category of failure — coordination overhead before benefits materialize, governance untested, complexity without corresponding capability. Complexity, the framework insists, is a cost. It is purchased only when the alternative is worse.
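
The decision rule, sketched with invented metrics and thresholds — the rule is the paper's; the numbers are not:

```typescript
// Progress a stage only when actual usage data proves the simpler model
// insufficient. Every field below is an observation, not a forecast.

interface UsageData {
  parallelStreamsObserved: number; // independent work streams actually seen
  contextOverflowEvents: number;   // orchestrator capacity actually exhausted
  backlogWaitDays: number;         // sequential execution actually a bottleneck
}

function shouldAddAtu(data: UsageData): boolean {
  return (
    data.parallelStreamsObserved > 1 &&
    (data.contextOverflowEvents > 0 || data.backlogWaitDays > 5)
  );
}

// The named anti-pattern fails the rule: a big project is a forecast, not data.
console.log(shouldAddAtu({
  parallelStreamsObserved: 5, // "we have a big project"
  contextOverflowEvents: 0,   // but no observed overload
  backlogWaitDays: 0,         // and no observed bottleneck
})); // false — complexity is purchased only when the alternative is worse
```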

What an intelligent system is, before what it can do.

The framework's claim, stripped to its load-bearing form, is that ontological clarity is the precondition for everything else. Capability without clarity produces confident error. Coordination without clarity produces unaccountable drift. Scale without clarity multiplies both.

A system built on the framework can be specified by stating, for each of its agents, what it is at three layers — what it perceives and remembers, how it reasons and what authority it holds, what it can affect and where it is forbidden. Behavior follows. Capabilities follow. Security follows. The agent's whole working personality is readable from its definition rather than scattered across procedural code.

The paper's deeper claim is that this is not a clever architectural pattern. It is the same arrangement intelligence has always taken when it has had to do real work in a real world — in brains, in organizations, in any system distributed enough to outgrow the monolith. The contribution is not invention. The contribution is articulation: giving the structure a vocabulary precise enough to build with.

Three properties — ontological clarity, epistemic humility, metacognitive awareness — are, the paper concludes, the foundations of genuine distributed intelligence. Not raw capability. Not scale. The property of knowing what one is, what one knows, and how one is reasoning. The property of building systems that can answer those questions about themselves.

The paper remains canonical. Source: ontological-foundations.pdf