Glossary

Definitions, named.

A canonical reference for the proprietary terms and architectural concepts used across smpl. Brand and offers; the four system layers; the internal subsystems; the doctrine concepts the architecture rests on.

When in doubt about what something means in smpl context, this is the single source. Each definition links forward to the page where it's articulated in full.

Brand & Offers

  1. smpl The studio.

    A quiet software studio that builds vertically integrated AI engineering infrastructure. The lowercase wordmark is intentional — smpl is meant to read as an imprint, not a hero. Brand philosophy: reduction without loss.

  2. Persistent codebase intelligence The category.

    The capability smpl produces. Continuously maintained structural understanding of a codebase that survives across sessions, accumulates institutional memory, and grounds investigation and execution in actual system state rather than improvised context.

    See the system architecture
  3. Codebase Intelligence Review The canonical engagement.

    smpl's selective, manually-reviewed entry point — a written diagnostic of where legibility, context loss, and structural drag are most expensive in a specific codebase. The first step into a deeper engagement.

    See the review

System Layers

  1. Lux Legibility layer.

    The codebase intelligence layer. Fuses semantic retrieval, structural code intelligence, relation-aware retrieval, and domain discovery into a single continuously maintained structural view of a codebase. Symbol-, type-, and reference-level truth that plain-text search cannot provide.

    Read the technical paper
  2. Recon Investigation layer.

    The investigation engine. Evaluates each ticket through a pipeline of bounded-epistemology stages — reconnaissance, archetype classification, problem scoping, archetype-specific investigation — and produces targeted pushback questions grounded in actual codebase analysis. Ambiguity surfaces before engineering time is spent on it.

    Read the technical paper
  3. Corpus Memory layer.

    The institutional-memory substrate. A structured, git-backed knowledge repository where investigations, architectural decisions, accumulated specialist expertise, and resolved incidents are captured as queryable findings. The 50th investigation in a service is grounded in everything the previous 49 surfaced.

  4. WorkStream Execution layer.

    The orchestration substrate. Decomposes work into a DAG with explicit dependency edges, runs each task in a bounded context, and structurally prevents context exhaustion. Where the layers above are stable, WorkStream drafts changes, scaffolds migrations, and opens pull requests with tests and evidence, always carrying the context of the layers beneath.
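The dependency-DAG pattern the WorkStream entry describes can be sketched in plain Python. This is an illustration only, not smpl's API: `Task`, `execute`, and the bounded-context dictionary are hypothetical names standing in for the idea that each task receives only the outputs of its declared dependencies, never the full run history.

```python
from dataclasses import dataclass
from graphlib import TopologicalSorter
from typing import Callable

@dataclass
class Task:
    name: str
    deps: list[str]                 # explicit dependency edges
    run: Callable[[dict], object]   # receives only the outputs of its deps

def execute(tasks: list[Task]) -> dict:
    graph = {t.name: set(t.deps) for t in tasks}
    by_name = {t.name: t for t in tasks}
    outputs: dict[str, object] = {}
    # static_order() yields each task only after all of its dependencies.
    for name in TopologicalSorter(graph).static_order():
        task = by_name[name]
        # Bounded context: a task sees only its declared inputs,
        # never the accumulated history of the whole run.
        bounded = {d: outputs[d] for d in task.deps}
        outputs[name] = task.run(bounded)
    return outputs
```

The structural point is in the `bounded` dictionary: context exhaustion is prevented by construction, because no task can read anything it did not declare an edge to.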

Internal Subsystems

  1. Researcher The learning engine.

    An automated research platform that runs experiments across the seven optimization tiers, scored quantitatively against an evaluation corpus that grows with every real ticket. Treats every prompt, configuration, context strategy, and tool exposure as a tunable parameter calibrated empirically rather than by intuition.

    See §03 of the paper
  2. Faculty / Faculty Agents Specialist roles.

    Per-stage, per-archetype specialist agents that accumulate domain-specific expertise across many tickets in the same service. A faculty agent knows its domain deeply, knows the limits of that domain, and defers outside its scope. Accumulated faculty findings are first-class retrieval sources for future investigations.

  3. Recon Pipeline Recon's deterministic stages.

    Recon's typed, mandated, bounded sequence: Reconnaissance → Archetype Classification → Problem Scoping → Investigation (with branches per archetype). The structure is the enforcement — no stage sees the full investigation, no stage is permitted to reason outside its scope.

  4. Data Fabric Recon's external data surface.

    A workspace-scoped configuration of external data sources that investigation agents can query during analysis: code intelligence (Lux), Linear, Slack, Sentry, read-only database replicas, log groups. Credentials are resolved per workspace and injected ephemerally, scoped to a single evaluation.
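The Recon Pipeline entry above says "the structure is the enforcement." One minimal way to picture that is a chain of typed stages where each function's signature admits only the previous stage's output. The stage names are from the glossary; every type, field, and return value below is a hypothetical sketch, not smpl's actual schema.

```python
from dataclasses import dataclass
from enum import Enum

class Archetype(Enum):
    BUG = "bug"
    FEATURE = "feature"

@dataclass(frozen=True)
class ReconReport:       # output of Reconnaissance
    touched_files: tuple

@dataclass(frozen=True)
class Classification:    # output of Archetype Classification
    archetype: Archetype

@dataclass(frozen=True)
class Scope:             # output of Problem Scoping
    question: str

def reconnaissance(ticket: str) -> ReconReport:
    # Stage 1 sees only the raw ticket.
    return ReconReport(touched_files=("auth/session.py",))

def classify(report: ReconReport) -> Classification:
    # Stage 2 sees only the recon report, never the ticket text.
    return Classification(archetype=Archetype.BUG)

def scope(c: Classification) -> Scope:
    # Stage 3 sees only the classification.
    return Scope(question=f"Which {c.archetype.value} path regressed?")

def pipeline(ticket: str) -> Scope:
    # Each stage receives only the typed output of the stage before it,
    # so no stage can reason outside its scope or see the full investigation.
    return scope(classify(reconnaissance(ticket)))
```

The enforcement is in the signatures: `scope` cannot inspect the ticket because the ticket is simply not in its input type.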

Doctrine Concepts

  1. Bounded epistemology Architectural humility.

    The principle that an agent should know the limits of its knowledge and operate strictly within its epistemic domain. Enforced structurally — through typed interfaces, separated planes, and pipeline stages — rather than encouraged through prompts. Humility imposed by guideline disappears under load; humility imposed by ontology can't be escaped.

    See §05 of Ontological Foundations
  2. Three-Layer Ontology Data · Logic · Action.

    The taxonomy that defines an agent before it defines what the agent does. Data Layer specifies what the agent perceives, produces, and remembers. Logic Layer specifies how it reasons and what authority it holds. Action Layer specifies what it can affect and where it is forbidden. Behavior follows from being.

    See §02 of Ontological Foundations
  3. Four Modes EXEC · OPER · ADV · OBS.

    The semantic logic modes that map separated cognitive powers onto agents. Executive (binding decisions only), Operational (direct execution), Advisory (guidance without authority), Observational (analysis without modification). Each agent operates within a bounded cognitive scope.

    See §03 of Ontological Foundations
  4. Three-Plane Isolation Model Control · Knowledge · Action planes.

    The architectural embodiment of the ontology. Each plane runs in its own infrastructure with strict write boundaries that match its ontological role. The orchestrator runs in Control and writes nothing; advisory and observational agents run in Knowledge and write only to long-term memory; operational agents run in Action and write only to artifacts.

    See §06 of Ontological Foundations
  5. Five-Layer Optimization Hierarchy Model · System Prompt · User Prompt · Context · Tooling.

    The five layers that determine the quality of any AI-driven use case, where each layer is a ceiling for the layers below. The compounding gains live in simultaneous optimization across all five, not in depth at any single layer.

    See §02 of What Is Neil?
  6. Seven Tiers Cheap to expensive optimization escalation.

    Researcher's optimization ladder, from Tier 0 (prompt variants) through Tier 6 (dataset curation). Each tier consumes the signal produced by the tiers below it. Lower tiers are cheap and fast; higher tiers are expensive and slow but increasingly structural.

    See §04 of What Is Neil?
  7. Context Exhaustion Drowning in accumulated history.

    The persistent failure mode where an agent spends its reasoning budget navigating its own context rather than solving the problem. The architecture defends against it at four levels simultaneously — persona, ontological, epistemological, and DAG-orchestration segmentation — none sufficient on its own.

    See §06 of What Is Neil?
  8. Autonomous Team Unit (ATU) The unit of organizational scale.

    A complete, self-contained execution unit — Team Lead, Implementation, Verification, Validation — that pulls work from a backlog and ships verified artifacts. ATUs are functionally substitutable. Scale comes from running more in parallel, not from growing any individual one.

    See §08 of Ontological Foundations
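The Four Modes and the Three-Plane Isolation Model describe permissions that are structural rather than prompted. A hedged sketch of that idea, with the mode names taken from the glossary and the write-target table invented here for illustration (Control writes nothing; Knowledge writes only long-term memory; Action writes only artifacts):

```python
from enum import Enum, auto

class Mode(Enum):
    EXEC = auto()  # Executive: binding decisions only
    OPER = auto()  # Operational: direct execution
    ADV = auto()   # Advisory: guidance without authority
    OBS = auto()   # Observational: analysis without modification

# Hypothetical write-permission table mirroring the three planes:
# EXEC lives in Control, ADV/OBS in Knowledge, OPER in Action.
WRITE_TARGETS = {
    Mode.EXEC: frozenset(),               # Control plane: writes nothing
    Mode.ADV: frozenset({"memory"}),      # Knowledge plane: memory only
    Mode.OBS: frozenset({"memory"}),
    Mode.OPER: frozenset({"artifacts"}),  # Action plane: artifacts only
}

def check_write(mode: Mode, target: str) -> bool:
    # Humility imposed by ontology: the table decides, not the prompt.
    return target in WRITE_TARGETS[mode]
```

The point of the sketch is that an observational agent asking to modify an artifact is not talked out of it; the request is unrepresentable in its permission set.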

Definitions are scaffolding. The work is what makes them real.

Every term in this glossary corresponds to something the architecture actually does on a real codebase. Start with the Codebase Intelligence Review.