Structural Explainability

License: MIT

Defines neutral constraints under which explainability is possible without embedding interpretation.

Structural Explainability defines the minimal, stable constraints under which identity, structure, change, explanation, and persistent disagreement can coexist over time without embedding interpretation.

It characterizes the representational conditions required for explainability to remain possible across incompatible legal, political, scientific, or normative frameworks.

This organization contains specifications, formalizations, and papers that define, justify, and defend that neutral space.

Organization Structure

This organization is structured around roles:

| Role | Purpose |
| --- | --- |
| Specifications | Define what must be true for admissibility |
| Formalizations | Demonstrate that specifications are internally consistent |
| Foundations | Establish the underlying theoretical results |
| Papers | Justify why the constraints are necessary and unavoidable |
| Boundaries & Overlays | Define how interpretation may attach without entering the substrate |

Normative Specifications

These repositories define the admissible representational space for structurally explainable systems. They are normative only in the sense of defining constraints, not interpretations.

| Repository | Purpose | Status |
| --- | --- | --- |
| spec-se | Neutrality and boundary constraints of Structural Explainability | Normative |
| spec-ae | Accountable Entities and identity regimes | Normative |
| spec-ep | Graph evolution over accountable entities | Normative |
| spec-cee | Contextual interfaces for explanation, attestation, and provenance | Normative |
| spec-se-appendix | Identifier rules, examples, and cross-spec patterns | Informative |

Formalizations

These repositories do not define meaning or behavior. They demonstrate that the specifications are internally consistent, coherent, and composable under formal reasoning.

| Repository | Purpose | CI | Description |
| --- | --- | --- | --- |
| AccountableEntities | Entity-regime instantiation | CI | Formalization of AE identity regimes |
| EvolutionProtocol | Neutral exchange substrate | CI | Formalization of EP graph evolution |
| StructuralExplainability | Cross-cutting constraints | CI | Neutrality and conformance predicates |

Foundations

| Repository | Purpose | CI | Description |
| --- | --- | --- | --- |
| NeutralSubstrate | Neutrality theorem | CI | Substrates stable under incompatible extensions must be pre-causal and pre-normative |
| IdentityRegimes | Identity regimes | CI | Six identity-and-persistence regimes are necessary and sufficient for accountability-oriented substrates under neutrality assumptions |

Papers

Papers provide theoretical justification for the specifications. They are explanatory, not normative.

| Repository | Focus | Status | Description |
| --- | --- | --- | --- |
| paper-100-neutral-substrate | Neutrality theorem | Submitted | Narrative exposition of the neutrality theorem and its formal proof, establishing design constraints for neutral representational substrates |
| paper-200-identity-regimes | Identity regimes | Submitted | Narrative exposition of the identity-regimes result and its formal justification |

Boundary and Overlay Specifications

These repositories define additional structural layers or boundaries that operate relative to the neutral substrate. They do not alter identity, structure, or recorded change.

| Repository | Purpose | CI | Description |
| --- | --- | --- | --- |
| CEE | Explanation overlay | CI | Structural forms for contextual explanation and evidence |
| InterpretationBoundary | Interpretation boundary | CI | Conditions under which external frameworks may interpret substrate records |
| GovernanceBoundary | Governance boundary | CI | Governance |

Design Commitments

Across all repositories (a minimal sketch follows this list):

  • Identity precedes explanation.
  • Structure and change are recorded without interpretation.
  • Interpretation remains external, attributable, and contestable.
  • Disagreement is representable and not forced to resolve.
  • No domain semantics are embedded in the core.
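
How these commitments fit together can be pictured with a small, hedged sketch in Lean, the language of the formalizations below. All type and field names here are illustrative and are not taken from the specifications: substrate records carry no interpretive fields, interpretations point into the substrate from outside, and two conflicting interpretations simply coexist.

```lean
-- Illustrative only: not the normative data model.

structure RecordId where
  value : String

-- Substrate side: identity, structure, and change, with no interpretive fields.
structure SubstrateRecord where
  id      : RecordId
  payload : String

-- Interpretation side: external, attributable, contestable.
structure Interpretation where
  target : RecordId   -- references the substrate; never rewrites it
  author : String     -- attributable
  claim  : String     -- contestable

-- Persistent disagreement: same target, different claims, both representable
-- and never forced to resolve.
def disagreement (a b : Interpretation) : Prop :=
  a.target.value = b.target.value ∧ a.claim ≠ b.claim
```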

Intentionally Excluded

The following are intentionally excluded from this organization:

  • Domain vocabularies (except clearly labeled examples)
  • Application schemas or data models
  • Analytics, inference, or decision systems
  • Governance or enforcement frameworks
  • Visualization or tooling layers

Domain projects may claim conformance with these specifications, but they remain outside this core.

How to Use This Organization

  • To implement Structural Explainability, start with the specifications.
  • To understand the justification, start with the papers.
  • To verify coherence, consult the Lean formalizations.

Core Statement

Structural Explainability defines a neutral structural substrate that enables explainability by preserving identity, structure, and change while allowing disagreement to remain external, attributable, and unresolved.

At its core, Structural Explainability is concerned with identity. Stable identity is the precondition for explanation, provenance, and disagreement over time. Identity must persist independently of interpretation, authority, or consensus.

Structural Explainability defines the complete admissible space of representation. It simultaneously bounds what representation must not do and contains all representation that is permitted. Within this space, identity, structure, and change may be recorded, while interpretation, explanation, and judgment are constrained to remain external to the substrate.

  • Accountable Entities define identity regimes that allow entities to persist across time and change.
  • Evolution Protocol defines the evolution of structural relationships among those entities, recording change without embedding explanation.

Together, they form the structural substrate.
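
As a rough illustration of that division of labor, the following Lean sketch shows one possible shape for the substrate, under the assumption that entities carry persistent identifiers and that the Evolution Protocol behaves like an append-only record of structural change. The type and field names are hypothetical, not drawn from spec-ae or spec-ep.

```lean
-- Hypothetical sketch of the substrate: identifiers persist, change is
-- appended, and nothing recorded here says why a change happened.

structure AccountableEntity where
  id     : String   -- persistent identifier, independent of interpretation
  regime : String   -- which identity regime governs its persistence

inductive Change where
  | addEntity (e : AccountableEntity)
  | relate    (source target relation : String)  -- structural edge only
  | dissolve  (id : String)                      -- recorded, not explained

-- The substrate is just the ordered record of changes.
abbrev Substrate := List Change

-- Append-only: earlier records are never rewritten.
def recordChange (s : Substrate) (c : Change) : Substrate :=
  s ++ [c]
```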

  • Contextual Evidence & Explanations provide the structural interface by which external interpretation may be attached to the substrate, referenced from it, and reasoned about, without entering or contaminating it.
  • Context tags, explanations, attestations, and provenance are not facts about the world; they are structured references to interpretive acts that occur outside the substrate.

Interpretation does not disappear. It is made explicit, attributable, and contestable.
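
A minimal sketch of that interface, under the assumption that overlay records attach by reference only, might look as follows in Lean. The names are illustrative and do not come from spec-cee.

```lean
-- Illustrative overlay shapes: overlay records point at substrate
-- identifiers and name the interpretive act and its author; the
-- substrate never refers back to the overlay.

structure SubstrateRef where
  entityId : String   -- identifier of an entity or recorded change
deriving Repr

inductive InterpretiveAct where
  | explanation (text : String)
  | attestation (claim : String)
  | provenance  (source : String)
deriving Repr

structure OverlayRecord where
  target : SubstrateRef     -- attached by reference, never by mutation
  act    : InterpretiveAct  -- the interpretive act being referenced
  author : String           -- explicit and attributable
deriving Repr
```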

Structural Explainability is not anti-interpretation; it is anti-implicit interpretation. Elements of interpretation may exist only in forms that do not alter identity, structure, or recorded change.

Structural Explainability is designed for plural systems: independent implementations that represent the same phenomena in different ways. Uniform naming, shared ontologies, or centralized authority are not required. Differences are addressed through explicit, accountable mappings rather than forced normalization or consensus.
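
One way such a mapping could be represented, purely as a hedged example, is a record that names both identifiers and the party asserting the correspondence, so that the mapping itself is attributable and contestable. None of these names appear in the specifications.

```lean
-- Illustrative cross-implementation mapping: the correspondence is an
-- explicit, authored record, not a shared ontology or forced rename.

structure ExternalId where
  system  : String   -- which implementation issued the identifier
  localId : String   -- the identifier within that implementation
deriving Repr

structure Mapping where
  source : ExternalId
  target : ExternalId
  author : String    -- the mapping is itself attributable and contestable
  basis  : String    -- stated grounds for the correspondence
deriving Repr
```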

Domains such as science, model development, and law do not alter the substrate. They specialize explanation by contributing controlled vocabularies for contextual scoping. These vocabularies are constrained by Structural Explainability and do not assert truth, causality, or normativity.
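
As a hypothetical example of such a vocabulary, a legal domain might contribute plain scope labels that tag the context of an external explanation and carry no semantic payload of their own; the labels below are invented for illustration.

```lean
-- Hypothetical domain vocabulary: scope labels only, asserting nothing
-- about truth, causality, or normativity.

inductive LegalScope where
  | contractLaw
  | caseLaw
  | regulatoryFiling
deriving Repr

structure ScopedExplanation where
  subjectId : String       -- reference into the substrate
  scope     : LegalScope   -- contextual scoping only
  author    : String       -- attributable
  text      : String       -- the external, contestable explanation
deriving Repr
```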

The result is a system that records reality without deciding its meaning, enables explanation without enforcing agreement, and supports long-term coordination across disagreement, institutional change, and time.
