% Options for packages loaded elsewhere
\PassOptionsToPackage{unicode}{hyperref}
\PassOptionsToPackage{hyphens}{url}
\PassOptionsToPackage{dvipsnames,svgnames,x11names}{xcolor}
%
\documentclass[
11pt]{article}
\usepackage{amsmath,amssymb}
\usepackage{iftex}
\usepackage{fontspec}
\usepackage{unicode-math}
\defaultfontfeatures{Scale=MatchLowercase}
\defaultfontfeatures[\rmfamily]{Ligatures=TeX,Scale=1}
% Use upquote if available, for straight quotes in verbatim environments
\IfFileExists{upquote.sty}{\usepackage{upquote}}{}
\IfFileExists{microtype.sty}{% use microtype if available
\usepackage[]{microtype}
\UseMicrotypeSet[protrusion]{basicmath} % disable protrusion for tt fonts
}{}
\makeatletter
\@ifundefined{KOMAClassName}{% if non-KOMA class
\IfFileExists{parskip.sty}{%
\usepackage{parskip}
}{% else
\setlength{\parindent}{0pt}
\setlength{\parskip}{6pt plus 2pt minus 1pt}}
}{% if KOMA class
\KOMAoptions{parskip=half}}
\makeatother
\usepackage{xcolor}
\usepackage[margin=1in]{geometry}
\usepackage{longtable,booktabs,array}
\usepackage{calc} % for calculating minipage widths
% Correct order of tables after \paragraph or \subparagraph
\usepackage{etoolbox}
\makeatletter
\patchcmd\longtable{\par}{\if@noskipsec\mbox{}\fi\par}{}{}
\makeatother
% Allow footnotes in longtable head/foot
\IfFileExists{footnotehyper.sty}{\usepackage{footnotehyper}}{\usepackage{footnote}}
\makesavenoteenv{longtable}
\setlength{\emergencystretch}{3em} % prevent overfull lines
\providecommand{\tightlist}{%
\setlength{\itemsep}{0pt}\setlength{\parskip}{0pt}}
\setcounter{secnumdepth}{5}
\IfFileExists{bookmark.sty}{\usepackage{bookmark}}{\usepackage{hyperref}}
\IfFileExists{xurl.sty}{\usepackage{xurl}}{} % add URL line breaks if available
\urlstyle{same}
\hypersetup{
colorlinks=true,
linkcolor={Maroon},
filecolor={Maroon},
citecolor={Blue},
urlcolor={Blue},
pdfcreator={LaTeX via pandoc}}
\author{}
\date{}
\begin{document}
\hypertarget{the-incompleteness-of-observation}{%
\section{The Incompleteness of Observation}\label{the-incompleteness-of-observation}}
\hypertarget{why-physics-biggest-contradiction-might-not-be-a-contradiction-at-all}{%
\subsubsection{Why Physics' Biggest Contradiction Might Not Be a Contradiction at All}\label{why-physics-biggest-contradiction-might-not-be-a-contradiction-at-all}}
\hypertarget{with-complete-mathematical-detail}{%
\subsubsection{With Complete Mathematical Detail}\label{with-complete-mathematical-detail}}
\textbf{Alex Maybaum --- March 2026}
\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center}
\hypertarget{the-worst-prediction-in-physics}{%
\subsection{The Worst Prediction in Physics}\label{the-worst-prediction-in-physics}}
Physics has two spectacularly successful theories. Quantum mechanics describes the behavior of atoms, particles, and light. General relativity describes gravity, space, and time. Each has been confirmed to extraordinary precision. They have never disagreed with any experiment.
They disagree with each other.
Ask quantum mechanics how much energy empty space contains and it gives you a staggering number: roughly \(10^{113}\) joules per cubic meter. Ask general relativity the same question --- read the answer off the expansion rate of the universe --- and you get about \(6 \times 10^{-10}\) joules per cubic meter. The ratio is \(10^{122}\). For scale, the number of atoms in the observable universe is about \(10^{80}\). This is not a close call.
For decades, the assumption has been that something is deeply broken --- that one or both calculations contain an error, and that finding the mistake will point the way to a unified theory of everything.
This paper argues the opposite. Neither calculation is wrong. They disagree because they \emph{must}. In fact, that massive \(10^{122}\) discrepancy isn't a failure at all. It is the strongest piece of existing evidence we have that quantum mechanics is not the fundamental bedrock of reality, but an emergent description forced upon us by our limited vantage point.
The argument is built from a chain of mathematical proofs, each feeding into the next. This document explains what the paper claims, walks through the logic of every major proof, and shows how they connect.
\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center}
\hypertarget{the-starting-point-four-axioms-and-three-conditions}{%
\subsection{The Starting Point: Four Axioms and Three Conditions}\label{the-starting-point-four-axioms-and-three-conditions}}
Every mathematical proof starts from assumptions, and the paper is explicit about its four. None of them mention quantum mechanics. None of them mention general relativity. They are:
\textbf{Axiom 1: Deterministic dynamics.} The universe evolves according to definite rules. Given the complete state at one time, the state at any other time is uniquely determined. Mathematically, the state lives in a phase space Γ, and there's a Hamiltonian H\_tot that governs how the state changes:
\[\frac{\partial \rho}{\partial t} = \{H_{\text{tot}}, \rho\}\]
The curly braces \{,\} are Poisson brackets --- the classical mechanics version of ``how things change.'' The function ρ represents a probability distribution, but that probability reflects the observer's ignorance, not any fundamental randomness. Think of it like a billiard table: if you knew the exact position and velocity of every ball, you could predict the future perfectly.
\textbf{Axiom 2: Finiteness.} The system has finitely many distinguishable states. There's a smallest meaningful size ε --- you can't resolve anything smaller. This means the configuration space is finite, not continuous. This matters because finite systems have a property infinite systems don't --- they must eventually return to their starting state (Poincaré recurrence). In Part I, ε is left unspecified. In Part II, self-consistency forces ε = 2 l\_p (twice the Planck length).
\textbf{Axiom 3: Causal partition.} The total phase space splits into two pieces:
\[\Gamma = \Gamma_V \times \Gamma_H\]
Γ\_V is the visible sector (what the observer can access). Γ\_H is the hidden sector (what they cannot). The total Hamiltonian splits correspondingly:
\[H_{\text{tot}} = H_V + H_H + H_{\text{int}}\]
H\_V governs the visible sector alone. H\_H governs the hidden sector alone. H\_int couples them --- it's how the two sectors talk to each other. Without H\_int, the two sectors would evolve independently and the observer would never feel the hidden sector's influence.
\textbf{Axiom 4: Classical probability.} The observer uses standard Kolmogorov probability theory. No exotic probability theories, no negative probabilities, no quantum probability --- just ordinary probability. This is the axiom that makes the result surprising: we're putting in classical probability and getting out quantum mechanics.
That's it. The claim is that quantum mechanics --- the Schrödinger equation, the Born rule, superposition, entanglement, Bell inequality violations --- follows from these four premises alone, given the right conditions on the hidden sector:
\textbf{C1: Non-zero coupling (H\_int ≠ 0).} The visible and hidden sectors interact. Information flows between them. Without this, the observer's room is perfectly isolated --- nothing interesting happens.
\textbf{C2: Slow bath (τ\_S ≪ τ\_B).} The hidden sector evolves much more slowly than the visible sector. τ\_S is the timescale of visible-sector processes; τ\_B is the timescale of hidden-sector processes. This is the \emph{opposite} of the usual assumption in physics. Normally, people assume the environment is fast and chaotic (a ``heat bath'' that quickly forgets everything). Here, the environment is slow and has a long memory. This is what makes the dynamics non-Markovian.
\textbf{C3: Sufficient capacity (N\_H ≫ N\_V).} The hidden sector has many more degrees of freedom than the visible sector. There's enough ``room'' to store information about the visible sector's history without running out of space.
The axioms set the stage. The conditions determine what kind of show plays on it. The next section explains why the cosmological horizon satisfies all three.
\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center}
\hypertarget{the-observers-blind-spot}{%
\subsection{The Observer's Blind Spot}\label{the-observers-blind-spot}}
Light travels at a finite speed, and the universe has a finite age. Put those together and every observer has a horizon --- a boundary beyond which no signal has had time to arrive. Everything beyond that boundary is ordinary physics: fields, particles, radiation. But it is structurally inaccessible. Not because our telescopes aren't good enough, but because the geometry of spacetime forbids it. No technology that obeys the speed of light can reach past it.
This means every observer in the universe is in the same epistemic situation: there are degrees of freedom --- a vast number of them --- that influence what you measure but that you can never track. When you write down the laws of physics for the things you \emph{can} see, you're forced to average over everything you can't. You have to ``trace out'' the hidden sector.
Here's what that looks like concretely. The total system --- visible plus hidden --- is deterministic. If you knew the complete state, you could predict the future exactly. But you don't know the hidden part. You know the visible state is \(x\), but there are many possible hidden states compatible with \(x\), and each one sends \(x\) to a different visible future. Hidden state \(h_1\) might send the particle left; hidden state \(h_2\) might send it right. Since you can't tell which hidden state you're in, the best you can do is assign probabilities: average over all the possible hidden states, weighted by how likely each one is. The result is a set of \emph{transition probabilities} --- the chance that visible state \(x\) at time \(t_1\) becomes visible state \(y\) at time \(t_2\). You've gone from a deterministic system you can't fully see to a probabilistic one you can. That's a stochastic process, and it's the only description available to any observer who can't access the hidden sector.
The standard expectation is that this should produce something boring --- classical, memoryless noise. And it would, if the hidden sector were fast and forgettable, like air molecules bouncing off a grain of pollen. Each kick is independent of the last. Physicists call this \emph{Markovian} behavior.
But the hidden sector beyond the cosmological horizon is not like that. It differs in three specific ways, and the paper proves that their conjunction changes everything.
\textbf{It's coupled.} The horizon is not a static wall. Stress-energy conservation enforces continuous dynamical correlations across it. Matter crosses the horizon, and the horizon area adjusts in response to interior energy density. Information flows in both directions. (Condition C1.)
\textbf{It's slow.} The hidden sector's correlation time is set by the Hubble timescale --- roughly \(10^{17}\) seconds, the age of the universe. Any laboratory experiment operates on timescales of \(10^{-15}\) seconds or shorter. The ratio is \(10^{-32}\). The hidden sector cannot ``reset'' between your measurements. Every correlation it picks up from one experiment is still there when the next one begins. This is the \emph{opposite} of the standard Markovian regime, where the environment decorrelates fast. Here, it never decorrelates at all. (Condition C2.)
\textbf{It's vast.} The hidden sector has roughly \(10^{122}\) independent degrees of freedom --- the Bekenstein-Hawking entropy of the cosmological horizon. No experiment you could ever perform would appreciably disturb its state. Its memory never saturates. (Condition C3.)
A fast environment with vast capacity would wash out correlations (Markovian noise). A slow environment with limited capacity would eventually fill up and stop recording. Only an environment that is simultaneously coupled, slow, and vast sustains the kind of persistent, non-decomposable correlations that the paper calls \emph{P-indivisibility} --- a technical term meaning the system's transition probabilities at different times cannot be broken into independent steps.
\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center}
\hypertarget{partition-relativity-1.4}{%
\subsection{Partition-Relativity (§1.4)}\label{partition-relativity-1.4}}
This is the first real proof in the paper, and it's beautifully simple.
\textbf{What it proves:} The emergent description (what the observer sees) depends \emph{only} on the partition --- on which degrees of freedom are visible and which are hidden. Nothing else.
\textbf{The formula:}
\[T_{ij}(t_2, t_1) = \int_{\Gamma_H} \delta_{x_j}[\pi_V(\phi_{t_2-t_1}(x_i, h))] \, d\mu(h)\]
Unpacking each symbol:
\begin{itemize}
\tightlist
\item
\textbf{T\_ij}: The probability of transitioning from visible state x\_i to visible state x\_j in the time interval from t\_1 to t\_2. This is what the observer measures.
\item
\textbf{(x\_i, h)}: The complete state --- visible part x\_i, hidden part h.
\item
\textbf{φ\_\{t2-t1\}}: The deterministic evolution. Takes the complete state at time t\_1 and returns the complete state at time t\_2. Uniquely determined by Axiom 1.
\item
\textbf{π\_V}: Projection onto the visible sector. Takes a complete state (x, h) and returns just x.
\item
\textbf{δ\_\{xj\}{[}\ldots{]}}: The Kronecker delta. Equals 1 if the visible part ended up at x\_j, equals 0 otherwise.
\item
\textbf{dμ(h)}: Integration over all possible hidden states, weighted by the Liouville measure.
\end{itemize}
\textbf{In plain English:} For each possible hidden state h, check whether starting at (x\_i, h) and evolving forward lands the visible part on x\_j. Count up all the hidden states where this happens, weighted by how likely each hidden state is. The result is the probability of the transition x\_i → x\_j.
\textbf{The proof:} The formula has exactly three inputs: (1) the dynamics φ\_t --- fixed by Axiom 1, (2) the partition (Γ\_V, Γ\_H) and projection π\_V --- fixed by Axiom 3, and (3) the measure μ --- fixed by Axiom 4 (Liouville measure is the unique choice). Since inputs 1 and 3 are determined by the axioms, the only free input is the partition. Therefore: everything about the emergent description depends only on the partition. QED.
\textbf{Why the Liouville measure is unique:} The observer needs a ``prior'' --- a way to weight the hidden states. Liouville measure is the unique measure on phase space that is absolutely continuous (no point masses) and invariant under Hamiltonian flow. Any smooth initial distribution evolves toward it. Singular measures are excluded by Axiom 4's requirement of standard probability theory. The observer has no choice.
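The partition-relativity formula can be sketched numerically on finite sets. The following is an illustrative toy (the function name \texttt{phi} and the two-state dynamics are invented for the example, not taken from the paper); the uniform weight over the hidden states plays the role of the Liouville measure:

```python
from fractions import Fraction

def transition_matrix(phi, visible, hidden):
    """Finite-set version of the partition-relativity formula:
    T[x_i][x_j] is the fraction of hidden states h for which the
    evolved complete state phi(x_i, h) projects onto x_j."""
    T = {xi: {xj: Fraction(0) for xj in visible} for xi in visible}
    for xi in visible:
        for h in hidden:
            xj, _ = phi(xi, h)   # evolve the complete state, project to visible
            T[xi][xj] += Fraction(1, len(hidden))
    return T

# Toy dynamics: the hidden bit decides whether the visible bit flips.
phi = lambda x, h: ((x + h) % 2, h)
T = transition_matrix(phi, [0, 1], [0, 1])
# Every row sums to 1, as it must for a stochastic matrix.
assert all(sum(row.values()) == 1 for row in T.values())
```

Only the partition and the dynamics enter; the measure was forced. That is the content of the theorem.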
\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center}
\hypertarget{emergent-stochasticity-and-the-slow-bath-regime-2.12.2}{%
\subsection{Emergent Stochasticity and the Slow-Bath Regime (§§2.1--2.2)}\label{emergent-stochasticity-and-the-slow-bath-regime-2.12.2}}
The total system is deterministic. If you knew both x and h, you'd know the future with certainty. But the observer knows only x. Different hidden states h send the same visible state x to different futures.
Example: visible state is ``Heads.'' Hidden state could be any die value 1--6. If the die is 1 or 2, the dynamics flip the coin to Tails. If the die is 3--6, the coin stays at Heads. The observer doesn't know the die, so they see: P(Heads → Tails) = 2/6 = 1/3. The randomness is epistemic (from ignorance) not ontological (from fundamental indeterminacy).
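The arithmetic of this example can be confirmed by direct enumeration (a minimal sketch, not from the paper):

```python
from fractions import Fraction

# Hidden die values 1-2 flip the coin; 3-6 leave it at Heads.
die = range(1, 7)
flips = sum(1 for h in die if h in (1, 2))
p_heads_to_tails = Fraction(flips, 6)   # epistemic probability from averaging over the die
assert p_heads_to_tails == Fraction(1, 3)
```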
In a normal ``heat bath'' scenario, the environment is fast and chaotic. It scrambles any information you write into it before you can read it back. This produces Markovian (memoryless) dynamics --- each step is independent of previous steps.
C2 inverts this. The hidden sector is slow. When the visible sector interacts with it (writing information through H\_int), the information stays there. At the next interaction, the hidden sector reads back what was written before. The observer sees history-dependent transition probabilities --- what happens next depends on what happened before.
This is non-Markovian dynamics. It's the key ingredient that separates quantum mechanics from classical stochastic processes.
\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center}
\hypertarget{the-p-indivisibility-theorem-2.3}{%
\subsection{The P-Indivisibility Theorem (§2.3)}\label{the-p-indivisibility-theorem-2.3}}
\textbf{What it claims.} If a deterministic system is split into a visible and hidden sector, and these sectors are genuinely coupled, then the visible sector's behavior \emph{cannot} be a simple memoryless random process. It must exhibit P-indivisibility --- a specific kind of built-in memory.
\textbf{What ``P-indivisible'' means.} A stochastic process is ``P-divisible'' if you can always find a valid transition matrix connecting any two time points. Mathematically: for any times t\_1 \textless{} t\_2 \textless{} t\_3, there exists a stochastic matrix Λ such that:
\[T(t_3, t_1) = \Lambda(t_3, t_2) \cdot T(t_2, t_1)\]
where Λ has non-negative entries and rows summing to 1. ``P-indivisible'' means this fails --- the ``intermediate propagator'' would need negative entries, which means it's not a valid probability matrix.
Breuer, Laine, and Piilo proved that P-indivisibility is equivalent to ``information backflow'' --- the system's distinguishability can \emph{increase} over time. In a classical Markov process, you can only lose information (mixing). In a P-indivisible process, information comes back. This is exactly what quantum systems do --- interference, revivals, and non-classical correlations all involve information returning from where it was stored.
\textbf{The setup.} We work on finite sets (Axiom 2). The visible sector has states C\_V = \{x\_1, x\_2, \ldots\} with \textbar C\_V\textbar{} ≥ 2. The hidden sector has states C\_H = \{h\_1, h\_2, \ldots\}. The total dynamics is a bijection φ on C\_V × C\_H. The transition matrix is:
\[T_{ij} = \frac{|\{h \in \mathcal{C}_H : \pi_V(\varphi(x_i, h)) = x_j\}|}{|\mathcal{C}_H|}\]
\textbf{The key tool --- total variation distance:}
\[d(p, q) = \frac{1}{2}\sum_k |p_k - q_k|\]
This measures how distinguishable two probability distributions are. If d = 1, they're perfectly distinguishable. If d = 0, they're identical. For P-divisible processes, d can only decrease or stay constant.
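A direct transcription of the definition (an illustrative helper, not code from the paper):

```python
def tv_distance(p, q):
    """Total variation distance between two probability vectors:
    half the L1 distance. 1 = perfectly distinguishable, 0 = identical."""
    return sum(abs(pk - qk) for pk, qk in zip(p, q)) / 2

assert tv_distance([1, 0], [0, 1]) == 1        # perfectly distinguishable
assert tv_distance([0.5, 0.5], [0.5, 0.5]) == 0  # identical
```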
\textbf{Step 1 --- Recurrence.} φ is a bijection on a finite set. Keep applying φ and you must eventually return to where you started --- there are only finitely many states to visit. Formally: there exists N such that φ\^{}N = id. So T\^{}(N) = I, and:
\[d(\delta_i T^{(N)}, \delta_j T^{(N)}) = d(\delta_i, \delta_j) = 1\]
After N steps, states that started distinguishable are still perfectly distinguishable.
\textbf{Step 2 --- Strict contraction.} T is not a permutation matrix (this follows from C1 --- the coupling mixes things). So there exist states i, j, l where both T\_il \textgreater{} 0 and T\_jl \textgreater{} 0. The total variation distance after one step:
\[d(\delta_i T, \delta_j T) = \frac{1}{2}\sum_k |T_{ik} - T_{jk}| < 1\]
The inequality is strict because the distributions overlap. Distinguishability has decreased.
\textbf{Step 3 --- The punchline.} At t = 1: d \textless{} 1 (distinguishability decreased). At t = N: d = 1 (distinguishability restored). The distinguishability went down then came back up --- non-monotonic behavior. A P-divisible process can only have non-increasing distinguishability. Therefore the process is P-indivisible. QED.
The proof uses almost nothing --- just that the dynamics is a bijection on a finite set (Axioms 1 and 2) and that the coupling is non-trivial (C1). It is purely combinatorial.
\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center}
\hypertarget{the-accessible-timescale-lemma-2.3-continued}{%
\subsection{The Accessible-Timescale Lemma (§2.3 continued)}\label{the-accessible-timescale-lemma-2.3-continued}}
The recurrence proof shows P-indivisibility exists as a mathematical property. But the recurrence time is absurdly long --- for the cosmological case, on the order of \(e^{10^{122}}\) years. Nobody will ever observe it.
The accessible-timescale lemma shows that information backflow happens on \emph{laboratory timescales}, independently of recurrence.
\textbf{The mechanism:} At each interaction (timescale τ\_S), the coupling H\_int transfers some information from the visible sector to the hidden sector. Call the amount I\_0. Between interactions, the hidden sector's correlations decay with a rate set by its spectral gap Δ \textasciitilde{} 1/τ\_B.
The decay per visible-sector step is:
\[e^{-\Delta \tau_S} \approx 1 - \frac{\tau_S}{\tau_B}\]
When τ\_S ≪ τ\_B (C2), this is very close to 1 --- almost no decay. The hidden sector remembers almost perfectly between steps. After k steps, the cumulative decay is:
\[e^{-k\Delta\tau_S} \approx 1 - \frac{k\tau_S}{\tau_B}\]
As long as k·τ\_S ≪ τ\_B, the hidden sector retains \textasciitilde k bits of visible-sector history. The mutual information satisfies:
\[I(X_{<t}; X_{>t} \mid X_t) \geq I_0\left(1 - \frac{k\tau_S}{\tau_B}\right)\]
For the cosmological case: τ\_S \textasciitilde{} \(10^{-15}\) s, τ\_B \textasciitilde{} \(10^{17}\) s. Even after k = \(10^{20}\) steps, k·τ\_S/τ\_B \textasciitilde{} \(10^{-12}\) --- negligible. The hidden sector remembers everything.
\textbf{The role of C3:} The hidden sector's memory capacity is log\_2(\textbar C\_H\textbar) bits. If k bits of history are written but the capacity is only m \textless{} k bits, old data gets overwritten. C3 ensures m is large enough that the memory never saturates on observable timescales.
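Plugging the cosmological numbers from the text into the cumulative-decay estimate (straightforward arithmetic, sketched for concreteness):

```python
tau_S = 1e-15   # visible-sector timescale (s), per the text
tau_B = 1e17    # hidden-sector (Hubble) timescale (s)
k = 1e20        # number of visible-sector steps

# Fraction of stored correlation lost after k steps, from e^{-k tau_S / tau_B} ~ 1 - k tau_S / tau_B
cumulative_decay = k * tau_S / tau_B   # ~ 1e-12: essentially nothing has been forgotten
assert cumulative_decay < 1e-6
```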
\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center}
\hypertarget{the-coin-and-die-model-2.4}{%
\subsection{The Coin-and-Die Model (§2.4)}\label{the-coin-and-die-model-2.4}}
The paper builds a concrete toy model to make the mechanism tangible.
\textbf{Setup:}
\begin{itemize}
\tightlist
\item
  Visible: x ∈ \{0, 1\} (a coin: 0 = Heads, 1 = Tails)
\item
  Hidden: h ∈ \{1, 2, 3, 4, 5, 6\} (a die)
\item
  Total: 12 states
\end{itemize}
\textbf{The permutation σ:}
\begin{longtable}[]{@{}lll@{}}
\toprule\noalign{}
Input state & Output state & What happens \\
\midrule\noalign{}
\endhead
\bottomrule\noalign{}
\endlastfoot
(0, 1) & (1, 1) & Coin flips, die stays \\
(1, 1) & (0, 1) & Coin flips, die stays \\
(0, 2) & (1, 2) & Coin flips, die stays \\
(1, 2) & (0, 2) & Coin flips, die stays \\
(0, 3) & (0, 4) & Coin stays, die changes \\
(0, 4) & (0, 3) & Coin stays, die changes \\
(0, 5) & (0, 6) & Coin stays, die changes \\
(0, 6) & (0, 5) & Coin stays, die changes \\
(1, 3) & (1, 4) & Coin stays, die changes \\
(1, 4) & (1, 3) & Coin stays, die changes \\
(1, 5) & (1, 6) & Coin stays, die changes \\
(1, 6) & (1, 5) & Coin stays, die changes \\
\end{longtable}
Every swap is a transposition (a ↔ b), so σ² = id (apply twice, everything returns).
\textbf{Checking the conditions:} C1 (coupling): die values 1 and 2 flip the coin ✓. C2 (slow bath): σ² = id means recurrence time is 2 steps, giving τ\_S/τ\_B = 1/2 ✓. C3 (sufficient capacity): 6 hidden states vs 2 visible states ✓.
\textbf{Computing T(1,0).} Start at x = 0 (Heads). All 6 die values are equally likely.
\begin{itemize}
\tightlist
\item
h = 1: σ(0,1) = (1,1) → Tails
\item
h = 2: σ(0,2) = (1,2) → Tails
\item
h = 3: σ(0,3) = (0,4) → Heads
\item
h = 4: σ(0,4) = (0,3) → Heads
\item
h = 5: σ(0,5) = (0,6) → Heads
\item
h = 6: σ(0,6) = (0,5) → Heads
\end{itemize}
P(0 → 0) = 4/6 = 2/3, P(0 → 1) = 2/6 = 1/3. By the same logic for x = 1:
\[T(1,0) = \begin{pmatrix} 2/3 & 1/3 \\ 1/3 & 2/3 \end{pmatrix}\]
\textbf{Distinguishability at t = 1:}
\[d(\delta_0 T, \delta_1 T) = \frac{1}{2}(|2/3 - 1/3| + |1/3 - 2/3|) = 1/3\]
Started at d = 1. Now d = 1/3. Distinguishability decreased.
\textbf{What Markov would predict at t = 2:} Apply the same transition matrix again:
\[T(1,0)^2 = \begin{pmatrix} 5/9 & 4/9 \\ 4/9 & 5/9 \end{pmatrix}\]
Distinguishability would drop to d = 1/9. More mixing.
\textbf{What actually happens at t = 2:} σ² = id. Every state returns to its starting point.
\[T(2,0) = I = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}\]
Distinguishability is back to d = 1. Complete un-mixing. Impossible for a Markov process.
\textbf{The smoking gun --- negative entries.} If there were a valid stochastic matrix Λ(2,1) connecting steps 1 and 2:
\[\Lambda(2,1) = T(2,0) \cdot [T(1,0)]^{-1} = I \cdot \begin{pmatrix} 2 & -1 \\ -1 & 2 \end{pmatrix} = \begin{pmatrix} 2 & -1 \\ -1 & 2 \end{pmatrix}\]
The entries −1 are negative. No valid stochastic matrix exists. \textbf{This is P-indivisibility.}
\textbf{The mechanism in detail.} The die works as a memory register. At step 1, if the coin was at 0 and the die was at 1, the coin flips to 1 but the die stays at 1. The die value 1 now encodes the information ``the coin was at 0 and I flipped it.'' At step 2, σ sees (1, 1) and flips it back to (0, 1). The die read its own memory and reversed the flip. C1 (coupling) allows writing to the memory. C2 (slow bath) ensures it isn't erased between reads. C3 (sufficient capacity) ensures there's enough room. Together, they produce the information backflow that makes the process P-indivisible --- and therefore, by the stochastic-quantum correspondence, equivalent to quantum mechanics.
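Every number in this section can be reproduced by simulating σ directly. The sketch below (variable names are ours) computes T(1,0) and T(2,0) by evolving the complete state and averaging over the die, then exhibits the negative entries of the would-be intermediate propagator:

```python
import numpy as np

# The permutation sigma from the table: die values 1-2 flip the coin,
# die values 3-6 swap in pairs while the coin stays put.
swap = {3: 4, 4: 3, 5: 6, 6: 5}
def sigma(x, h):
    return (1 - x, h) if h in (1, 2) else (x, swap[h])

# One- and two-step transition matrices, averaging over the 6 die values.
T1 = np.zeros((2, 2))
T2 = np.zeros((2, 2))
for x in (0, 1):
    for h in range(1, 7):
        y1, g1 = sigma(x, h)
        y2, _ = sigma(y1, g1)       # evolve the *complete* state twice, then project
        T1[x, y1] += 1 / 6
        T2[x, y2] += 1 / 6

assert np.allclose(T1, [[2/3, 1/3], [1/3, 2/3]])
assert np.allclose(T2, np.eye(2))   # sigma^2 = id: complete un-mixing

# The would-be intermediate propagator Lambda(2,1) = T(2,0) T(1,0)^{-1}
# has negative entries (~ [[2, -1], [-1, 2]]), so no valid stochastic
# matrix connects steps 1 and 2: P-indivisibility.
Lam = T2 @ np.linalg.inv(T1)
assert Lam.min() < 0
```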
\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center}
\hypertarget{why-conditions-c2-and-c3-matter-physically}{%
\subsubsection{Why conditions C2 and C3 matter physically}\label{why-conditions-c2-and-c3-matter-physically}}
The P-indivisibility theorem needs only coupling (C1) and finiteness. So why does the paper insist on slow memory (C2) and vast capacity (C3)?
Because P-indivisibility without C2 and C3 might only show up at absurd timescales or might self-destruct. C2 ensures the memory persists on timescales accessible to actual experiments, not just at cosmic recurrence times. C3 ensures the hidden sector never runs out of room to store information --- if it saturates, later imprints overwrite earlier ones, and the process becomes effectively memoryless. Together, C2 and C3 guarantee that P-indivisibility is strong, persistent, and observationally relevant.
\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center}
\hypertarget{the-stochastic-quantum-correspondence-3.1-and-appendix-a}{%
\subsection{The Stochastic-Quantum Correspondence (§3.1 and Appendix A)}\label{the-stochastic-quantum-correspondence-3.1-and-appendix-a}}
This is the key link. Section 2 proved that the embedded observer's dynamics are P-indivisible. Section 3 shows this is mathematically equivalent to quantum mechanics.
\textbf{The core statement.} Any P-indivisible stochastic process on a finite configuration space of size n can be embedded into a unitarily evolving quantum system. Specifically, there exists a Hilbert space H (dimension ≤ n³) and a unitary operator U(t) such that:
\[T_{ij}(t) = |U_{ij}(t)|^2\]
This is the Born rule. The left side is the transition probability computed by averaging over hidden states (the classical formula from partition-relativity). The right side is the quantum mechanical probability --- the squared modulus of a matrix element of the unitary evolution operator. The equivalence is not approximate. It is not an analogy. It is a mathematical identity.
\textbf{Two independent routes to the same conclusion.} The primary route uses Barandes' stochastic-quantum correspondence (2023--2025): P-indivisibility means transition probabilities can't be factored through intermediate times --- try it and you get ``negative probabilities.'' In quantum mechanics, this is \emph{exactly what happens}: probability amplitudes combine to produce interference patterns that don't factorize classically. What Barandes proved is that these are the same mathematical object, written in different notation.
The secondary route, given in Appendix A, uses Stinespring's dilation theorem (1955): a deterministic bijection on a finite product space defines a permutation unitary; tracing out the hidden sector with the Liouville measure produces a completely positive quantum channel whose diagonal elements recover the classical transition probabilities exactly. This second route requires only textbook results. Either route alone suffices; together they ensure the bridge rests on no single recent result.
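The central identity can be checked numerically. A minimal sketch (ours, not the paper's): for any unitary U(t), the matrix of squared moduli |U_ij(t)|² is doubly stochastic, so it is a valid transition matrix at every time, exactly as the correspondence requires.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4

# A random Hermitian generator and its unitary evolution (hbar = 1 here).
A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
Hm = (A + A.conj().T) / 2
evals, V = np.linalg.eigh(Hm)
t = 0.7
U = V @ np.diag(np.exp(-1j * evals * t)) @ V.conj().T

# Born-rule map: each entry of T is the squared modulus of a U entry.
T = np.abs(U) ** 2

# T is doubly stochastic -- nonnegative, with rows and columns summing to 1.
print(T.sum(axis=0))   # each column sums to 1
print(T.sum(axis=1))   # each row sums to 1
```

The nontrivial content of the correspondence is the converse direction (that every P-indivisible process arises this way); this sketch only illustrates the easy direction, that unitaries always yield legitimate transition probabilities.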
\textbf{Where the quantum features come from:}
\begin{itemize}
\item
\textbf{The Schrödinger equation} arises because U(t) is differentiable. Any smooth family of unitary matrices can be written as U(t) = exp(-iHt/ℏ) for some Hermitian matrix H.
\item
\textbf{The Born rule} T\_ij = \textbar U\_ij\textbar² is not an additional postulate --- it's the definition of how the stochastic process maps onto the unitary one.
\item
\textbf{The action scale ℏ} enters when converting from the dimensionless unitary to a dimensionful Hamiltonian: Ĥ = iℏ (∂U/∂t) U†. The value of ℏ cannot be determined from the dimensionless transition data alone --- it requires additional physical input from the partition geometry (§5).
\item
\textbf{Bell inequality violations.} Since the transition matrices for composite systems don't factorize, entangled systems naturally produce correlations that violate Bell inequalities, with the maximum violation lying exactly at Tsirelson's bound.
\end{itemize}
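The first bullet's relation Ĥ = iℏ (∂U/∂t) U† can be verified directly. A sketch under our own assumptions (ℏ = 1, a random toy Hamiltonian): differentiating the unitary numerically recovers the generator.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3
A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
H = (A + A.conj().T) / 2            # the Hamiltonian we hope to recover

evals, V = np.linalg.eigh(H)

def U(t):
    """U(t) = exp(-iHt), built from the eigendecomposition (hbar = 1)."""
    return V @ np.diag(np.exp(-1j * evals * t)) @ V.conj().T

# H_hat = i (dU/dt) U^dagger, via a central finite difference at t = 0.4.
t, h = 0.4, 1e-5
dU = (U(t + h) - U(t - h)) / (2 * h)
H_hat = 1j * dU @ U(t).conj().T

print(np.max(np.abs(H_hat - H)))    # small: the generator is recovered
```

With ℏ restored, the same finite difference would return ℏ·H, which is the dimensional point made in the third bullet: the dimensionless transition data fixes H only up to the action scale.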
\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center}
\hypertarget{the-phase-locking-lemma-3.1-continued}{%
\subsection{The Phase-Locking Lemma (§3.1 continued)}\label{the-phase-locking-lemma-3.1-continued}}
A potential objection: the relation T\_ij = \textbar U\_ij\textbar² throws away phase information. Different unitaries could give the same transition probabilities. Does this make the quantum description ambiguous?
The phase-locking lemma shows: no.
\textbf{Setup:} The transition probability at time t is:
\[T_{ij}(t) = \left|\sum_k V_{ik} \, e^{-iE_k t} \, V_{jk}^*\right|^2\]
where V\_ik = ⟨i\textbar k⟩ are the overlaps between the configuration basis and the energy eigenbasis, and E\_k are the energy eigenvalues. Expanding the square:
\[T_{ij}(t) = \sum_{k,l} V_{ik}\, V_{jk}^*\, V_{jl}\, V_{il}^*\; e^{-i(E_k - E_l)t}\]
\textbf{The Fourier trick:} This is a sum of oscillating terms at frequencies ω\_kl = E\_k - E\_l. If all these frequencies are distinct (condition G2: non-degenerate energy gaps), you can extract each coefficient by Fourier transform:
\[a_{ij}^{kl} = V_{ik}\, V_{jk}^*\, V_{jl}\, V_{il}^*\]
\textbf{Extracting the moduli:} Setting i = j: \(a_{ii}^{kl} = |V_{ik}|^2 |V_{il}|^2\). If none of the overlaps are zero (condition G3), all moduli \textbar V\_ik\textbar{} are determined.
\textbf{Extracting the phases:} Write V\_ik = \textbar V\_ik\textbar{} e\^{}\{iφ\_ik\}. The argument of the Fourier coefficient gives:
\[\arg(a_{ij}^{kl}) = (\varphi_{ik} - \varphi_{il}) - (\varphi_{jk} - \varphi_{jl})\]
The only transformation preserving all double differences is φ\_ik → φ\_ik + α\_i + β\_k --- just relabeling (choosing a different phase convention for the basis states). Once you fix these conventions, all remaining phases are uniquely determined.
\textbf{Bottom line:} Continuous-time transition probability data uniquely determines the Hamiltonian up to physically irrelevant relabeling.
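The Fourier extraction can be carried out explicitly. A sketch with assumed toy data (a spectrum chosen so all gaps are distinct, satisfying G2, and a random unitary of overlaps, which generically satisfies G3): fitting the known oscillation frequencies recovers every coefficient a_ij^kl, and hence the moduli and phase double-differences.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 3

# Toy spectrum with all energy gaps distinct (G2); random overlaps V_ik (G3).
E = np.array([0.0, 1.0, 2.618])
V, _ = np.linalg.qr(rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n)))

i, j = 0, 1
ts = np.linspace(0.0, 60.0, 400)

# T_ij(t) = |sum_k V_ik exp(-i E_k t) V_jk^*|^2
T_ij = np.abs(np.exp(-1j * np.outer(ts, E)) @ (V[i] * V[j].conj())) ** 2

# Fit the known frequencies omega_kl = E_k - E_l (k != l) plus a constant;
# with non-degenerate gaps the fit isolates each coefficient a_ij^{kl}.
pairs = [(k, l) for k in range(n) for l in range(n) if k != l]
M = np.column_stack([np.exp(-1j * (E[k] - E[l]) * ts) for k, l in pairs]
                    + [np.ones_like(ts)])
coef, *_ = np.linalg.lstsq(M, T_ij.astype(complex), rcond=None)

# Each recovered coefficient equals V_ik V_jk^* V_jl V_il^* exactly.
for p, (k, l) in enumerate(pairs):
    target = V[i, k] * V[j, k].conj() * V[j, l] * V[i, l].conj()
    assert np.isclose(coef[p], target)
print("Fourier coefficients recover the moduli and phase double-differences")
```

The least-squares fit stands in for the continuous-time Fourier transform of the lemma; since the signal is exactly a finite sum of exponentials at distinct frequencies, the recovery is exact up to floating-point error.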
\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center}
\hypertarget{bell-inequality-violations-3.2}{%
\subsection{Bell Inequality Violations (§3.2)}\label{bell-inequality-violations-3.2}}
This is the question everyone asks: isn't this ruled out by Bell's theorem?
\textbf{What Bell's theorem actually requires.} Bell's theorem proves that no hidden variable theory can reproduce quantum correlations if it satisfies three conditions simultaneously:
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\tightlist
\item
\textbf{Locality:} The outcome at detector A doesn't depend on the setting at detector B
\item
\textbf{Measurement independence:} The experimenters' choices are independent of the hidden variables
\item
\textbf{Factorizability:} P(a,b \textbar{} x,y,λ) = P(a\textbar x,λ) · P(b\textbar y,λ)
\end{enumerate}
The framework satisfies conditions 1 and 2. It violates condition 3.
\textbf{Why factorizability fails.} Factorizability requires that, conditioned on the hidden variable λ, the outcomes at the two detectors are independent --- that λ carries all the relevant information as a snapshot at a single moment.
P-indivisible processes don't work this way. The transition probabilities for a joint system can't be factored:
\[T_{QR} \neq T_Q \otimes T_R\]
Two subsystems that interacted during preparation carry a joint transition matrix that doesn't decompose into a product. This non-factorizability IS entanglement.
\textbf{The Jarrett decomposition.} Factorizability splits into parameter independence (outcome at A doesn't depend on setting at B --- preserved ✓) and outcome independence (outcome at A doesn't depend on outcome at B --- violated ✗). Parameter independence prevents faster-than-light signaling. Fine's theorem shows that theories violating outcome independence while preserving parameter independence are exactly the class consistent with quantum correlations.
\textbf{The maximum violation.} Barandes, Hasan, and Kagan prove the maximum CHSH violation from P-indivisible processes is exactly Tsirelson's bound: 2√2 --- the quantum maximum.
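The quantum side of this bound is a short computation. A sketch (standard textbook quantum mechanics, not framework-specific): evaluating the CHSH combination on a singlet state at the optimal measurement angles gives exactly 2√2.

```python
import numpy as np

# Pauli operators and the singlet state |psi> = (|01> - |10>)/sqrt(2).
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
psi = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)

def corr(a, b):
    """E(a,b) = <psi| A(a) (x) B(b) |psi> for spin axes in the X-Z plane."""
    A = np.cos(a) * Z + np.sin(a) * X
    B = np.cos(b) * Z + np.sin(b) * X
    return np.real(psi.conj() @ np.kron(A, B) @ psi)

# Optimal CHSH settings for the singlet.
a, ap, b, bp = 0.0, np.pi / 2, np.pi / 4, 3 * np.pi / 4
S = corr(a, b) + corr(ap, b) + corr(ap, bp) - corr(a, bp)
print(abs(S))   # 2.828... = 2*sqrt(2), Tsirelson's bound
```

Any locally factorizable model caps |S| at 2; the Barandes-Hasan-Kagan result cited above says P-indivisible processes reach exactly this quantum value and no more.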
\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center}
\hypertarget{the-characterization-theorem-3.3}{%
\subsection{The Characterization Theorem (§3.3)}\label{the-characterization-theorem-3.3}}
It's not enough to show that embedded observation \emph{produces} QM (sufficiency). The paper shows QM \emph{requires} embedded observation under C1--C3 (necessity). The full logical chain:
\begin{itemize}
\tightlist
\item
Barandes proved: QM ⟺ P-indivisibility
\item
Section 2.3 proved: C1--C3 ⟹ P-indivisibility (sufficiency)
\item
Section 3.3 proves: P-indivisibility ⟹ C1--C3 (necessity)
\item
Combined: \textbf{QM ⟺ P-indivisibility ⟺ embedded observation under C1--C3}
\end{itemize}
\textbf{Necessity of C1 (coupling).} If T is a permutation (no coupling), then T\^{}k is also a permutation for all k. The intermediate propagator Λ(k₂,k₁) = T\^{}\{k₂-k₁\} is always a valid stochastic matrix. So the process is P-divisible. Contrapositive: P-indivisibility requires non-trivial coupling.
\textbf{Necessity of C2 (slow bath).} Between coupling events (separated by τ\_S), the hidden sector evolves under its own Hamiltonian. The convergence to equilibrium is:
\[\| e^{\mathcal{L}_H \tau_S} \mu_H(\cdot | x_i) - \mu_{\text{eq}} \|_{\text{TV}} \leq C \, e^{-\Delta \tau_S}\]
In the fast-bath regime (Δ·τ\_S ≫ 1), this is exponentially small. The hidden sector forgets everything between interactions. Each transition is computed against the same equilibrium distribution, so T\^{}(k) = T\^{}k --- a Markov chain, hence P-divisible. Contrapositive: P-indivisibility requires τ\_S ≪ τ\_B.
\textbf{Necessity of C3 (sufficient capacity).} The non-Markovian mutual information is bounded by the hidden sector's size:
\[I(X_{<t} ; X_{>t} \mid X_t) \leq \log_2 m\]
where m = \textbar C\_H\textbar. Proof: the total system is deterministic, so X\_\{\textgreater t\} is a function of (X\_t, H\_t). Given X\_t, the chain X\_\{\textless t\} → H\_t → X\_\{\textgreater t\} is Markov. By the data processing inequality:
\[I(X_{<t} ; X_{>t} \mid X_t) \leq I(X_{<t} ; H_t \mid X_t) \leq H(H_t \mid X_t) \leq H(H_t) \leq \log_2 m\]
Each step uses a standard information-theoretic inequality. The result: if you want K bits of history dependence, you need m ≥ 2\^{}K hidden states.
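The capacity bound can be verified exactly on a toy system. A sketch under our own assumptions (a random deterministic bijection on C_V × C_H with |C_V| = 2 and m = 4, uniform initial measure): the conditional mutual information between past and future visible states never exceeds log₂ m = 2 bits.

```python
import numpy as np
from itertools import product
from collections import defaultdict

rng = np.random.default_rng(3)

# Deterministic bijection phi on C_V x C_H, |C_V| = 2, |C_H| = m = 4.
nV, m = 2, 4
states = list(product(range(nV), range(m)))
perm = rng.permutation(len(states))
phi = {s: states[perm[k]] for k, s in enumerate(states)}

# Exact joint distribution of the visible sequence (x0, x1, x2, x3)
# under the uniform (Liouville) measure on initial conditions.
p = defaultdict(float)
for s in states:
    traj = [s]
    for _ in range(3):
        traj.append(phi[traj[-1]])
    p[tuple(v for v, _ in traj)] += 1 / len(states)

def H(dist):    # Shannon entropy in bits
    return -sum(q * np.log2(q) for q in dist.values() if q > 0)

def marg(keep):  # marginal over chosen time indices of the sequence
    out = defaultdict(float)
    for xs, q in p.items():
        out[tuple(xs[i] for i in keep)] += q
    return out

# I(x0 ; (x2,x3) | x1) = H(x0,x1) + H(x1,x2,x3) - H(x1) - H(x0,...,x3)
I = H(marg([0, 1])) + H(marg([1, 2, 3])) - H(marg([1])) - H(marg([0, 1, 2, 3]))
print(I, "<=", np.log2(m))   # the capacity bound holds
```

Since the whole distribution is enumerated exactly, no sampling error is involved; the bound holds for every bijection by the data-processing argument in the text.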
\textbf{The complete characterization.} For \textbar C\_V\textbar{} ≥ 2, the following are equivalent: (1) the process is mathematically equivalent to unitarily evolving QM, (2) the process is P-indivisible, (3) the process arises from marginalizing a deterministic bijection with C1, C2, C3. This is the biconditional: \textbf{QM ⟺ embedded observation under C1--C3.}
\textbf{What ``unitarily evolving QM'' means precisely.} The characterization theorem delivers a Hilbert space, a Hermitian Hamiltonian, a unitary time evolution, and Born-rule transition probabilities. Additional structures of operational quantum mechanics --- the tensor product decomposition for spatially separated subsystems, state update via the Lüders rule, and multi-time predictions --- are all derived from the construction rather than added as independent postulates. The tensor product for the visible--hidden split comes from the Stinespring route (Appendix A). The tensor product for subsystems within the visible sector (two laboratories, for instance) follows from the spatial Markov property of range-1 dynamics on the coupling graph. Projective measurement corresponds to Bayesian conditioning on the classical substratum. The equivalence between ``classical non-Markovian'' and ``quantum'' is not metaphorical --- the theorem proves these are the same mathematical category.
\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center}
\hypertarget{the-cosmological-application-4}{%
\subsection{The Cosmological Application (§4)}\label{the-cosmological-application-4}}
\hypertarget{the-partition}{%
\subsubsection{4.1 The Partition}\label{the-partition}}
The cosmological horizon is the boundary beyond which no signal traveling at or below c can ever reach the observer. In a universe with a positive cosmological constant (de Sitter space), this horizon exists for every observer and has a definite, finite area:
\[A = \frac{4\pi c^2}{H^2}\]
where H is the Hubble parameter. This implements Axiom 3 naturally: Γ\_V = everything inside the horizon, Γ\_H = everything beyond.
\textbf{Why ℏ is the same for all observers:} Different observers have slightly different horizons. But the gap equation ℏ = c³ε²/(4G) depends only on local geometric quantities (c, G, ε) --- not on the horizon area or the observer's worldline. So all observers derive the same ℏ.
\hypertarget{verification-of-the-conditions}{%
\subsubsection{4.2 Verification of the Conditions}\label{verification-of-the-conditions}}
\textbf{C1 (coupling).} In general relativity's ADM formulation, the bulk Hamiltonian is a sum of constraints that vanish on-shell --- meaning the ``real'' dynamics happens at the boundary. The Hamiltonian and momentum constraints correlate interior and exterior data. This is stronger than just H\_int ≠ 0: the constraints enforce correlations that persist on all timescales.
\textbf{C2 (slow bath).} τ\_B \textasciitilde{} 1/H \textasciitilde{} 10¹⁷ seconds (the Hubble time). τ\_S \textasciitilde{} 10⁻¹⁵ seconds (a typical atomic process). The ratio: τ\_S/τ\_B \textasciitilde{} 10⁻³².
\textbf{C3 (sufficient capacity).} The hidden sector has A/ε² \textasciitilde{} 10¹²² modes (the de Sitter entropy). The visible sector has \textasciitilde{} 10⁸⁰ baryons. No experiment comes close to saturating the hidden sector.
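The C2 and C3 numbers are easy to reproduce. A sketch with assumed standard values (H₀ ≈ 70 km/s/Mpc and SI constants, which the section itself does not spell out):

```python
import numpy as np

# Assumed SI values; H0 ~ 70 km/s/Mpc is our input, not the paper's.
c    = 2.998e8          # m/s
G    = 6.674e-11        # m^3 kg^-1 s^-2
hbar = 1.055e-34        # J s
H0   = 70e3 / 3.086e22  # s^-1

l_p = np.sqrt(hbar * G / c**3)      # Planck length
eps = 2 * l_p                       # discreteness scale eps = 2 l_p
A   = 4 * np.pi * c**2 / H0**2      # cosmological horizon area

tau_ratio = 1e-15 / (1 / H0)        # atomic timescale / Hubble time
modes = A / eps**2                  # hidden-sector mode count

print(f"tau_S / tau_B ~ {tau_ratio:.1e}")   # ~10^-32
print(f"A / eps^2     ~ {modes:.1e}")       # ~10^122
```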
\hypertarget{application}{%
\subsubsection{4.3 Application}\label{application}}
With the cosmological horizon satisfying all axioms and conditions, the characterization theorem applies. The observer's reduced description is P-indivisible and therefore equivalent to unitary quantum mechanics. The value of ℏ is determined by the partition geometry --- which is what Section 5 derives.
\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center}
\hypertarget{where-plancks-constant-comes-from-5}{%
\subsection{Where Planck's Constant Comes From (§5)}\label{where-plancks-constant-comes-from-5}}
Partition-relativity proved that the emergent quantum description is completely and uniquely determined by the partition. This means \(\hbar\) cannot be a free parameter --- it must be fixed by the geometry of the boundary.
\hypertarget{the-classical-horizon-temperature-5.1}{%
\subsubsection{The Classical Horizon Temperature (§5.1)}\label{the-classical-horizon-temperature-5.1}}
\textbf{Starting point: Jacobson's identity.} Jacobson (1995) showed that Einstein's field equations follow from applying the first law of thermodynamics δQ = TdS to local causal horizons:
\[dE = \frac{c^2 \kappa}{8\pi G} \, dA\]
where κ is the surface gravity and dA is the change in horizon area. This is a classical gravitational identity --- no quantum mechanics involved.
The entropy density is η = 1/ε² --- one coupled mode per minimal cell of area ε². This is not an assumption about the number of states per cell: ε is defined as the minimal distinguishable scale (Axiom 2), so each cell of area ε² contributes exactly one boundary mode that couples across the partition. The number of internal states per mode (the alphabet size q) is a gauge freedom with no observable consequences (Fundamental, §4). So dS = dA/ε². From dE = TdS:
\[k_B T_{\text{cl}} = \frac{c^2 \epsilon^2 \kappa}{8\pi G}\]
For the de Sitter horizon, κ = cH:
\[k_B T_{\text{cl}} = \frac{c^3 \epsilon^2 H}{8\pi G}\]
\textbf{Critical point: no ℏ appears anywhere.} This temperature is computed from purely classical quantities.
\hypertarget{the-four-step-derivation-5.2}{%
\subsubsection{The Four-Step Derivation (§5.2)}\label{the-four-step-derivation-5.2}}
\textbf{Step 1 (Uniqueness).} Partition-relativity guarantees ℏ is determined by the partition geometry. It's not a free parameter.
\textbf{Step 2 (Boundary-only dependence).} A substantial lemma showing that ℏ depends only on the boundary modes, not on the deep interior of the hidden sector. Decompose the hidden sector into boundary modes C\_B (near the horizon) and deep modes C\_D (far from the horizon):
\[\mathcal{C}_H = \mathcal{C}_B \times \mathcal{C}_D\]
The proof has three parts: (i) spatial locality means V talks to B, and B talks to D, but V doesn't talk directly to D; (ii) on timescales t ≪ τ\_B, the deep sector barely moves (displacement \textasciitilde{} 10⁻³²); (iii) because the deep modes are frozen, the sum over them produces a trivial factor:
\[T_{ij}(t) = \underbrace{\frac{1}{|\mathcal{C}_B|} \sum_{b} \delta_{x_j}[\pi_V(\varphi_t^{VB}(x_i, b))]}_{T^{(B)}_{ij}(t)} + \mathcal{O}(t/\tau_B)\]
The transition probabilities depend only on boundary dynamics. Since ℏ is determined by transition probabilities, ℏ depends only on boundary quantities: c, G, and ε. This excludes dependence on H --- if ℏ depended on H, observers at different cosmic epochs would have different quantum mechanics.
\textbf{Step 3 (Dimensional analysis).} Step 2 excludes volumetric (deep-sector) quantities, leaving boundary quantities. The boundary carries both \emph{local} geometric data (ε, κ, and the constants c, G) and a \emph{global} quantity: the total area A, which forms the dimensionless ratio A/ε² = S\_dS. If ℏ depended on S\_dS, it would be observer-dependent --- different observers have different horizon areas --- contradicting the universality of the emergent action scale. This excludes A. The surface gravity κ is excluded by two independent arguments: (i) κ differs between horizon types (cosmological vs.~black hole), but ℏ is universal --- a laboratory experiment measures the same ℏ regardless of which horizon defines the partition; (ii) for the cosmological horizon, κ = cH is time-dependent (Ḣ ≠ 0), but ℏ is observed to be constant on laboratory timescales. The unique combination of c, G, and ε with dimensions of action:
\[[\hbar] = \frac{[c]^3 \, [\epsilon]^2}{[G]} = \text{kg·m}^2/\text{s} \quad \checkmark\]
So ℏ = β c³ε²/G, where β is a dimensionless constant that dimensional analysis alone can't fix.
\textbf{Step 4 (Thermal self-consistency).} We have two independent descriptions of the horizon temperature:
\emph{Classical:} T\_cl = c²ε²κ/(8πGk\_B) --- computed from the geometric substratum, no ℏ.
\emph{Quantum:} The emergent QFT lives on a spacetime with a bifurcate Killing horizon. Standard QFT on curved spacetime gives a KMS thermal state at temperature:
\[T_Q = \frac{\hbar \kappa}{2\pi c k_B}\]
The two temperatures are computed independently --- \(T_{\text{cl}}\) from the classical substratum alone (no QM), \(T_Q\) from the emergent QFT alone (no classical substratum details) --- but they describe the same physical degrees of freedom: the boundary modes across which the partition is defined. Since the quantum description is derived from the classical one (Part I), and the derivation is exact at the boundary, the two descriptions cannot assign contradictory temperatures. Consistency requires \(T_{\text{cl}} = T_Q\):
\[\frac{c^2 \epsilon^2 \kappa}{8\pi G} = \frac{\hbar \kappa}{2\pi c}\]
The surface gravity κ cancels from both sides. Solving:
\[\boxed{\hbar = \frac{c^3 \epsilon^2}{4G}}\]
This fixes β = 1/4.
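Because the cancellation of κ is structural, the matching can be checked with arbitrary positive values. A sketch: equating T_cl and T_Q and solving for ℏ returns c³ε²/(4G) no matter what κ (or k_B) is.

```python
import numpy as np

rng = np.random.default_rng(4)

for _ in range(5):
    c, G, eps, kappa, kB = rng.uniform(0.5, 2.0, size=5)
    # Classical horizon temperature: no hbar anywhere.
    T_cl = c**2 * eps**2 * kappa / (8 * np.pi * G * kB)
    # Set the KMS temperature T_Q = hbar*kappa/(2*pi*c*kB) equal to T_cl
    # and solve for hbar; kappa and kB cancel.
    hbar = T_cl * 2 * np.pi * c * kB / kappa
    assert np.isclose(hbar, c**3 * eps**2 / (4 * G))

print("thermal matching gives hbar = c^3 eps^2 / (4G), independent of kappa")
```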
\textbf{Why this is not circular.} The KMS temperature T\_Q contains ℏ as an \emph{unknown}. The classical temperature T\_cl contains no ℏ at all. The non-circularity is structural: Part I establishes that a QFT emerges with \emph{some} action scale ℏ; §5 determines \emph{which} ℏ, using the independent classical temperature that Part I neither requires nor produces. If T\_cl had depended on the deep hidden-sector volume (it doesn't --- the boundary-only lemma excludes it), or if T\_Q had been state-dependent (it isn't --- the KMS temperature is purely kinematic), the matching wouldn't work. That neither pathology obtains makes the gap equation a genuine determination.
\textbf{Predictive content.} The gap equation relates one free parameter (ε) to one output (ℏ). The predictive content lies not in the relation alone but in its consequences: the specific relationship ℏ = c³ε²/(4G) --- rather than any other function of c, G, ε --- produces the Bekenstein-Hawking formula with the exact factor 1/4, the CC dissolution with S\_dS as the compression ratio, the RVM parameter ν\_OI, and the GW echo timescale. Any alternative ℏ(ε) would fail at least one of these checks.
\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center}
\hypertarget{the-d-gauge-completeness-theorem-5.3}{%
\subsection{The D-Gauge Completeness Theorem (§5.3)}\label{the-d-gauge-completeness-theorem-5.3}}
\textbf{The problem.} The relation T\_ij = \textbar U\_ij\textbar² discards phase information. Could different Hamiltonians give the same transition probabilities?
\textbf{The theorem.} If \textbar U'\_ij(t)\textbar² = \textbar U\_ij(t)\textbar² for all i, j, t, then H' = DHD† where D is a diagonal unitary (a physically meaningless relabeling of basis phases).
\textbf{The proof in three steps:}
\emph{Step 1 (eigenvalue recovery):} Fourier analysis of T\_ij(t) extracts the energy differences E\_k - E\_l. Non-degeneracy of energy gaps means E'\_k = E\_k + E₀ (same eigenvalues up to a global shift).
\emph{Step 2 (modulus recovery):} The diagonal Fourier coefficients give \textbar V\_ik\textbar² directly. So \textbar V'\_ik\textbar{} = \textbar V\_ik\textbar.
\emph{Step 3 (phase structure):} Writing V'\_ik = V\_ik e\^{}\{iδ\_ik\} and requiring all Fourier coefficients to match gives the double-difference condition:
\[\delta_{ik} - \delta_{il} - \delta_{jk} + \delta_{jl} = 0 \pmod{2\pi}\]
The general solution: δ\_ik = α\_i + β\_k --- a sum of a row phase and a column phase. This is just basis rephasing.
\textbf{The dimensional obstruction.} The unitary U(t) is dimensionless. The Hamiltonian Ĥ = iℏ ∂\_tU · U† contains ℏ, which is dimensionful. No amount of dimensionless data can fix a dimensionful constant. This is why Step 4 (thermal matching) is not just a convenient check but the \emph{mathematically obligatory} step: it's the only place where dimensionful physical input enters the framework.
\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center}
\hypertarget{the-discreteness-scale-6}{%
\subsection{The Discreteness Scale (§6)}\label{the-discreteness-scale-6}}
\hypertarget{what-ux3b5-2-l_p-means}{%
\subsubsection{What ε = 2 l\_p means}\label{what-ux3b5-2-l_p-means}}
Rearranging the gap equation ℏ = c³ε²/(4G):
\[\epsilon^2 = \frac{4\hbar G}{c^3} = 4 \, l_p^2\]
where l\_p = √(ℏG/c³) is the Planck length. Therefore ε = 2 l\_p.
\textbf{What this is:} The framework has one free geometric parameter --- the discreteness scale ε. The gap equation fixes its relationship to ℏ. Given the measured value of ℏ, the discreteness scale is determined: ε = 2 l\_p ≈ 3.2 × 10⁻³⁵ meters.
\textbf{What this is NOT:} This is not an independent prediction of ε. The framework contains one free parameter and one equation relating it to known constants. The self-consistency condition pins ε to the Planck regime but doesn't predict a number that wasn't already implicitly known.
\hypertarget{the-bekenstein-hawking-entropy-why-the-14-factor-is-derived}{%
\subsubsection{The Bekenstein-Hawking Entropy --- Why the 1/4 factor is derived}\label{the-bekenstein-hawking-entropy-why-the-14-factor-is-derived}}
The number of independent modes on the cosmological horizon is:
\[N_{\text{modes}} = \frac{A}{\epsilon^2} = \frac{A}{4 \, l_p^2}\]
This is the Bekenstein-Hawking entropy:
\[S_{\text{BH}} = \frac{A}{4 \, l_p^2}\]
The factor of 1/4 --- which Bekenstein and Hawking introduced as a proportionality constant in 1973 --- is here derived: each minimal cell of area ε² = 4 l\_p² contributes one unit of entropy. The 4 in the denominator comes from ε = 2 l\_p.
\textbf{Why this is significant:} The 1/4 factor has been a mystery for 50 years. Most frameworks either assume it or derive it within constructions specifically designed to produce it. Here it follows from the gap equation with no additional input.
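As a sanity check (ours, not the paper's): the same cell-counting applied to a familiar benchmark, a solar-mass black hole, reproduces the standard Bekenstein-Hawking entropy of about 10⁷⁷.

```python
import numpy as np

# Assumed SI values; a solar-mass black hole as a benchmark case.
c, G, hbar = 2.998e8, 6.674e-11, 1.055e-34
M_sun = 1.989e30

l_p = np.sqrt(hbar * G / c**3)
r_s = 2 * G * M_sun / c**2          # Schwarzschild radius, ~3 km
A   = 4 * np.pi * r_s**2            # horizon area

# One entropy unit per minimal cell of area eps^2 = 4 l_p^2.
S_BH = A / (4 * l_p**2)
print(f"S_BH ~ {S_BH:.1e}")         # ~1e77, the standard value
```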
\hypertarget{self-consistency-bounds-on-ux3b5}{%
\subsubsection{Self-consistency bounds on ε}\label{self-consistency-bounds-on-ux3b5}}
\textbf{If ε² ≪ l\_p²:} Sub-Planckian cells would need further subdivision, creating a second trace-out within the first. This would make ℏ multi-valued --- contradicting the observed universality of ℏ.
\textbf{If ε² ≫ l\_p²:} Super-Planckian cells would be too coarse. The emergent quantum description would assign distinct quantum states to configurations that are physically identical.
The self-consistency condition ε = 2 l\_p is the unique value where neither pathology occurs.
\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center}
\hypertarget{the-two-levels}{%
\subsection{The Two Levels}\label{the-two-levels}}
Now the cosmological constant problem dissolves. But first, a crucial link: how does finite-dimensional quantum mechanics become quantum \emph{field} theory --- with independent modes, each carrying zero-point energy?
The answer is spatial locality. The classical substratum has it --- neighboring cells interact, distant cells don't. The paper proves that the emergent quantum description inherits it. The argument is direct: the Barandes correspondence maps each classical configuration \((x_1, x_2, \ldots, x_N)\) to a quantum basis state \(|x_1, x_2, \ldots, x_N\rangle\). That's already a tensor product. The only question is whether the emergent Hamiltonian respects the structure. And it must, because if two configurations differ only at sites that aren't neighbors, the classical dynamics can't connect them in an infinitesimal time step --- so the transition probability between them is zero --- so the corresponding quantum Hamiltonian matrix element is zero. The emergent Hamiltonian couples only neighboring cells, exactly like the classical one. You get a lattice quantum field theory with a built-in ultraviolet cutoff at \(\epsilon = 2\,l_p\).
This is what makes the CC dissolution work. The QFT has independent modes. Each mode gets a zero-point energy. The sum diverges. And the ``worst prediction in physics'' follows --- but only within the emergent description.
The critical insight is that general relativity and quantum field theory are not competing answers to the same question. They are answers to \emph{different questions}, asked at different levels of the same reality.
\textbf{Level 1: The classical substratum.} Spacetime geometry is part of the fundamental layer. The metric tensor evolves via Einstein's field equations. The stress-energy tensor that sources gravity is the classical stress-energy of the total microstate. At this level, the vacuum energy density sits at the critical scale: \(\rho \sim H^2/G \sim 10^{-9}\) J/m³. No zero-point energy. No discrepancy.
\textbf{Level 2: The emergent quantum description.} For an embedded observer tracing out the hidden sector, the mandatory quantum description assigns a zero-point energy of \(\frac{1}{2}\hbar\omega\) per mode. Sum to the Planck cutoff and you get \(\rho_{\text{QFT}} \sim 10^{113}\) J/m³. This number is real \emph{within the quantum description} --- it reflects the magnitude of the trace-out noise --- but it is not a source term in Einstein's equations, because those equations operate at the classical level, which is logically prior to the quantum description.
This ordering is not a choice but is forced by three independent requirements. First, the partition must be definite --- not in superposition --- for the trace-out to be well-defined; a partition in superposition would yield an incoherent mixture of inequivalent quantum theories. Second, the partition is defined by the causal structure, which is determined by null geodesics of the metric; if the metric were derived from QM, the derivation would be circular (QM → metric → causal structure → partition → QM). Third, \(\hbar\) is determined by the boundary geometry; if the geometry were itself quantum-mechanical, \(\hbar\) would depend on a quantum state, contradicting its observed universality.
The \(10^{122}\) ratio between the two answers is not a discrepancy. It equals \(S_{\text{dS}}\) --- the Bekenstein-Hawking entropy of the cosmological horizon --- which is the number of hidden-sector degrees of freedom the trace-out compresses into the emergent quantum state. The ``worst prediction in physics'' is the information compression ratio of the observer's blind spot. A category error, not a fine-tuning failure.
This is not a prediction awaiting future data. The observed vacuum energy has been measured since 1998, and it sits exactly at the classical geometric scale --- the value the framework expects. Meanwhile, every attempt to find a cancellation mechanism within quantum-first frameworks has come up empty. The framework explains why: there is nothing to cancel.
This track record is itself evidence for the ordering. The question of whether geometry is prior to quantum mechanics or quantum mechanics is prior to geometry is usually treated as a philosophical preference. But it has an empirical signature sitting in plain sight: one ordering produces the worst prediction in physics and has no solution; the other predicts the observed value and explains the discrepancy as a derived quantity. That doesn't prove the ordering is correct --- the gravitational wave echo prediction provides a more direct test --- but it's existing evidence, not a future hope.
\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center}
\hypertarget{what-quantum-weirdness-looks-like-from-here}{%
\subsection{What Quantum Weirdness Looks Like From Here}\label{what-quantum-weirdness-looks-like-from-here}}
If quantum mechanics is an emergent description forced on embedded observers, the standard quantum puzzles acquire straightforward readings.
\textbf{The double-slit experiment.} The particle goes through one slit. The interference pattern arises because the transition probabilities are computed by marginalizing over the hidden sector, and the hidden sector includes the field configuration near both slits. Opening or closing the second slit changes the boundary conditions, which changes the marginalization, which changes the pattern. The ``wave-like'' behavior is the hidden sector's influence shifting when the geometry changes.
\textbf{Entanglement.} Two particles prepared together share a joint transition matrix inherited from the trace-out. The correlations are encoded in the structure of the dynamics itself, not in a hidden variable you could integrate over. This is why Bell inequality violations occur: the standard factorization assumption fails for indivisible processes. The framework reproduces quantum correlations exactly up to Tsirelson's bound, without faster-than-light signaling and without superdeterminism.
\textbf{The measurement problem.} Measurement produces definite outcomes through the indivisible dynamics. When Wigner can't access his Friend's lab, he traces out its internal degrees of freedom and assigns a superposition. The superposition reflects Wigner's epistemic situation --- what \emph{he} can infer --- not the Friend's physical state. Branching in the Many-Worlds sense is a feature of the compressed description, not of the underlying reality.
\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center}
\hypertarget{predictions}{%
\subsection{Predictions}\label{predictions}}
The framework is not just a reinterpretation. It makes specific, falsifiable predictions --- and, crucially, it makes predictions in domains it was not designed to address.
\textbf{Dark energy evolution.} Because the hidden sector's dimensionality changes as the Hubble parameter \(H\) evolves (the horizon area is \(A = 4\pi c^2/H^2\)), the emergent vacuum energy inherits a dependence on \(H\). The predicted form matches the Running Vacuum Model: \(\Lambda_{\text{eff}} = \Lambda_0 + \nu H^2\). The coefficient \(\nu\) is computed from the spectral structure of the horizon: the trace-out compression noise is distributed over \(\mathcal{N} = \ln(c/\epsilon H)\) spectral decades. The spectral uniformity (\(\alpha = 0\)) is not an assumption but follows from the lattice structure: each \(\epsilon^2\) boundary cell couples equally to all frequencies (time-translation invariance), each mode carries \(\mathcal{O}(1)\) entropy bit (quantum regime), and the mode count is geometric (\(A/\epsilon^2\)), not field-theoretic. This gives \(\nu_{\text{OI}} = (2.45 \pm 0.04) \times 10^{-3}\) with total uncertainty \(\pm 1.8\%\) --- a precision prediction, not an order-of-magnitude estimate. The independently testable ratio \(\nu/\Omega_\Lambda = 0.00358 \pm 0.00003\) depends only on \(\ln(c/\epsilon H_0)\) and is insensitive to the measured values of both \(H_0\) and \(\Omega_\Lambda\). DESI's 2024--2025 data releases report evidence for evolving dark energy at \(2.8\sigma\)--\(4.2\sigma\), with RVM fits finding \(\nu \sim \mathcal{O}(10^{-3})\) --- consistent with the prediction.
\textbf{Gravitational wave echoes.} At a proper distance of about one discreteness scale (\(\epsilon = 2\,l_p\)) outside a black hole horizon, an infalling mode's wavelength hits the discreteness floor and must scatter. For a 62.7 solar-mass remnant (like GW250114), the predicted echo delay is \(\Delta t = (r_+/c)\ln(r_+/2l_p) = 48.9\) ms, where \(r_+\) is the Kerr outer horizon radius. The echo waveform is a train of damped sinusoids at the QNM frequency (\(f_{\text{QNM}} = 271\) Hz for GW250114), with alternating phase \((-1)^n\) and amplitude \(A_n = \mathcal{R} \times \Gamma \times (1-\Gamma)^{n-1}\), where \(\Gamma \approx 0.45\) is the greybody factor and \(\mathcal{R} \approx 1\) is the wall reflectivity (computed from boundary mode analysis: the \(l = 2\) mode has no dissipation channel because angular momentum transfer to \(l \neq 2\) modes is suppressed by \((\epsilon/\lambda)^2 \sim 10^{-82}\)). The first echo amplitude is \(\sim 45\%\) of the ringdown. Model-independent searches on GW250114 (total SNR \(\sim 80\)) find no echoes, but this is fully consistent with the prediction: the exact fitting factor between the OI echo template and the best long-lived QNM template is only \(\text{FF} = 0.19\) --- the search recovers less than \(20\%\) of the optimal SNR. The mismatch arises because the OI echo is silent for \(\sim 49\) ms between bursts and alternates phase \((-1)^n\), features no long-lived sinusoid can match. For GW250114: matched-filter SNR \(\sim 15\) (detectable) vs long-lived QNM search SNR \(\sim 2.8\) (below threshold). A dedicated matched-filter search using the one-parameter OI comb template on GW250114 public data would provide the definitive test (Main, §8.2).
\textbf{The dark sector as corroboration.} The trace-out that produces quantum mechanics has a gravitational consequence that the paper did not set out to find. The boundary entropy --- the \(S_{\text{dS}}\) modes traced out to produce the quantum description --- has thermal energy that, distributed over the Hubble volume, equals the critical density exactly. A crucial subtlety: this thermal energy is computed entirely from pre-trace-out (classical) quantities --- the number of boundary modes, the classical horizon temperature, and the Hubble volume --- with no reference to \(\hbar\) or the emergent quantum description. This is what distinguishes the boundary entropy's gravitational contribution from the QFT zero-point energy (\(\rho_{\text{QFT}} \sim 10^{113}\) J/m³), which exists only \emph{after} the trace-out and is an artifact of the emergent description. The framework denies that the zero-point energy gravitates (this is the CC dissolution of Part II); the boundary entropy's classical thermal energy \emph{does} gravitate, at the scale \(\rho_{\text{crit}} \sim 10^{-9}\) J/m³.
The paper proves that this entropy has no operator in the emergent QFT. The baryonic sector --- what QFT can account for --- is \textasciitilde5\% of \(\rho_{\text{crit}}\). The remaining \textasciitilde95\% is the boundary entropy: gravitationally active, invisible to the emergent description, and persistent through P-indivisibility (condition C2). This matches the observed composition of the universe, in which \textasciitilde95\% of the gravitational content has no source in particle physics. The uniform component corresponds to dark energy (handled by Part II's CC dissolution). The structured component --- dark matter --- arises from matter-induced entropy displacement: baryonic matter displaces boundary entropy via the Clausius relation, the Jacobson mechanism converts the entropy gradient into curvature, yielding the MOND acceleration scale \(a_0 = cH/6 \approx 1.2 \times 10^{-10}\) m/s² and the baryonic Tully-Fisher relation \(v^4 = GM_B \cdot cH/6\) --- both parameter-free (Main, §8.4). That axioms designed to derive quantum mechanics also account for the dark sector's total budget and internal structure is independent corroboration that observational incompleteness is capturing real structure.
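The two parameter-free relations can be evaluated numerically. In this sketch the baryonic mass is an illustrative Milky-Way-scale value, not a quoted one:

```python
import math

G = 6.674e-11           # SI
c = 2.998e8
M_sun = 1.989e30
H0 = 67.4e3 / 3.086e22  # s^-1

# MOND acceleration scale, parameter-free: a0 = c H / 6
a0 = c * H0 / 6
print(f"a0 = {a0:.2e} m/s^2")   # ~1.1e-10

# Baryonic Tully-Fisher: v^4 = G * M_B * (c H / 6)
M_B = 6e10 * M_sun   # ILLUSTRATIVE Milky-Way-scale baryonic mass
v = (G * M_B * a0) ** 0.25
print(f"v_flat = {v / 1e3:.0f} km/s")
```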
\textbf{High-redshift dark matter.} Because \(a_0(z) = cH(z)/6\) and \(H(z)\) increases with redshift, the dark matter phenomenology evolves: \(a_0\) is \(1.8\times\) larger at \(z = 1\), \(3.0\times\) at \(z = 2\), and \(4.6\times\) at \(z = 3\). This shrinks the MOND crossover radius, making galaxies more baryon-dominated at high redshift --- their rotation curves should decline beyond a smaller radius. Genzel et al.~(Nature, 2017) report exactly this: stacked rotation curves at \(z = 0.9\)--\(2.4\) show declining outer velocities at \(> 3\sigma\) significance relative to local spirals. The baryonic Tully-Fisher relation also evolves: \(v_{\text{flat}} \propto H(z)^{1/4}\), predicting 32\% higher velocities at \(z = 2\) at fixed baryonic mass. McGaugh et al.~(2024) report no evolution in the \emph{stellar} mass TF to \(z \sim 2.5\) --- but this is actually \emph{predicted} by the framework, because gas fractions at high \(z\) are large (\(f_{\text{gas}} \sim 50\)--\(70\%\)) and the gas mass omitted from \(M_*\) almost exactly compensates the dynamical shift (the gas fractions required for exact cancellation --- 44\% at \(z = 1\), 67\% at \(z = 2\) --- match the observed values). The definitive test is the \emph{baryonic} TF at \(z > 1\) with reliable ALMA gas masses. Particle dark matter (NFW halos) predicts flat rotation curves at all redshifts --- the observed decline is unexpected in ΛCDM but natural in the OI framework.
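The quoted enhancement factors follow from any standard background expansion history. A sketch assuming flat \(\Lambda\)CDM with \(\Omega_m = 0.315\):

```python
import math

def H_ratio(z, Om=0.315, Ol=0.685):
    """H(z)/H0 for flat LCDM (assumed background expansion history)."""
    return math.sqrt(Om * (1 + z) ** 3 + Ol)

# a0(z) = c H(z) / 6 grows in proportion to H(z)
for z in (1, 2, 3):
    print(f"z = {z}: a0 enhanced by {H_ratio(z):.1f}x")   # 1.8x, 3.0x, 4.6x

# BTFR velocity scales as H(z)^(1/4): ~32% higher at z = 2
print(f"v_flat enhancement at z = 2: {H_ratio(2) ** 0.25:.2f}")
```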
\textbf{Cluster scales and the Bullet Cluster.} Galaxy clusters --- the hardest test for any MOND-like theory --- are addressed by the interpolation between the Newtonian and deep-MOND regimes. The simple interpolation function \(g_{\text{total}} = g_B \cdot \nu(g_B/a_0)\) with \(\nu(y) = (1 + \sqrt{1+4/y})/2\) matches the Coma cluster to \(< 1\%\) in velocity (1260 vs 1270 km/s) and reduces the standard MOND mass shortfall from a factor \(\sim 2\) to \(\sim 1.0\)--\(1.5\) for other rich clusters --- with the residual attributable to undetected warm-hot intergalactic medium (WHIM). This interpolation is indistinguishable from the deep-MOND limit at galaxy scales (differences \(< 0.07\) dex, well within the observed RAR scatter). The Bullet Cluster --- where gravitational lensing peaks at the galaxy positions rather than the dominant X-ray gas --- is explained by the non-local character of entropy displacement: the boundary entropy relaxation time is \(\sim H^{-1} \approx 14\) Gyr, while the collision crossing time is \(\sim 0.15\) Gyr. The dark gravity is frozen at the pre-collision configuration (centered on the galaxies, which defined the potential wells for gigayears), not tracking the recently displaced gas. This reproduces the observed lensing morphology and makes a testable prediction: very old post-collision systems should show gradual relaxation of the dark gravity toward the gas distribution (Main, §8.4). The same thermodynamic averaging explains why the entropy displacement reproduces the CMB acoustic peak pattern: oscillating perturbations have zero net entropy displacement per cycle (the Clausius relation involves \emph{net} heat transfer), so only the growing mode is tracked --- providing non-oscillating potential wells identical to CDM in the linear regime.
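The interpolation function and its two limits can be verified directly; the numerical value of \(a_0\) below is the \(cH_0/6\) estimate:

```python
import math

a0 = 1.1e-10   # m/s^2, from a0 = c H0 / 6

def nu_interp(y):
    """Simple interpolation function: nu(y) = (1 + sqrt(1 + 4/y)) / 2."""
    return (1 + math.sqrt(1 + 4 / y)) / 2

def g_total(g_B):
    """Total acceleration from the baryonic one: g = g_B * nu(g_B / a0)."""
    return g_B * nu_interp(g_B / a0)

# Newtonian limit (g_B >> a0): g_total -> g_B
print(g_total(1e-6) / 1e-6)                     # ~1.00
# Deep-MOND limit (g_B << a0): g_total -> sqrt(g_B * a0)
print(g_total(1e-13) / math.sqrt(1e-13 * a0))   # ~1.0, small correction
```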
\textbf{Gauge coupling prediction.} The companion paper (Fundamental, §9) extends the derivation chain to the gauge coupling strengths. The fermion-induced coupling gives \(1/\alpha_0 = 23.25\) at the Planck scale --- a universal value determined by the lattice structure (\(N_f = 6\) flavors, \(T(R) = 1/2\)), not by the specific bijection \(\varphi\). Combined with non-perturbative gauge self-energy corrections (from pure-gauge Monte Carlo at the induced coupling) and Standard Model renormalization group running, this reproduces all three SM gauge couplings at \(M_Z\): \(1/\alpha_1 = 59.00\), \(1/\alpha_2 = 29.57\), \(1/\alpha_3 = 8.47\) --- matching the observed values to \(< 0.1\%\).
No competing framework produces all of these from a single set of axioms. The parallel with the cosmological constant dissolution is exact: the \(10^{122}\) discrepancy is the \emph{information compression ratio} of the trace-out, and the \textasciitilde95\% dark sector is the \emph{gravitational occlusion fraction}. Together, they account for the two largest anomalies in modern cosmology as two aspects of a single phenomenon: the cost of observing the universe from within.
\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center}
\hypertarget{philosophical-lineage}{%
\subsection{Philosophical Lineage}\label{philosophical-lineage}}
The paper is a physics paper, but its core claims --- that observers face irreducible limits, that two irreconcilable descriptions can both be correct, that incompleteness is a structural feature rather than a deficiency --- sit at the intersection of some of the oldest debates in philosophy. A systematic mapping against the major traditions reveals a striking pattern: broad support for most of the framework, and near-universal resistance to one specific thesis.
\hypertarget{the-seven-claims}{%
\subsubsection{The seven claims}\label{the-seven-claims}}
The framework rests on seven implicit philosophical commitments:
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\tightlist
\item
\textbf{Embedded observers face irreducible limits.} No observer inside a system can access the complete state.
\item
\textbf{QM and GR are both correct} within their domains.
\item
\textbf{The hidden sector is permanently inaccessible} --- not due to technological limitations, but structural ones.
\item
\textbf{The underlying reality is local and definite.} Indeterminacy belongs to the observer's description, not to the world.
\item
\textbf{Incompleteness is structural, not deficient.} The limitation arises because the observer is made of the same elements as the universe it is trying to describe --- a physical form of self-reference. This is analogous to Gödel's incompleteness theorem, not to ignorance that better instruments could cure.
\item
\textbf{The description is observer-relative.} Different partitions yield different emergent physics.
\item
\textbf{The two descriptions are irreconcilable} --- not because one is wrong, but because they are complementary projections of a single reality that no embedded observer can access directly.
\end{enumerate}
Claims 1, 5, 6, and 7 enjoy broad philosophical support across nearly every tradition examined. Claim 4 --- that the underlying reality is definite --- is the paper's most philosophically isolated thesis.
\hypertarget{the-guxf6del-connection}{%
\subsubsection{The Gödel connection}\label{the-guxf6del-connection}}
The analogy between this framework and Gödel's incompleteness theorem is not merely metaphorical. Gödel proved that a formal system rich enough to encode arithmetic cannot prove all true statements about itself from within --- the limitation arises because the system is self-referential, capable of constructing sentences that refer to its own provability. The observer in this framework faces a structurally parallel situation: the reason a cosmological horizon exists at all is that the observer is a physical subsystem made of the same fields, obeying the same speed-of-light constraint, as the universe it is trying to describe. An observer not made of the universe's own elements would not face a horizon and would not be forced into a quantum description. The incompleteness is a consequence of self-inclusion.
The connection can be made precise through Wolpert's limits of inference {[}19{]}, which the paper cites. Wolpert proved, using diagonal self-referential arguments directly descended from Gödel's, that any inference device embedded in the universe it is trying to predict faces fundamental limits --- not because of noise or finite resources, but because complete self-prediction is logically impossible. The present framework provides the concrete physical mechanism by which Wolpert's logical limitation manifests: the causal partition enforces the trace-out, the trace-out produces P-indivisibility, and P-indivisibility is quantum mechanics. The \(10^{122}\) compression ratio --- the Bekenstein-Hawking entropy of the cosmological horizon --- is the quantitative measure of how much information self-inclusion forces the observer to lose.
The key difference from Gödel is in the \emph{form} of self-reference. Gödel's is syntactic: the system encodes a sentence that says ``I am not provable.'' The framework's is physical: the observer is made of the described, so complete description would require the observer to fully encode its own state plus everything causally connected to it, which the causal structure forbids. Wolpert sits between the two, using Gödelian logic applied to physical systems. Together they form a chain: Gödel (formal systems cannot completely describe themselves) → Wolpert (physical inference devices cannot completely predict the systems they inhabit) → this framework (the specific mechanism is the causal partition, and the specific cost is quantum mechanics).
Hofstadter's \emph{Gödel, Escher, Bach} argues that self-referential systems produce genuinely emergent higher levels --- ``strange loops'' where a system's hierarchical levels fold back on themselves, generating properties invisible at the lower level. This framework agrees: quantum mechanics is the real, irreducible description available to any embedded observer, just as Hofstadter's ``I'' is real even though it emerges from neurons. The strange loop is intact --- the observer, made of the universe's own elements, generates through self-inclusion an emergent description that governs everything the observer can access. The disagreement with Hofstadter is narrow but consequential: he argues that once the emergent level is established, the substrate beneath it is explanatorily inert. The cosmological constant problem suggests otherwise. The \(10^{122}\) discrepancy exists precisely because the emergent quantum description and the classical substrate assign different vacuum energies, and only the substrate's value matches observation. The loop has a floor, and the floor matters.
\hypertarget{closest-ancestor-nicholas-of-cusa}{%
\subsubsection{Closest ancestor: Nicholas of Cusa}\label{closest-ancestor-nicholas-of-cusa}}
Of all thinkers surveyed, the fifteenth-century cardinal Nicholas of Cusa provides the most precise structural alignment. His \emph{docta ignorantia} (learned ignorance) is essentially Claim 5: the highest knowledge is knowing what we cannot know, and this is an intellectual achievement, not a failure. His \emph{coincidentia oppositorum} (coincidence of opposites) maps onto Claim 7 with remarkable precision: contradictions irreconcilable in the finite realm dissolve in infinity --- the infinitely large circle's circumference becomes a straight line, the infinite polygon becomes a circle. QM and GR, irreconcilable within any finite observational framework, would be coincident in the infinite ground that generates both.
Most strikingly, Cusa's ``wall of Paradise'' from \emph{De Visione Dei} maps onto Claim 3: an insurmountable boundary beyond which finite intellect cannot pass. Scholars like Emmanuel Falque interpret reaching this wall not as escaping through it but as \emph{inhabiting the boundary} --- profoundly compatible with a framework where understanding the limit is itself the deepest available insight.
\hypertarget{deepest-ontological-parallel-spinoza}{%
\subsubsection{Deepest ontological parallel: Spinoza}\label{deepest-ontological-parallel-spinoza}}
Spinoza's attribute theory is arguably the single closest ontological parallel. One substance (God/Nature) expresses itself through infinite attributes, of which humans know only two: Thought and Extension. Jonathan Bennett's ``barrier doctrine'' captures the key feature: each attribute must be conceived through itself --- no explanatory flow crosses between them. This is precisely Claim 7: two complete, correct descriptions that are structurally irreconcilable, yet both describe the same underlying reality. Spinoza's substance is fully determinate, aligning with Claim 4. His parallelism doctrine --- the order of ideas is the same as the order of things --- means the two descriptions track the same structure through incommensurable vocabularies.
The divergence is epistemological: Spinoza believes reason achieves adequate knowledge of reality through \emph{scientia intuitiva} (intuitive knowledge). The paper's permanent limits directly contradict this ambition.
\hypertarget{the-recurring-fault-line-claim-4}{%
\subsubsection{The recurring fault line: Claim 4}\label{the-recurring-fault-line-claim-4}}
Across every tradition examined, a striking pattern emerges. The claim that underlying reality is ``local and definite'' faces resistance from virtually every direction:
\textbf{Kant} prohibits positive characterization of the noumenal realm as dogmatic metaphysics --- you can know \emph{that} things-in-themselves exist but never \emph{what} they are. \textbf{Hegel} diagnoses a performative contradiction: to posit a hidden sector and characterize it as containing standard physics is already to have crossed the boundary you claim is uncrossable. \textbf{Nietzsche} attacks it as residual Platonism --- having correctly shown that all observation is perspectival, the paper reinstates the very ``true world'' his career was dedicated to destroying. \textbf{Wittgenstein} rejects it as nonsensical: asserting what lies beyond the limits of the sayable transgresses exactly the limits the paper identifies.
\textbf{Nagarjuna} identifies it as \emph{svabhāva}-reification --- attributing inherent existence to what Buddhist emptiness (\emph{śūnyatā}) says lacks it. \textbf{Daoism} warns against naming the unnameable: the Dao that can be spoken is not the eternal Dao, and calling reality ``definite'' is itself an act of conceptual carving. \textbf{Whitehead} insists reality is fundamentally processual and creative, involving genuine indeterminacy --- removing that indeterminacy robs the cosmos of its creative character.
\textbf{Advaita Vedanta} comes closest to full support --- Brahman is indeed a definite underlying reality --- but insists it is \emph{accessible}: the observer IS the underlying reality, and liberation (\emph{mokṣa}) consists in recognizing this identity. Every Hindu and Buddhist soteriology rejects permanent inaccessibility.
Only Spinoza (whose fully determinate substance is naturally definite) and Worrall's epistemic structural realism (which posits real but unknowable natures behind structures) provide genuine philosophical support for Claim 4.
\hypertarget{whats-genuinely-new}{%
\subsubsection{What's genuinely new}\label{whats-genuinely-new}}
The paper's philosophical lineage is not a single line of descent but a mosaic. Its structural epistemology draws from Kant through Wittgenstein to Wolpert. Its complementarity thesis combines Bohr's physics with Cusa's coincidence of opposites and Daoist yin-yang. Its projection metaphysics reaches through Plato's cave and Plotinus's emanation to Advaita Vedanta's maya-Brahman distinction.
What is genuinely new is the \emph{combination}: accepting Bohr's complementarity while insisting on Einstein's realism, grounding Kantian limits in physical structure while claiming knowledge of noumenal character, embracing Cusanian learned ignorance while grounding it in a concrete physical mechanism --- the Gödel → Wolpert → causal partition chain described above. The traditions reveal that this combination creates a productive philosophical tension --- the paper claims to know the character of what it proves unknowable. Whether this tension is a contradiction (as Hegel would insist), a residual Platonism (as Nietzsche would charge), or a legitimate achievement of learned ignorance (as Cusa would celebrate) may be the framework's deepest philosophical question.
\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center}
\hypertarget{frequently-asked-questions}{%
\subsection{Frequently Asked Questions}\label{frequently-asked-questions}}
\textbf{``Isn't this just another interpretation of quantum mechanics?''}
No.~Interpretations (Copenhagen, Many-Worlds, Bohmian mechanics) accept the quantum formalism and disagree about what it \emph{means}. This framework \emph{derives} the quantum formalism from non-quantum premises and proves the derivation is the only possible one. It makes quantitative predictions --- the value of ℏ, dark energy evolution, and gravitational wave echoes --- that interpretations do not, and it accounts for the dark-sector concordance as an automatic consequence.
\textbf{``Doesn't Bell's theorem rule out hidden variable theories?''}
Bell's theorem rules out \emph{local} hidden variable theories satisfying a specific factorizability condition. This framework violates that factorizability --- not through faster-than-light signals, but because P-indivisible joint dynamics don't permit the decomposition Bell's theorem assumes. The framework reproduces Tsirelson's bound (the maximum quantum violation) exactly --- no more, no less.
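The bound itself is a short computation in the standard formalism: for the singlet state with optimal CHSH measurement angles, the correlator combination reaches exactly \(2\sqrt{2}\):

```python
import numpy as np

X = np.array([[0.0, 1.0], [1.0, 0.0]])    # Pauli X
Z = np.array([[1.0, 0.0], [0.0, -1.0]])   # Pauli Z

def spin(theta):
    """Spin observable along angle theta in the X-Z plane."""
    return np.cos(theta) * Z + np.sin(theta) * X

# Singlet state (|01> - |10>)/sqrt(2)
psi = np.array([0.0, 1.0, -1.0, 0.0]) / np.sqrt(2)

def E(ta, tb):
    """Correlator <A(ta) (x) B(tb)> in the singlet (equals -cos(ta - tb))."""
    return psi @ np.kron(spin(ta), spin(tb)) @ psi

a, ap = 0.0, np.pi / 2             # Alice's settings
b, bp = np.pi / 4, 3 * np.pi / 4   # Bob's settings

S = E(a, b) - E(a, bp) + E(ap, b) + E(ap, bp)
print(abs(S), 2 * np.sqrt(2))      # both ~2.828: Tsirelson's bound
```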
\textbf{``If everything is deterministic underneath, where does randomness come from?''}
From ignorance. The total system is deterministic, but the observer can't access the hidden sector. Different hidden states compatible with the same visible state lead to different outcomes. The observer must assign probabilities --- not because the universe is random, but because their information is incomplete.
\textbf{``What about dark matter?''}
Dark matter is not a substance --- it is the local shape of the observer's blind spot, the same entropy that appears as dark energy when uniform. Baryonic matter displaces boundary entropy; the Jacobson mechanism converts the gradient into curvature; the crossover acceleration \(a_0 = cH/6\) and baryonic Tully-Fisher relation \(v^4 = GM_B \cdot cH/6\) follow with no free parameters. The mechanism works across all scales --- from galaxies through clusters (Coma matched to \(< 1\%\)) to the Bullet Cluster and the CMB acoustic peaks --- as detailed in the \emph{Predictions} section above.
\textbf{``What about the Bullet Cluster?''}
The horizon's response time (\(\sim H^{-1} \approx 14\) Gyr) vastly exceeds the collision crossing time (\(\sim 0.15\) Gyr). The boundary entropy is frozen at the pre-collision configuration --- centered on the galaxies, not the recently displaced gas. The observed lensing morphology follows without collisionless particles (see \emph{Cluster scales and the Bullet Cluster} above).
\textbf{``Why does gravity `see' the classical vacuum energy and not the quantum one?''}
Because the spacetime metric exists at the classical level, \emph{before} the quantum description emerges. The quantum zero-point energy is a feature of the observer's compressed description. It's real for quantum experiments (the Casimir effect, the Lamb shift) but doesn't appear in the stress-energy tensor that governs curvature. The \(10^{122}\) discrepancy is the information compression ratio --- the entropy of the observer's blind spot.
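The two order-of-magnitude claims can be checked from textbook formulas, assuming a Planck-scale cutoff for the zero-point density:

```python
import math

hbar = 1.055e-34   # SI
G = 6.674e-11
c = 2.998e8
H0 = 67.4e3 / 3.086e22

# QFT zero-point density with an ASSUMED Planck-scale cutoff
rho_qft = c**7 / (hbar * G**2)                   # ~5e113 J/m^3
# Critical (observed) energy density
rho_crit = 3 * H0**2 * c**2 / (8 * math.pi * G)  # ~8e-10 J/m^3
print(f"rho_qft/rho_crit = {rho_qft / rho_crit:.0e}")   # of order 1e122

# Horizon entropy, S = A/(4 l_p^2) with A = 4 pi (c/H0)^2
l_p = math.sqrt(hbar * G / c**3)
S_horizon = math.pi * (c / H0)**2 / l_p**2       # = 4 pi (c/H0)^2 / (4 l_p^2)
print(f"S_horizon = {S_horizon:.0e}")            # also of order 1e122
```

The compression ratio and the blind-spot entropy come out at the same order, which is the identification the answer above relies on.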
\textbf{``How can the paper claim reality is `definite' if it's permanently inaccessible?''}
This is the paper's most philosophically contested thesis (see \emph{Philosophical Lineage} above). The paper's defense is that the claim follows from the derivation's own logic: the axioms posit deterministic dynamics on a phase space, and the theorem shows that quantum indeterminacy arises from tracing out part of that phase space --- not from any indeterminacy in the underlying evolution. The ``definiteness'' is a consequence of the starting premises, not a speculative addition. Whether those premises are the right ones to start from is, of course, an open question --- but within the framework, Claim 4 is a theorem, not an assumption.
\textbf{``Doesn't holographic physics show that spacetime comes from entanglement?''}
This is probably the strongest objection to the framework's ordering --- classical spacetime first, quantum mechanics second. The Ryu-Takayanagi formula says entanglement entropy equals boundary area divided by \(4G\). Van Raamsdonk argued that reducing entanglement disconnects spacetime. Programs like ER=EPR and ``it from qubit'' read these results as evidence that quantum entanglement is prior to geometry.
The framework offers an alternative reading. If the quantum description is produced by tracing out over a geometric boundary, then \emph{of course} its entanglement entropy is proportional to the boundary's area --- the information content of the trace-out is set by the number of modes crossing the boundary, which scales with area. The Ryu-Takayanagi formula, on this account, isn't a hint that entanglement builds geometry; it's a consequence of the fact that geometry built the entanglement. The Bekenstein-Hawking entropy \(S = A/(4\,l_p^2)\), which the paper derives, is exactly this statement.
The correlation between entanglement and geometry is real either way. The question is which direction the arrow of explanation points. The two orderings make different empirical predictions: if geometry emerges from entanglement, spacetime structure should break down at high energy; if quantum mechanics emerges from geometry, the quantum description should break down near the discreteness scale while the geometric substratum persists. The gravitational wave echo prediction provides one test. But the cumulative case is already substantial: the geometry-first ordering produces the observed vacuum energy, the Bekenstein-Hawking \(1/4\) factor, the value of \(\hbar\), the dark-sector concordance, dark energy evolution consistent with DESI data, the MOND acceleration scale \(a_0 = cH/6\), the baryonic Tully-Fisher relation, the SM gauge group with its coupling strengths, the declining rotation curves at high redshift, \(\bar{\theta} = 0\), the Coma cluster mass, and the Bullet Cluster lensing morphology --- twelve independent consequences matching observation. The quantum-first ordering produces the \(10^{122}\) discrepancy, treats \(\hbar\) and the \(1/4\) factor as unexplained inputs, and has no natural account of the dark sector.
\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center}
\hypertarget{the-decompression-algorithm}{%
\subsection{The Decompression Algorithm}\label{the-decompression-algorithm}}
There is a way to state what the framework means that is sharper than anything in the formal paper.
An observer inside the universe receives incomplete data --- the visible sector only. The characterization theorem proves that there is exactly one self-consistent algorithm for making predictions from this incomplete data. That algorithm is quantum mechanics.
Everything in QM is a feature of the algorithm, not a feature of the underlying reality. The wave function is the algorithm's internal state variable --- the bookkeeping device it uses to track what it knows and what it doesn't. Complex amplitudes are the algorithm's arithmetic --- the specific number system the reconstruction requires. Interference is what happens when the algorithm combines two incomplete pathways and their bookkeeping entries partially cancel. Entanglement is the algorithm's encoding of correlations that were written into the hidden sector during preparation and haven't been read back yet. The Born rule is the algorithm's output format --- the way it converts its internal state into predictions the observer can check.
None of these are properties of the deterministic substratum. In the substratum, there are no wave functions, no complex numbers, no interference, no entanglement. There are configurations evolving under a Hamiltonian flow. That's all. The entire apparatus of quantum mechanics --- every textbook, every equation, every experiment --- is what the decompression algorithm looks like from the inside.
\hypertarget{antimatter-as-algorithmic-artifact}{%
\subsubsection{Antimatter as algorithmic artifact}\label{antimatter-as-algorithmic-artifact}}
This reframing changes the understanding of specific phenomena. Consider antimatter.
The standard account: the Dirac equation, which combines quantum mechanics with special relativity, has solutions with both positive and negative energy. The negative-energy solutions correspond to antiparticles --- particles with opposite charge and quantum numbers. Every particle must have an antiparticle, because CPT invariance (a theorem of any local Lorentz-invariant quantum field theory) requires it.
The framework's account: the trace-out over the hidden sector forces the reconstruction algorithm to use two-signed amplitudes. When the algorithm operates in a relativistic context --- which it must, because the substratum's causal structure is relativistic --- the two-signed amplitude structure becomes the two-signed energy structure of the Dirac equation. Negative-energy solutions aren't additional features of reality. They're what the algorithm requires for self-consistency when the incomplete data comes from a relativistic system.
The parallel to the coin-and-die model is direct. The intermediate propagator:
\[\Lambda(2,1) = \begin{pmatrix} 2 & -1 \\ -1 & 2 \end{pmatrix}\]
has entries of \(-1\) --- ``anti-probabilities'' that don't exist in the substratum (where every transition probability is between 0 and 1). These negative entries are the minimal-model ancestors of antimatter. They arise for the same reason: the algorithm can't reconstruct the observed dynamics without them. In the toy model, the \(-1\) entries encode the fact that the die remembers and reverses the coin flip --- information backflow that no positive-entry matrix can describe. In the relativistic case, the negative-energy solutions encode the fact that the relativistic trace-out requires a doubled Hilbert space to remain self-consistent.
In both cases: the substratum has no doubling. The algorithm demands one.
\hypertarget{nothing-in-quantum-mechanics-explains-a-fundamental-phenomenon}{%
\subsubsection{Nothing in quantum mechanics explains a fundamental phenomenon}\label{nothing-in-quantum-mechanics-explains-a-fundamental-phenomenon}}
This is the deepest implication of the framework, and it is worth stating plainly.
The wave function does not explain what an electron is doing between measurements. It encodes what the algorithm computes given the observer's data. Interference does not explain why particles behave like waves. It reflects the algorithm's method of combining incomplete information. Entanglement does not explain a mysterious connection between distant particles. It reflects correlations stored in the hidden sector that the algorithm must track but cannot directly access. Antimatter does not explain a second kind of fundamental stuff. It reflects the algorithm's need for two-signed amplitudes in a relativistic context.
Every quantum phenomenon is an output of the decompression algorithm. The only fundamental phenomenon is the compression itself: the observer cannot see past the horizon, and the characterization theorem dictates the unique form of the resulting reconstruction.
This doesn't make quantum phenomena less real. Temperature is also an emergent phenomenon --- a single molecule has no temperature. But temperature boils water, drives engines, and burns skin. Its emergence from statistical mechanics doesn't diminish its causal power within the emergent description. The same holds for every quantum phenomenon: they are causally potent, experimentally real, and the only physics accessible to an embedded observer. But they are features of the observer's necessary algorithm, not features of the universe's fundamental structure.
This resolves what is arguably the oldest open question in quantum foundations: \emph{is the wave function real?} Since 1926, physics has been split between epistemic interpretations (the wave function is just a bookkeeping device tracking the observer's knowledge --- it doesn't correspond to anything physical) and ontic interpretations (the wave function is a real physical entity --- a field on configuration space, or a branching structure, or a guiding wave). The framework shows that both sides share a hidden assumption: that ``real'' means ``fundamental.'' The epistemic camp says the wave function isn't fundamental, therefore it isn't real. The ontic camp says it's real, therefore it must be fundamental. Both are wrong. The wave function is real \emph{and} not fundamental --- in exactly the same way as antimatter, temperature, and every other emergent phenomenon. Within the bounded projection --- which is all an embedded observer will ever have access to --- the wave function, the positron, the interference pattern, and the electromagnetic field all have exactly the same ontological status. They are all outputs of the decompression algorithm. They are all experimentally real. And none of them exist in the substratum.
The analogy to data compression is precise. Lossy compression (like JPEG or MP3) discards information and reconstructs an approximation. The reconstruction has artifacts --- ringing near edges in JPEG, pre-echo before transients in MP3. These artifacts are not in the original signal. They are features of the reconstruction algorithm applied to incomplete data.
Quantum mechanics is the reconstruction. The \(10^{122}\) discarded bits (the hidden sector) are the lost information. Wave functions, interference, entanglement, and antimatter are the artifacts. They are real --- as real as JPEG ringing is visible --- but they are properties of the reconstruction, not of the original.
The difference from ordinary compression: with JPEG, you can access the original file. With the universe, you cannot. The reconstruction is all we will ever have. Its artifacts are our physics. And the characterization theorem proves that no other reconstruction is possible --- this is the unique algorithm, and its artifacts are the unique artifacts. Quantum mechanics is not one possible decompression of incomplete cosmological data. It is the only one.
\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center}
\hypertarget{what-this-means}{%
\subsection{What This Means}\label{what-this-means}}
The search for a ``theory of everything'' that unifies quantum mechanics and general relativity has assumed that the two theories describe the same level of reality and must be reconciled there. This paper argues that assumption is wrong. The two theories operate at different levels --- one fundamental, one emergent --- and their apparent contradiction is the information-theoretic cost of being an observer trapped inside the system you're trying to describe.
The apparent conflict between QM and GR, the \(10^{122}\) vacuum energy discrepancy, and the dark sector are three faces of the same fact. The first is the existence of the trace-out. The second is its information compression ratio. The third is its gravitational occlusion fraction. All three are mandatory consequences of observational incompleteness --- and all three match what we observe.
The universe is not broken. We are observing it from within.
\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center}
\hypertarget{corroboration-the-rigidity-test}{%
\subsection{Corroboration: The Rigidity Test}\label{corroboration-the-rigidity-test}}
A natural objection to any framework this sweeping is: maybe it only works because it was built to work. Maybe the axioms were chosen to produce QM, and the cosmological application was chosen because it fits. A flexible framework that can accommodate anything predicts nothing.
A companion paper (``The Fundamental Structure of the Observational Incompleteness Framework'') tests this by asking a question the main paper doesn't address: if you build a concrete system satisfying the axioms, does the framework constrain the dynamics? And if so, does the constrained dynamics produce anything beyond QM --- something the framework wasn't designed to deliver?
It does, on both counts.
\textbf{The dynamics is unique.} Among all second-order reversible nearest-neighbor dynamics on a lattice, the requirements of center independence (necessary for emergent QM), spatial isotropy, and linearity select exactly one: the discrete wave equation. This holds for any alphabet size and any dimension. Center independence is necessary because center-dependent rules allow the visible sector to partially predict itself, suppressing the information backflow that produces QM --- an effect proven analytically on the full lattice via an information-screening mechanism. Linearity is selected by three independent criteria: it gives maximum propagation speed, it uniquely maximizes P-indivisibility among linear rules, and it is the only choice for which horizons are in thermal equilibrium (nonlinear dispersion breaks the Unruh effect).
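To make the selected rule concrete, a schematic form can be written down (the normalization and any modular reduction here are assumptions of this sketch, not details taken from the companion paper):
\[
\phi_{t+1}(x) \;=\; \sum_{y \sim x} \phi_t(y) \;-\; \phi_{t-1}(x),
\]
where the sum runs over the nearest neighbors of \(x\). Center independence is visible directly: the center value \(\phi_t(x)\) does not appear on the right-hand side. Reversibility is equally manifest: the same equation can be solved for \(\phi_{t-1}(x)\) given the two later time slices. Linearity is the statement that the right-hand side is linear in the field values.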
\textbf{The dynamics passes seven independent checks.} The wave equation --- selected solely by the QM requirement plus symmetry --- turns out to produce: (1) non-Markovian reduced dynamics (emergent QM), (2) a causal structure with the correct spacetime dimension, (3) the gap equation for \(\hbar\), (4) entanglement entropy proportional to boundary area, (5) a Lorentz-invariant dispersion relation, (6) the correct Unruh temperature at horizons, and (7) all inputs to Jacobson's thermodynamic derivation of Einstein's field equations. Each of these is an independent check that could have come out wrong --- the entropy could have scaled with volume, the dispersion could have been non-relativistic, the horizon state could have been non-thermal. None of these failures occurs.
\textbf{No free parameters.} The wave equation is selected, not chosen. The lattice spacing is fixed by the gap equation. The entropy coefficient is determined by thermal matching. The dispersion relation is an algebraic identity. There is nothing to tune.
This is what rigidity looks like. A framework that produces QM from one set of arguments and then, without modification, produces the inputs for GR from a completely different set of arguments --- lattice dynamics, dispersion relations, entanglement Hamiltonians --- is making a structural claim that goes well beyond the original derivation. Each passed check is a point where the framework could have been falsified and wasn't. Seven passed checks with no free parameters is not proof, but it is the kind of evidence that distinguishes a framework describing something real from one that was engineered to fit.
\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center}
\hypertarget{what-is-the-lattice}{%
\subsection{What Is the Lattice?}\label{what-is-the-lattice}}
The rigidity test proves that the wave equation on a lattice produces both QM and GR. But what \emph{is} the lattice? Is the universe literally a grid of cells at the Planck scale? What would the grid be made of? What would it sit in?
A companion paper (``The Fundamental Structure of the Observational Incompleteness Framework'') addresses these questions head-on --- and also derives the Standard Model's structure from the lattice dynamics. Its answer to the ontology question: the lattice is not a physical object. It is the \emph{coupling structure} of the dynamics --- the pattern of which degrees of freedom affect which others.
\textbf{The minimal structure.} The paper audits every assumption in the framework and identifies which are necessary for the theorems and which are artifacts of the particular construction. The result: only six structural properties matter --- deterministic bijectivity, finite boundary entropy, bounded coupling degree, statistical isotropy, non-trivial partition coupling, and slow-bath capacity. The regular cubic lattice, the specific alphabet size \(q\), the dimensionality \(d\), and even the wave equation are all either derived from these six properties or irrelevant to the predictions. The minimal object is not a lattice. It is a triple: a finite set \(S\), a bijection \(\varphi\), and a partition \(V\).
\textbf{Space as coupling graph.} Given any bijection on a finite set, you can ask: which components of the state affect which others? The answer defines a graph --- sites are vertices, and two sites are connected if the dynamics couples them. This graph is not drawn on a pre-existing space. It \emph{is} space, at the fundamental level. Distance means ``how many coupling steps apart.'' Area means ``how many edges cross the boundary.'' The area law, the dispersion relation, the Myrheim-Meyer dimension --- every spatial property used in the derivation chain is a property of this coupling graph.
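The coupling-graph construction can be sketched in a few lines. The toy below is an assumed illustration --- a reversible XOR rule on a ring of five binary sites, not the framework's actual bijection --- and it recovers the graph purely by perturbing one site and watching which outputs change:

```python
# Toy second-order reversible dynamics on a ring of n binary sites
# (an assumed illustration, not the framework's bijection). The state
# is a pair (previous layer, current layer); the next value at site i
# depends only on the current values at i's neighbours and on i's own
# previous value, so the rule is nearest-neighbour, reversible, and
# center independent.
n = 5

def step(state):
    prev, cur = state
    nxt = tuple(cur[(i - 1) % n] ^ cur[(i + 1) % n] ^ prev[i] for i in range(n))
    return (cur, nxt)

def coupling_edges(n, step):
    """Edge {i, j}: flipping the current value at j changes the next value at i."""
    zero = (0,) * n
    out_base = step((zero, zero))[1]
    edges = set()
    for j in range(n):
        flipped = tuple(int(i == j) for i in range(n))
        out_flip = step((zero, flipped))[1]
        for i in range(n):
            if out_flip[i] != out_base[i] and i != j:
                edges.add(tuple(sorted((i, j))))
    return sorted(edges)

print(coupling_edges(n, step))   # ring: [(0, 1), (0, 4), (1, 2), (2, 3), (3, 4)]
```

No spatial embedding is used anywhere in this construction: the ring topology is read off from the dynamics alone, which is the sense in which the coupling graph \emph{is} space.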
This dissolves the container problem. The lattice doesn't sit in anything. There is no ambient manifold. The coupling graph is an abstract mathematical structure --- like a number or a group --- and any spatial embedding we draw is a representation for our convenience, not a physical fact. Asking ``what does the lattice sit in?'' is like asking what the number 7 sits in. It's a category error.
\textbf{The alphabet is a gauge freedom.} The companion papers prove every result for any alphabet size \(q \geq 2\). The gap equation contains no \(q\). The Bekenstein-Hawking formula contains no \(q\). The dispersion relation is valid for any \(q\). No experiment could measure \(q\), even in principle. This makes \(q\) a gauge freedom --- a choice of mathematical description, like choosing a coordinate system or a gauge for the electromagnetic potential. The physical content is the coupling structure, not the microscopic state space.
\textbf{Connection to causal set theory.} The bijection's coupling graph, extended in time, generates a causal partial order: event A precedes event B if B is within A's future coupling light cone. This partial order is a causal set in the sense of Bombelli, Lee, Meyer, and Sorkin --- the starting point of causal set theory, one of the established approaches to quantum gravity. Causal set theory has always had a specific gap: it postulates a causal order but lacks a deterministic dynamics that produces QM and GR. The OI framework provides exactly this. Sorkin's slogan is ``Order + Number = Geometry.'' The OI version is ``Bijection + Locality = QM + GR.''
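The induced order can be written down directly. A minimal sketch (assuming unit-speed nearest-neighbour coupling on a 1D lattice; the framework's coupling graph is more general) checks that the future-cone relation is exactly a strict partial order, the raw material of a causal set:

```python
# Assumed illustration: event a = (x, t) precedes b = (y, u) when b
# lies inside or on the future coupling light cone of a, for unit-speed
# nearest-neighbour coupling on a 1D lattice.
def precedes(a, b):
    (x, t), (y, u) = a, b
    return u > t and abs(y - x) <= u - t

events = [(x, t) for t in range(4) for x in range(-3, 4)]

# A causal set needs a strict partial order: irreflexive and transitive.
irreflexive = all(not precedes(e, e) for e in events)
transitive = all(precedes(a, c)
                 for a in events for b in events for c in events
                 if precedes(a, b) and precedes(b, c))
print(irreflexive, transitive)   # True True
```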
\textbf{The hierarchy of physics.} If the fundamental object is \((S, \varphi)\) --- a finite set and a bijection --- then every concept in physics is a different aspect of this pair:
\begin{itemize}
\tightlist
\item
\textbf{\(\varphi\)} is the dynamical law --- the complete rule mapping states to states.
\item
\textbf{Space} is the coupling structure of \(\varphi\) --- which degrees of freedom affect which others. It is determined by \(\varphi\) as the factorization minimizing coupling degree. Space is not a container; it is a relationship.
\item
\textbf{Matter} is the state --- the values assigned to the degrees of freedom. A particle is a localized pattern that propagates through the coupling graph. Space and matter are the graph topology and the graph coloring; both come from \((S, \varphi)\).
\item
\textbf{Energy} is the rate of change of the state under iteration of \(\varphi\). A high-energy excitation changes rapidly from step to step; the vacuum changes least. Energy is not a substance --- it is a measure of how fast a pattern evolves.
\item
\textbf{Time} is the iteration itself --- the stepping from one state to the next. There is no continuous time at the fundamental level.
\item
\textbf{Quantum mechanics} is the observer's compressed view of \(V\). It exists because the observer must marginalize over the hidden sector.
\item
\textbf{Gravity} is the thermodynamic limit of the coupling structure --- the macroscopic behavior of bounded-degree graphs with area-law entropy.
\item
\textbf{Conservation laws} are emergent. Bijectivity of \(\varphi\) preserves state-space volume (information conservation). Energy conservation is what this looks like in the emergent quantum description (Noether's theorem). Momentum conservation reflects the symmetry of the coupling graph.
\end{itemize}
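The ``energy as rate of change'' item can be made concrete with a deliberately simple toy (an assumed illustration: the bijection here is a cyclic shift on a binary ring, not the framework's dynamics). The vacuum changes at no sites per step; a rapidly varying pattern changes at every site:

```python
# Assumed toy bijection phi: a cyclic shift of a binary ring by one
# site. "Energy" is proxied by how many sites change in one step.
def shift(s):
    return s[-1:] + s[:-1]

def rate_of_change(s):
    return sum(a != b for a, b in zip(s, shift(s)))

vacuum = (0,) * 8                     # nothing changes: lowest "energy"
lump   = (0, 0, 0, 1, 1, 0, 0, 0)     # localized pattern: changes at 2 sites
waves  = (0, 1, 0, 1, 0, 1, 0, 1)     # rapidly varying: changes at every site

print(rate_of_change(vacuum), rate_of_change(lump), rate_of_change(waves))   # 0 2 8
```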
None of these are independent substances. They are the same structure viewed at different scales or from different angles. The framework does not unify them by reducing them to a common material. It unifies them by showing they were never separate.
\textbf{What remains.} The structural reading leaves remarkably little genuinely open. The spatial dimensionality \(d = 3\) is derived by four independent self-consistency filters (the dark sector concordance, propagating gravity, stable matter, and renormalizability). Background independence is achieved through state-dependent bijections, with the discrete Einstein equation identified as the Ollivier-Ricci curvature condition. The observer is proved to be generic: any small subgraph of any large bounded-degree bijection satisfies the conditions for emergent QM. The partition \(V\), the dimension, and the laws of physics are all derived. What remains undetermined are the 18+ parameter values of the Standard Model, which depend on the specific bijection \(\varphi\) --- analogous to how Einstein's equations don't determine the mass of Jupiter.
But the central claim stands: the fundamental object is \((S, \varphi)\) --- a finite set and a bijection. The observer, the dimension, and the laws of physics are all emergent.
\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center}
\hypertarget{why-these-particles}{%
\subsection{Why These Particles?}\label{why-these-particles}}
The first two companion papers establish that the framework produces quantum mechanics and general relativity. But quantum mechanics is a \emph{framework}, not a \emph{theory}. It tells the observer to use Hilbert spaces, unitary evolution, and the Born rule --- but it's compatible with infinitely many different quantum field theories. You could have quantum mechanics with an SU(7) gauge group, or with 15 generations of fermions, or with no gauge fields at all. Deriving QM answers the question ``what kind of probability does the observer use?'' It doesn't answer ``what particles exist?'' or ``what forces act between them?''
The Standard Model of particle physics --- quarks, leptons, the strong and weak nuclear forces, electromagnetism, the Higgs boson --- has been confirmed to extraordinary precision. But its \emph{structure} has always been taken as empirical. We observe three generations, \(SU(3) \times SU(2) \times U(1)\), specific hypercharge assignments, and we accept them as given. Nobody has explained \emph{why} these particles and not others.
The Fundamental Structure paper asks whether the specific lattice dynamics selected by the QM and GR requirements --- the wave equation on a \(d = 3\) hypercubic lattice with checkerboard partition --- determines which quantum field theory the observer sees. The answer is yes --- almost completely.
\textbf{Fermions from the wave equation.} The wave equation selected by the QM requirement (the discrete lattice Klein-Gordon equation) has a well-known mathematical property: it factors into first-order operators called staggered Dirac operators. This factorization, discovered by Susskind in 1977, means the \emph{same dynamics} that produces bosonic waves also describes fermionic matter. Fermions are not added to the framework. They \emph{are} the framework, seen from a different mathematical angle.
The factorization produces a specific structure: on a three-dimensional lattice, the staggered construction yields exactly 4 ``tastes'' --- independent species of fermions. These decompose as 1 + 3 under the cubic symmetry of the lattice: one singlet and one triplet. The triplet count equals the spatial dimension \(d = 3\). Since \(d = 3\) is derived by the Fundamental Structure paper, the three-generation structure of the Standard Model --- the fact that there are three copies of each type of quark and lepton --- is traced back to the dimensionality of space.
Numerically, the singlet and triplet are not just distinguished by symmetry --- they have quantitatively different coupling strengths. The singlet has coupling \(|\mu| = 1\) (the maximum), while the three triplet members each have \(|\mu| = 1/3\). The triplet members are exactly degenerate by cubic symmetry, providing a lattice-level origin for ``generation symmetry.''
\textbf{The Higgs mechanism from one condition.} The same center independence condition (\(\alpha = 0\)) that produces quantum mechanics also enforces exact chiral symmetry in the staggered fermions. This means fermion mass terms are forbidden in the fundamental Lagrangian. The \emph{only} way to give fermions mass while preserving unitarity and renormalizability is the Higgs mechanism. One algebraic condition --- center independence --- simultaneously produces QM, chiral fermions, and the necessity of a Higgs boson.
\textbf{Gauge structure from a matrix.} The wave equation naturally generalizes to multiple components. Each lattice site carries a \(K\)-component vector instead of a single number. The selection criteria (center independence, isotropy, reversibility, linearity) uniquely produce the \emph{matrix wave equation}, where a coupling matrix \(M\) is the sole new parameter.
This matrix \(M\) determines everything about the gauge structure. Its eigenvalue multiplicities give the gauge group --- the group of transformations that leave \(M\) invariant (technically, its commutant in U(K)). Its eigenvalues give the mass spectrum. The gauge group and the mass spectrum are dual descriptions of the same matrix. This is a theorem, not an approximation.
\textbf{Why \(K = 6\).} In three dimensions, each lattice site has 6 nearest neighbors (\(\pm x\), \(\pm y\), \(\pm z\)). If the internal components correspond to these link directions --- the natural choice from the factorization principle, which says to decompose the per-site state space into its independent dynamical channels --- then \(K = 2d = 6\).
These 6 link directions form a representation of the cubic rotation group \(O\). Standard character theory decomposes this representation into irreducibles:
\[6 = T_1(3) \oplus E(2) \oplus A_1(1)\]
The dimensions are 3, 2, and 1. By Schur's lemma, the coupling matrix \(M\) acts as an independent scalar on each irreducible subspace: \(M = \text{diag}(\mu_c I_3, \mu_w I_2, \mu_y)\). The gauge group --- the commutant of \(M\) --- is therefore:
\[U(3) \times U(2) \times U(1) \supset SU(3) \times SU(2) \times U(1)\]
This is the Standard Model gauge group. It comes from the representation theory of the cubic lattice in three spatial dimensions. The three factors correspond to the three irreducible representations of \(O\): the vector (\(T_1\), dimension 3) gives color SU(3), the quadrupole (\(E\), dimension 2) gives weak SU(2), and the scalar (\(A_1\), dimension 1) gives hypercharge U(1).
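The commutant claim is easy to check numerically. The sketch below uses arbitrary distinct placeholder values for \(\mu_c, \mu_w, \mu_y\) (assumptions of the sketch, not values from the paper) and verifies that the space of \(6 \times 6\) complex matrices commuting with \(M\) has dimension \(3^2 + 2^2 + 1^2 = 14\), matching \(\dim U(3) + \dim U(2) + \dim U(1) = 9 + 4 + 1\):

```python
import numpy as np

# M = diag(mu_c I_3, mu_w I_2, mu_y) with arbitrary distinct placeholder
# couplings (an assumption of this sketch, not values from the paper).
mu_c, mu_w, mu_y = 1.0, 2.0, 3.0
M = np.diag([mu_c] * 3 + [mu_w] * 2 + [mu_y])

# vec(XM - MX) = (M^T kron I - I kron M) vec(X), so the commutant of M
# is the null space of this 36 x 36 operator.
I6 = np.eye(6)
L = np.kron(M.T, I6) - np.kron(I6, M)
nullity = 36 - np.linalg.matrix_rank(L)
print(nullity)   # 14 = 3^2 + 2^2 + 1^2
```

Collapsing any two couplings enlarges the commutant: setting \(\mu_w = \mu_c\) would merge the \(3\)- and \(2\)-dimensional blocks into a single \(U(5)\) factor of dimension \(25\), so the gauge group really is read off from the degeneracy pattern of \(M\).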
\textbf{Chiral coupling.} The Standard Model has a peculiar feature: the weak force treats left-handed and right-handed particles differently. Left-handed particles form SU(2) doublets; right-handed particles are SU(2) singlets. No other known force has this asymmetry. Where does it come from?