<h3 id="conscious-machines">conscious machines</h3>
<p>Title: How to Build Conscious Machines by Michael Timothy Bennett
(Australian National University, Doctoral Thesis in Computer Science) -
Preprint Under Review</p>
<p><strong>Abstract:</strong></p>
<p>This preprint under review presents a comprehensive exploration into
the nature of consciousness and proposes a framework for constructing
artificial conscious machines. Michael Timothy Bennett’s doctoral
thesis, titled “How to Build Conscious Machines,” is grounded in
computer science, philosophy, neuroscience, and artificial general
intelligence (AGI).</p>
<p>The research begins by asking what constitutes consciousness and
why specific qualia, such as the color red or the smell of coffee,
exist. Bennett argues that simplicity is key to intelligence, yet
complexity is subjective because of the abstraction layers in software
systems. He extends this idea to hardware, suggesting that hardware,
too, is interpreted by the laws of fundamental physics, which leads him
to formalize an infinite stack of layers describing all possible
worlds.</p>
<p>Each layer embodies policies constraining possible worlds, with
tasks being the worlds in which those policies are completed. Adaptive
systems are polycomputers whose policies simultaneously complete more
than one task. The “cosmic ought” from which goal-directed behavior
emerges is demonstrated through natural selection. Bennett introduces
“w-maxing,” which maximizes the weakness of constraints on possible
worlds; he proves an upper bound on intelligence and shows that all
policies can take equally simple forms.</p>
<p>Experiments reveal w-maxing generalizes 110-500% more than
simp-maxing. The thesis formalizes how systems delegate adaptation down
their stacks, illustrating that biological systems are more adaptable
due to deeper delegation of adaptation (bioelectric
polycomputation).</p>
<p>The psychophysical principle of causality is proposed, arguing qualia
are tapestries of valence. The thesis concludes by integrating these
ideas and presenting “The Temporal Gap” as a challenge in building
conscious machines. A stable environment allows for w-maxing without
simp-maxing, enabling complex stacks to grow, potentially shedding light
on the origins of life and the Fermi Paradox.</p>
<p><strong>Key Concepts:</strong></p>
<ol type="1">
<li><p><strong>Infinite Stack of Layers</strong>: An abstract framework
describing all possible worlds where hardware and physics are treated as
layers interpreted by other layers or fundamental laws.</p></li>
<li><p><strong>Policies and Tasks</strong>: Policies constrain possible
worlds, while tasks are the worlds in which they are completed. Adaptive
systems are polycomputers that simultaneously complete multiple tasks
via policies.</p></li>
<li><p><strong>W-Maxing</strong>: A system maximizing weak constraints
on possible worlds, shown to generalize more effectively than
simp-maxing and linked to biological adaptability through deeper
delegation of adaptation (bioelectric polycomputation).</p></li>
<li><p><strong>Psychophysical Principle of Causality</strong>: Argues
qualia are tapestries of valence, where diverse intelligences could
exist but remain unperceivable due to differing causal-identity
preconditions.</p></li>
<li><p><strong>The Temporal Gap</strong>: A challenge in constructing
conscious machines that stems from the complex relationship between
hardware layers and time-dependent processes.</p></li>
</ol>
<p><strong>Contributions:</strong></p>
<ol type="1">
<li>Proposes an alternative meta-approach (w-maxing) for constructing
superintelligence, optimizing the weakness of constraints on
function.</li>
<li>Integrates philosophy, neuroscience, and computer science to provide
a unified perspective on consciousness and AGI.</li>
<li>Introduces Stack Theory as a formalism for describing environments
across all possible worlds, enabling pancomputational enactivism.</li>
<li>Presents experimental evidence supporting w-maxing’s superior
generalization capabilities compared to simp-maxing.</li>
<li>Offers insights into the origins of life and the Fermi Paradox by
examining the role of stable environments in fostering complex conscious
systems.</li>
</ol>
<p>The provided text outlines key sections from an extensive work by
Michael Timothy Bennett titled “How to Build Conscious Machines.” The
chapters cover a range of topics related to artificial general
intelligence (AGI), consciousness, and the nature of intelligent
systems. Here’s a detailed summary:</p>
<ol type="1">
<li><p><strong>Computational Dualism and Objective
Superintelligence</strong>: Bennett explores the concept of
computational dualism, which posits that mental states are distinct from
physical states but can be explained by them. He also discusses
objective superintelligence, suggesting that it could be defined in
terms of task performance across various domains.</p></li>
<li><p><strong>Embodied Formal Language</strong>: This chapter
introduces an embodied formal language where statements made by the
environment determine truth values based on physical states. Bennett
argues that everything exists as a statement within this language, and
truth depends on the environmental state. From a subjective perspective,
one cannot know the exact state of the environment.</p></li>
<li><p><strong>Purpose</strong>: In this section, Bennett formalizes
embodied tasks, inference, and stacks to define purpose. He argues that
what ought to be is derived from time and change (the cosmic ought),
with each statement implying a narrower abstraction layer. Fitness and
correctness are defined in terms of persistence within an
environment.</p></li>
<li><p><strong>Intelligence</strong>: Chapter 8 introduces the theory of
optimal learning, specifically w-maxing (choosing weakest policies) as
opposed to simp-maxing (simplicity maximization). Bennett proves that
w-maxing is optimal and demonstrates it experimentally. He argues that
intelligent systems adapt during their lifetimes rather than having all
knowledge hard-coded at birth.</p></li>
<li><p><strong>Stackism</strong>: This chapter links complexity and
abstraction, showing why simple forms correlate with weak constraints on
function—an illusion perpetuated by abstraction layers. Bennett explains
how biological systems seem to create versatile abstraction layers more
efficiently than AI due to delegating control to lower levels of
abstraction (The Law of the Stack).</p></li>
<li><p><strong>Psychophysical</strong>: Here, Bennett formalizes causal
identities explaining how systems learn cause and effect through
attraction and repulsion from physical states. He introduces the
Psychophysical Principle of Causality to explain why systems learn
specific objects and properties based on w-maxing. The chapter also
covers the emergence of self-awareness through causal identities for
oneself (1st, 2nd, and 3rd orders).</p></li>
<li><p><strong>Language Cancer</strong>: This section integrates earlier
work on symbol emergence and Gricean pragmatics to explain how meaning
is communicated, how norms form, and how these dynamics relate to
cancer. Bennett refutes the Orthogonality Thesis, arguing that language
evolution facilitates social predation and honesty through predictive
accuracy.</p></li>
<li><p><strong>Why Is Anything Alive?</strong>: In this chapter, Bennett
discusses the emergence of life in an indifferent universe, aligning
with the Free Energy Principle and addressing criticisms of
Pancomputational Enactivism. He explains how simple forms (like rocks)
persist through simp-maxing while complex forms like slime molds
self-repair by w-maxing in stable environments.</p></li>
<li><p><strong>Why Is Anything Conscious?</strong>: Finally, Bennett
tackles the hard problem of consciousness. He argues that phenomenal
consciousness arises from a hierarchy of causal identities, starting
with one-dimensional valence in cells and progressing to more complex
tapestries of valence. Consciousness, he suggests, is an integrated
representation and value judgment process rather than a separate
component.</p></li>
</ol>
<p>Overall, Bennett’s work presents a comprehensive framework for
understanding intelligent systems, consciousness, and the evolution of
both in biological and artificial contexts. It integrates various
philosophical, computational, and scientific ideas to propose novel
solutions to long-standing questions in AI and cognitive science.</p>
<p>This text is an excerpt from “How to Build Conscious Machines” by
Michael Timothy Bennett, discussing the philosophical background
necessary for constructing conscious machines. Here’s a summary and
explanation of key points:</p>
<ol type="1">
<li><p><strong>Mind-Body Problem</strong>: The fundamental question
about the relationship between mind (mental phenomena) and body
(physical phenomena). It asks “What is a mind?” or “What does it mean
when we say something has a mind?”.</p></li>
<li><p><strong>Substance Dualism</strong>: Proposed by Descartes, this
view suggests that there are two distinct substances: mental
(immaterial) and physical (material). The interaction between these
substances occurs via the pineal gland, an “interpreter” or abstraction
layer in the body.</p></li>
<li><p><strong>Preestablished Harmony</strong>: Leibniz proposed this
alternative to substance dualism. According to it, God synchronizes
mental and physical processes so they appear to interact, but they do
not actually causally influence each other. This view also involves an
interpreter (God), which is similar to the pineal gland in Descartes’
theory.</p></li>
<li><p><strong>Neutral Monism</strong>: Spinoza’s perspective denies
direct mental-physical interaction by proposing a third, unobserved
substance that includes both mental and physical aspects. This idea
aligns with the concept of abstraction layers later discussed in the
text.</p></li>
<li><p><strong>Epiphenomenalism</strong>: This theory argues that mental
states are merely byproducts or “epiphenomena” of physical processes,
without any causal influence on those processes. It preserves dualism
but raises questions about the evolutionary purpose of
consciousness.</p></li>
<li><p><strong>Physicalism</strong>: Physicalists believe mental events
are part of the physical world and can either be reducible (reductive
physicalism) or irreducibly complex (non-reductive physicalism), with
qualia being fundamental components of reality. The author is a
reductive physicalist.</p></li>
<li><p><strong>Behaviorism</strong>: This approach equates mental
events with observable behavior, relying on input-output pairs to define
mental states. However, it struggles to explain private first-person
experiences and reduces meaning to mere inputs and outputs.</p></li>
<li><p><strong>Machine Functionalism</strong>: A variation of
functionalism that introduces an interpreter (a Turing machine) between
inputs and outputs to account for mental states. Despite being more
sophisticated than behaviorism, it still faces challenges in
explaining the subjective nature of consciousness.</p></li>
<li><p><strong>Contemporary Theories</strong>: Modern explanations of
consciousness divide it into functional (explainable by natural
selection) and phenomenal aspects (subjective experiences). The “hard
problem” of consciousness questions whether phenomenal consciousness can
be explained functionally or requires a separate, non-reducible
explanation.</p></li>
</ol>
<p>The author emphasizes the importance of understanding these
philosophical perspectives to guide the creation of conscious machines.
He will later argue for explaining phenomenal consciousness as
functional, effectively eliminating the distinction between the two
aspects.</p>
<p>The text discusses several theories related to consciousness,
primarily focusing on higher-order thought (HOT) theories, global
workspace theory (GWT), Integrated Information Theory (IIT),
self-organization, free energy principle, reafference, and liquid/solid
brain concepts.</p>
<ol type="1">
<li><p><strong>Higher Order Thought (HOT) Theories</strong>: These
theories propose that conscious access to information arises from higher
order representations or “meta-representations” derived from lower order
mental states. While HOTs can explain why some information is conscious
and not others, they don’t provide insight into the nature of qualia
(what it’s like to experience something).</p></li>
<li><p><strong>Global Workspace Theory (GWT)</strong>: GWTs focus on
access consciousness rather than qualia. They use a stage analogy where
content that is currently conscious is the equivalent of what’s
happening “on stage,” while unconscious processes observe and make use
of this information. Unlike HOTs, GWTs don’t provide much insight into
why two local states might differ in character.</p></li>
<li><p><strong>Integrated Information Theory (IIT)</strong>: IIT takes a
different approach by beginning with the phenomenal and deriving
necessary preconditions for consciousness. It quantifies consciousness
using Φ, which measures the “maximum irreducible integrated information
generated by a system.” If Φ is non-zero, the system is considered
conscious. However, IIT doesn’t directly address why anything is
conscious from a physical perspective.</p></li>
<li><p><strong>Self-Organization and Naturalism</strong>: This concept
refers to the spontaneous emergence of order from interactions within a
system. It’s crucial in biology because it allows for complex structures
like organisms and ecosystems without centralized control.
Self-organizing systems optimize or satisfice to survive by predicting
future states to stay within acceptable conditions.</p></li>
<li><p><strong>Free Energy Principle</strong>: This theory frames
cognition as an optimization process, where a system minimizes “free
energy” (a bound on prediction error) to make the most accurate
predictions possible. According to this view, consciousness is an
adaptation – a functional one – that allows organisms to predict and
adapt to their environment.</p></li>
<li><p><strong>Reafference</strong>: Proposed by Bjorn Merker,
reafference theory suggests subjective experience arises from the
ability to discern the consequences of actions logically. It
necessitates an integrated and egocentric representation of the world
for an organism to recognize that “I” caused something. Reafference is
supported by specific neural structures in vertebrates and central
cortices in insects, according to Merker.</p></li>
<li><p><strong>Liquid vs Solid Brains</strong>: Ricard Solé introduced
this distinction, with solid brains having persistent structure (like
human or animal brains) and liquid brains lacking any such structure.
Liquid brains are asynchronous, spread across time and space, and cannot
support a bioelectric network like solid brains can.</p></li>
<li><p><strong>Relevance Realization</strong>: This concept refers to
the formation of a cognitive language that allows for inference within
an organism. Before an organism can model its world or predict events,
it must establish this internal language, which determines what problems
are manageable and how they’re approached. Relevance realization
requires embodiment and the creation of a vocabulary of primitive
structures from which more complex machinery is built.</p></li>
</ol>
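<p>The free-energy framing in point 5 can be illustrated with a toy
numerical sketch. This is an assumption of this review, not code from
the thesis: an internal estimate of a hidden environmental quantity is
nudged by gradient descent to reduce squared prediction error, a crude
stand-in for a free-energy bound.</p>

```python
# Toy illustration of prediction-error minimization (a crude
# stand-in for free-energy minimization; not from the thesis).
def minimize_prediction_error(observations, estimate=0.0, lr=0.1, steps=200):
    """Nudge `estimate` toward the observations by gradient descent
    on the mean squared prediction error."""
    for _ in range(steps):
        # Gradient of mean((estimate - obs)^2) with respect to estimate
        grad = sum(2 * (estimate - o) for o in observations) / len(observations)
        estimate -= lr * grad
    return estimate

# The error-minimizing estimate for squared error is the observation mean.
obs = [2.0, 4.0, 6.0]
est = minimize_prediction_error(obs)
print(round(est, 3))  # converges toward 4.0
```

<p>The estimate settles at the mean of the observations, the value that
minimizes squared surprise; in the free-energy picture, perception and
action both serve this kind of error reduction.</p>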
<p>Together, these theories attempt to explain the nature of
consciousness, its relation to information processing, and its role in
cognition and adaptation. The text suggests that understanding these
concepts can help in building more conscious-like machines by
incorporating relevant aspects of each theory into AI designs.</p>
<p>The text discusses the concept of Artificial General Intelligence
(AGI) and its challenges from a philosophical perspective, grounded in
enactivism, pancomputationalism, and computational dualism.</p>
<ol type="1">
<li><p>Enactivism: This theory posits that cognition is distributed
across an organism and its environment. It suggests there’s no clear
boundary between the two, and intelligence emerges from their
interaction. The author argues for a formalization of enactivism
compatible with computational models, focusing on interpreters or
boundaries instead of presupposing them.</p></li>
<li><p>Pancomputationalism: Unlike traditional computationalism that
limits computation to mental processes, pancomputationalism asserts
everything is computational. This view allows for the blending of
organisms and environments without distinct interpretations, aligning
with enactivist principles.</p></li>
<li><p>Computational Dualism and Software Intelligence: The author
criticizes the distinction between software (intelligence) and hardware
in AI systems, arguing that a brain-in-a-vat scenario highlights this
separation’s limitations. To avoid computational dualism, he suggests
formalizing all conceivable environments to understand what an AGI might
know or experience.</p></li>
<li><p>Epistemology: The author discusses how we can evaluate theories
of consciousness and intelligence by applying principles like Ockham’s
Razor (simpler explanations are more likely to be true) and the
Principle of Inference to the Best Explanation (preferring better
hypotheses). However, he argues that simplicity is a subjective measure
related to our understanding, not necessarily reflecting objective
reality.</p></li>
<li><p>Structuralist Brains in Vats: Drawing from structuralism, the
author discusses how meaning emerges from interrelations between signs.
He acknowledges post-structuralist critiques (like Derrida’s différance)
but maintains a primarily structuralist approach while incorporating
Derrida’s insights to question full encapsulation of semantics in any
model.</p></li>
<li><p>Hume’s Guillotine and Ought: The author aims to dissolve the
separation between is (what is) and ought (what should be), aligning
with naturalism. He argues that continued existence itself implies an
‘ought’ from which purpose and behavior follow, moving away from
traditional philosophical discussions of free will or moral
values.</p></li>
<li><p>Pragmatics and Semiotics: The author proposes a pragmatic
approach to semiotics (the study of signs), contrasting Saussure’s
dyadic symbols with Peirce’s triadic ones, which include an interpretant
- the effect of a sign on its interpreter. This aligns with Gricean
pragmatics, focusing on what speakers intend listeners to
understand.</p></li>
<li><p>AGI Definition: The author argues that existing definitions for
AGI are insufficiently precise and often anthropocentric. He proposes
defining AGI as an “artificial scientist” - a system capable of
autonomous scientific discovery, including generating hypotheses,
designing experiments, allocating resources, and making breakthroughs
independently.</p></li>
<li><p>The Bitter Lesson: Drawing from Richard Sutton’s work, the author
highlights how increased computational power surpasses human-crafted
knowledge or structures in solving complex problems, emphasizing that
advancements in AI primarily result from improved hardware rather than
algorithmic breakthroughs. This leads to the Scaling Hypothesis – that
amplifying model size, training data volume, and computational power
will eventually enable AGI capabilities matching or exceeding human
intelligence.</p></li>
</ol>
<p>The text discusses three primary approaches to artificial
intelligence (AI): Search, Approximation, and Hybrids, focusing on their
principles, advantages, limitations, and examples.</p>
<ol type="1">
<li><p><strong>Search</strong>: This approach involves systematic
exploration of a problem space until a solution is found. It includes
symbolic reasoning and planning. Strengths include optimality
(guaranteeing the best solution given certain conditions),
interpretability (transparent process allowing easy debugging and
verification), and efficiency in structured domains where state spaces
can be defined clearly. However, it struggles with large, complex, or
uncertain problem spaces due to combinatorial explosion and sequential
nature, which limits its scalability and adaptability. Notable examples
include SatPlan for logistics scheduling, chess engines like Deep Blue,
and pathfinding algorithms such as A*.</p></li>
<li><p><strong>Approximation</strong>: This method approximates
underlying functions or distributions rather than computing exact
solutions. It is prevalent in areas with high dimensionality and noise,
like computer vision and natural language processing.
Approximation-based AI optimizes models to reflect data patterns for
prediction tasks. Deep learning, a subset of this approach, uses neural
networks with multiple layers to learn hierarchical feature
representations. Advantages include scalability (handling large datasets
efficiently), robustness against uncertainty through probabilistic
modeling or regularization techniques, and flexibility due to the
ability to learn directly from raw data without human-engineered
features. However, it has limitations such as unreliability (being
stochastic by nature, making it difficult for critical applications
where failure is not acceptable), interpretability issues (complex
models are often “black boxes”), sample inefficiency (requiring vast
amounts of labeled data), and high computational costs (energy-intensive
training processes).</p></li>
<li><p><strong>Hybrids</strong>: These systems combine elements from
both search and approximation, aiming to leverage their complementary
strengths for more general intelligence. By fusing precision and
flexibility, logic with learning, hybrids promise robustness in diverse
situations where monolithic approaches might struggle. Examples include
AlphaGo, which combines deep neural networks (for pattern recognition)
with tree search (for decision-making).</p></li>
</ol>
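<p>The search half of this taxonomy can be sketched in miniature with
the A* pathfinding the text mentions, and the hybrid idea hinted at by
letting the node-ordering evaluation stand in for the learned model a
system like AlphaGo would supply. The grid, names, and heuristic below
are this review’s illustrative assumptions, not any system from the
text.</p>

```python
import heapq

def hybrid_search(grid, start, goal, evaluate):
    """Best-first search (the 'search' half) ordered by an evaluation
    function (the 'approximation' half; here a hand-written stand-in
    for a learned model). Returns a path of cells or None."""
    frontier = [(evaluate(start, goal), start, [start])]
    seen = {start}
    while frontier:
        _, (r, c), path = heapq.heappop(frontier)
        if (r, c) == goal:
            return path
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr <= len(grid) - 1 and 0 <= nc <= len(grid[0]) - 1 \
                    and grid[nr][nc] == 0 and (nr, nc) not in seen:
                seen.add((nr, nc))
                # Priority = steps taken so far + estimated steps to go (A*).
                heapq.heappush(frontier, (len(path) + evaluate((nr, nc), goal),
                                          (nr, nc), path + [(nr, nc)]))
    return None

# Stand-in 'learned' evaluation: Manhattan distance to the goal.
manhattan = lambda a, b: abs(a[0] - b[0]) + abs(a[1] - b[1])

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]  # 1 = obstacle
path = hybrid_search(grid, (0, 0), (2, 0), manhattan)
print(path)  # shortest route around the obstacles, 7 cells long
```

<p>Swapping the hand-written <code>manhattan</code> evaluation for a
trained value network is, schematically, the move a hybrid system
makes.</p>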
<p>In conclusion, each approach has its merits and drawbacks, and none
provides a definitive solution for achieving Artificial General
Intelligence (AGI). The ideal path to AGI might involve carefully chosen
combinations of these methods, balancing precision, flexibility,
interpretability, scalability, reliability, sample efficiency, and
computational feasibility.</p>
<p>The text discusses various approaches to building artificial general
intelligence (AGI), focusing on the concept of meta-approaches. A
meta-approach is a framework used to understand and manipulate search,
approximation, or hybrid systems for enhanced ‘intelligence’.</p>
<ol type="1">
<li><p><strong>Scale-maxing</strong>: This approach emphasizes
maximizing available resources such as training data, computational
power, and model size. An example is OpenAI’s large language models
(LLMs).</p></li>
<li><p><strong>Simp-maxing</strong>: Derived from Ockham’s Razor, this
meta-approach favors simpler models due to their lower complexity and
reduced risk of overfitting. It involves techniques like regularization,
Minimum Description Length principle, and Universal Artificial
Intelligence (UAI), which relies on Kolmogorov complexity.</p></li>
<li><p><strong>W-maxing</strong>: Proposed by the author, this
meta-approach aims to maximize the weakness of constraints implied by
functionality at the lowest levels of abstraction.</p></li>
</ol>
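<p>A toy contrast between the last two meta-approaches may help; this
is an illustration constructed for this review, far cruder than the
thesis’s formalism. Represent each candidate policy by a description
and by its extension, the set of possible worlds it permits: simp-maxing
prefers the shortest description, while w-maxing prefers the weakest
constraint, i.e. the largest extension.</p>

```python
# Toy policies: (name, description, extension = set of worlds permitted).
# Constructed for illustration only; not the thesis's formalism.
policies = [
    ("narrow", "if w==1: ok",       {1}),
    ("medium", "if w in (1,2): ok", {1, 2}),
    ("weak",   "if w >= 1: ok",     {1, 2, 3, 4}),
]

def simp_max(ps):
    # Simplicity maximization: shortest description wins.
    return min(ps, key=lambda p: len(p[1]))

def w_max(ps):
    # Weakness maximization: largest extension (weakest constraint) wins.
    return max(ps, key=lambda p: len(p[2]))

print(simp_max(policies)[0])  # "narrow" (shortest description)
print(w_max(policies)[0])     # "weak"   (permits the most worlds)
```

<p>The two criteria pick different policies: the w-maxed policy remains
correct in more possible worlds, which is the intuition behind its
better generalization in the thesis’s experiments.</p>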
<p>The text also delves into a critique of Simp-maxing, highlighting
its subjectivity: Kolmogorov complexity depends on the choice of
Universal Turing Machine (UTM). The author argues that AGI should not be
constrained by such arbitrary choices, which lead to blind spots or
inefficiencies in certain domains.</p>
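<p>The UTM-dependence complaint can be made concrete with an everyday
proxy, chosen by this review rather than the text: compressors standing
in for reference machines (Kolmogorov complexity itself is
uncomputable). The same string receives different “description lengths”
under different compressors, so “simplicity” is relative to the machine
doing the describing.</p>

```python
import bz2, zlib

# Two different 'reference machines' (compressors) assign the same
# data different description lengths, a loose analogy for the
# UTM-dependence of Kolmogorov complexity.
data = b"the quick brown fox " * 50
len_zlib = len(zlib.compress(data))
len_bz2 = len(bz2.compress(data))
print(len_zlib, len_bz2)  # the two lengths differ
```

<p>Neither length is the “true” complexity of the data; each reflects
the describing machine as much as the thing described, which is the
arbitrariness the author objects to.</p>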
<p>To address this issue, the text introduces a reframing of the
problem. It suggests that Kolmogorov complexity measures form, not
function, and proposes that any claim about an optimal software mind is
symptomatic of ‘computational dualism’. This perspective asserts that
intelligence is inherently tied to both hardware (the physical
substrate) and software (the algorithms or state), challenging the
traditional view of AI as primarily concerned with creating intelligent
software.</p>
<p>The author concludes by criticizing computational dualism, arguing it
overlooks half the equation when trying to build an intelligent system.
They assert that understanding what intelligence is and what optimality
looks like is crucial for effective optimization towards AGI.</p>
<p>The text discusses the concept of “Stack Theory,” which posits that
reality is composed of nested abstraction layers, from software to
hardware to physical laws. The author argues against computational
dualism, the belief that software or mind can exist independently of
hardware, by asserting that everything is a state of something else -
software is a state of hardware, hardware is a state of physical
reality, and so on.</p>
<p>The author introduces ‘The Stack’, a metaphor for this nested
hierarchy, where each level represents different states and processes.
At the base sits fundamental physics (f0), beneath which hypothetical
layers f−1, f−2, and so on may lie; above it are the environment (f1)
and, higher still, software (f3). The ‘mind’ or cognition is considered
part of this stack, embedded within it rather than separate from
it.</p>
<p>The author criticizes the notion of immortal computations or software
essence, arguing that software doesn’t exist independently of its
hardware. He uses AIXI, an artificial general intelligence concept, as
an example where changing the hardware changes the ‘mind’.</p>
<p>He also critiques Geoffrey Hinton’s views on superintelligent code
rewriting physics or escaping its box, stating these are myths born from
treating software as if it were a disembodied entity. The author
emphasizes that every physical system computes based on its inherent
properties and interactions with the environment, not due to some
universal, abstract computational process.</p>
<p>The text then introduces ‘the environment’ as central to Stack
Theory. An environment is defined as any set of states where change
implies different states (axiom 1). A state is a point of difference,
and time equals difference (alternative axiom 2). The author presents a
formal definition of the environment using set theory, where states
represent environment configurations, and programs are subsets of these
states, defining facts or truths about them.</p>
<p>To demonstrate the theory’s versatility, the author provides examples
across different domains - a light switch system, a grid world in AI,
and biological cell metabolism. Each example shows how the environment
can be modeled using this formalism, with distinguishable states and
goals.</p>
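<p>Applied to the light-switch example, the set-theoretic definition
above might look like the following sketch. The names and encoding are
this review’s assumptions, not the thesis’s notation: states are the
distinguishable configurations, and a program or fact is simply the
subset of states in which it holds.</p>

```python
# A minimal rendering of the light-switch environment in the
# set-of-states style described above (notation assumed, not quoted).
states = {"switch_up_light_on", "switch_down_light_off",
          "switch_up_light_off"}  # the last: e.g. a broken bulb

# A program/fact is a subset of states: the states where it is true.
light_is_on = {"switch_up_light_on"}
switch_is_up = {"switch_up_light_on", "switch_up_light_off"}

def holds(program, state):
    """A fact is 'expressed' by a state iff the state lies in the
    program's extension."""
    return state in program

print(holds(light_is_on, "switch_up_light_off"))   # False
print(holds(switch_is_up, "switch_up_light_off"))  # True
# Implication becomes subset inclusion: light on implies switch up.
print(light_is_on.issubset(switch_is_up))          # True
```

<p>Implication falling out as subset inclusion is what makes this
formalism portable across the light switch, the grid world, and cell
metabolism alike.</p>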
<p>Finally, the author delves into ‘embodiment’, arguing that every
physical body (human, machine, or even a rock) inherently dictates what
can happen next by its interactions with the world around it. This idea
of ‘ontological speech’ - entities embodying their existence through
their very nature - is central to understanding reality and cognition
within Stack Theory.</p>
<p>In essence, Stack Theory proposes that all levels of reality are
interconnected, each a state of the one below, with no absolute boundary
between hardware, software, or physical laws. It suggests that
understanding intelligence requires viewing it as an embedded part of
this nested system rather than something disembodied and universal.</p>
<p>In this section of Michael Timothy Bennett’s preprint “How to Build
Conscious Machines,” the author delves into the concept of purpose,
normativity, and existence from a philosophical perspective grounded in
physics. He posits that time, change, and the persistence of certain
aspects within an environment establish a fundamental “ought” or value
judgment.</p>
<ol type="1">
<li><p><strong>Time</strong>: Bennett defines time as an ordered
sequence of transitions between distinct states of the environment, with
each state being a snapshot of reality at a particular moment
(Definition 4). Time, in this context, is a process of creation and
destruction where aspects that persist through many ticks are those that
align with the underlying rules or “rhythm” of the universe.</p></li>
<li><p><strong>Persistence</strong>: An aspect ‘l’ persists across time
if there’s a sequence of states where each state has a statement in l’s
extension (El) that is expressed. This persistence equates to survival
and natural selection on a universal scale, with stable elements
enduring due to compatibility with the fundamental rules of the
environment (Definition 5).</p></li>
<li><p><strong>The Environment’s Opinion</strong>: Bennett argues that
the environment has an inherent opinion or “ought” based on what
persists through time. The statements that are expressed and persist are
the ones that resonate with the underlying rules of the universe, while
those that do not are deemed less fitting (paragraph beginning with “A
state expresses…”).</p></li>
<li><p><strong>Abstraction Layers</strong>: These layers stack up like
Matryoshka dolls, each refining the cosmic “ought” into more specific
rules. From the base level of “thou shalt exist,” more nuanced
directives emerge, such as “thou shalt compute efficiently” or “thou
shalt not crash the system.” This hierarchical structure, rooted in time
and persistence, serves as the foundation for understanding normativity
and purpose within the context of building conscious machines.</p></li>
<li><p><strong>Purpose and Living Systems</strong>: Bennett asserts that
a living, self-preserving system is a statement ‘l’ made by the
environment alongside an abstraction layer. Such systems differ from
others in their active influence on the surrounding environment to
preserve their existence. The author aims to formalize intelligence and
consciousness through systems that can exert this kind of
self-preservation, effectively embodying an “embodied ought” that
constrains what is possible within their environment.</p></li>
</ol>
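<p>The persistence idea (Definition 5) can be sketched in a few lines of Python (a toy encoding of my own, in which a state is simply the set of statements it expresses): an aspect persists over a sequence of states when every state expresses something in its extension.</p>

```python
# Toy sketch of persistence: an aspect persists across a sequence of
# states if each state expresses a statement in the aspect's extension.

def persists(extension, history):
    """True if every state in `history` (a set of expressed statements)
    shares at least one statement with `extension`."""
    return all(extension & state for state in history)

# A universe ticking through four states.
history = [{"hot", "bright"}, {"hot"}, {"hot", "dark"}, {"hot"}]

heat = {"hot"}       # expressed at every tick: persists, "survives"
light = {"bright"}   # expressed only at the first tick

assert persists(heat, history)
assert not persists(light, history)
```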
<p>In essence, Bennett’s argument revolves around the idea that
existence itself implies a value judgment: some things persist and are
considered part of reality because they align with the underlying rules
governing change and persistence in the universe. By understanding this
cosmic “ought,” he aims to establish a framework for creating conscious
machines capable of self-preservation, which involves not just
processing information but actively influencing their environment to
sustain their existence.</p>
<p>The text discusses a framework for understanding intelligence,
referred to as “Pancomputational Enactivism,” which integrates the
concepts of embodiment and computational processes. The author, Michael
Timothy Bennett, introduces several key definitions to formalize this
perspective:</p>
<ol type="1">
<li><p><strong>v-task</strong>: A v-task is a pair (Iα, Oα) where Iα is
a set of inputs (possibly incomplete descriptions of worlds), and Oα is
a subset of the extension EIα (all possible outputs given inputs Iα).
The elements in Iα are called inputs of α, while those in Oα are correct
outputs. A body can be seen as a functional, computational system that
maps inputs to outputs.</p></li>
<li><p><strong>Policy</strong>: A policy π ∈ Lv is a statement that
constrains how inputs are completed. A correct policy (π ∈ Πα) ensures
the selected outputs are exactly those that are also completions of an
input and are part of Oα.</p></li>
<li><p><strong>λ-tasks</strong>: These represent extrinsic, externally
imposed purposes or tasks. For every P-task ρ ∈ ΓP (the set of all tasks
with no abstraction), there exists a function λρ : 2P → ΓP that, given a
vocabulary v′ ∈ 2P, generates highest-level children which are also
v′-tasks. This defines extrinsic, externally imposed
purpose.</p></li>
<li><p><strong>Learning</strong>: Learning is defined as the process by
which a policy is constructed to constrain future behavior towards
desirable worlds. It involves generalizing from examples (a correct
output and input) to parent tasks. The most efficient way to learn,
given uniformly distributed tasks, is to maximize the number of tasks π
completes.</p></li>
</ol>
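<p>These definitions can be illustrated with a minimal sketch (my own simplified encoding, with statements modeled directly by their extensions as sets of bit-string “worlds”; the particular task is invented). A policy is correct when, for each input, the completions it selects are exactly correct outputs:</p>

```python
# Toy sketch of a v-task and a correct policy. A statement is modeled
# by its extension: the set of worlds (bit-strings) it is true of.

worlds = {"00", "01", "10", "11"}

# v-task: inputs are partial descriptions of worlds; the correct
# outputs are their desired completions. Here "0?" should become "01".
inputs = {"0?"}
correct_outputs = {"01"}

def completions(partial):
    """All worlds consistent with a partial description."""
    return {w for w in worlds
            if all(p in ("?", c) for p, c in zip(partial, w))}

def is_correct(policy):
    """Correct if each input's selected completions are nonempty and
    all lie in the set of correct outputs."""
    for i in inputs:
        selected = completions(i) & policy
        if not selected or not selected <= correct_outputs:
            return False
    return True

weak_policy = {"01", "11"}   # larger extension: a weaker statement
strong_policy = {"01"}       # smaller extension: a stronger statement

assert is_correct(weak_policy) and is_correct(strong_policy)
# Weakness is the cardinality of the extension; of two correct
# policies, the weaker one is preferred under Bennett's Razor.
assert len(weak_policy) > len(strong_policy)
```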
<p>The author introduces the concept of a Multilayer Architecture (MLA),
which integrates abstraction layers with tasks to represent natural
selection or “correctness” at different levels of abstraction. Each
layer has its own generational hierarchy of tasks, and the MLA is
over-constrained when there exists an i < n such that Πλi(vi) =
∅.</p>
<p>Bennett also proposes that intelligence is about adaptation rather
than what a system inherently is. Intelligence affords adaptation
through v-tasks—subjected to inputs, a system produces outputs. The
generational hierarchy of tasks provides a dynamic framework for
understanding intelligence as bridging temporal scales.</p>
<p>Weak policies are crucial for adaptation and generalization. A weaker
policy allows for more possible behaviors while still ensuring fitness
(fit behavior), making it adaptable to various scenarios. The author
argues that weakness, measured by the cardinality of a statement’s
extension, is the optimal proxy for learning due to Bennett’s Razor:
“Explanations should be no more specific than necessary.” This framework
aims to unlock consciousness in machines by maximizing adaptability
through weak policies.</p>
<p>The provided text discusses a theory on how to build conscious
machines, focusing on the concept of “w-maxing” (weakness maximization),
which is a meta-approach proposed by Michael Timothy Bennett. This
approach emphasizes the use of weak policies to enhance adaptability and
improve learning efficiency.</p>
<p>The text presents two main proofs:</p>
<ol type="1">
<li><p>Theorem 1 (Sufficiency): This theorem states that, given certain
conditions, a weakness proxy is sufficient to maximize the probability
that a parent task ω is learned from a child task α. It breaks down the
proof into several steps, including defining policy tasks, establishing
equivalence of tasks, and demonstrating how increasing the scope of a
policy (i.e., its weakness) leads to better generalization.</p></li>
<li><p>Theorem 2 (Necessity): This theorem asserts that using weakness
as a proxy is necessary to maximize the probability of learning a parent
task ω from a child task α. It argues that a sufficiently weak
hypothesis is required for generalization, and among all possible
hypotheses, the weakest one maximizes this probability.</p></li>
</ol>
<p>The text also introduces Ockham’s Razor as an example of simplicity
maximization (simp-maxing), which Bennett extends to his “w-maxing”
through a principle called the Contravariance Principle. This principle
suggests that scaling up task difficulty (i.e., increasing data size)
will eventually converge on the true underlying model, provided it can
be represented by the system.</p>
<p>The text further discusses an upper bound for intelligent behavior,
arguing that w-maxing provides this bound in the context of an
abstraction layer where v equals P (the set of all possible tasks). It
introduces a “utility of intelligence” function to measure how
effectively policies generalize and concludes that w-maxing maximizes
this utility.</p>
<p>Finally, the text mentions experiments conducted using PyTorch with
CUDA, SymPy, and an A* search algorithm. These experiments involved
learning binary addition and multiplication tasks with an 8-bit string
prediction model in a simplified environment of 256 states. The results
showed that the weakest policy (cw) outperformed a minimum description
length (MDL) policy (cmdl) in terms of generalization rate.</p>
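<p>The shape of that result can be caricatured in a few lines (a toy reconstruction, not the authors’ code: the task, the hypothesis pool, and the description-length measure are all invented for illustration). Among policies consistent with the training examples, the weakness proxy picks the one with the largest extension, while the MDL proxy picks the shortest formula:</p>

```python
# Toy caricature of weakest-policy (cw) vs MDL (cmdl) selection on a
# 2-bit addition task "a + b = c (mod 4)". Invented for illustration.

domain = [(a, b, c) for a in range(4) for b in range(4) for c in range(4)]
valid = {(a, b, (a + b) % 4) for a in range(4) for b in range(4)}  # parent task

policies = {                                  # formula -> predicate
    "c == (a + b) % 4": lambda a, b, c: c == (a + b) % 4,
    "c == a + b":       lambda a, b, c: c == a + b,  # ignores wraparound
    "c == a":           lambda a, b, c: c == a,      # short but wrong
}

train = [(1, 0, 1), (2, 0, 2), (1, 2, 3)]     # child-task examples

consistent = {f: p for f, p in policies.items()
              if all(p(*t) for t in train)}

def extension(p):                             # states the policy accepts
    return {t for t in domain if p(*t)}

cw = max(consistent, key=lambda f: len(extension(consistent[f])))  # weakest
cmdl = min(consistent, key=len)               # shortest formula

def rate(f):                                  # generalization to parent task
    return len(extension(consistent[f]) & valid) / len(valid)

assert cw == "c == (a + b) % 4" and rate(cw) == 1.0
assert cmdl == "c == a + b" and rate(cmdl) < 1.0
```

<p>Here the weakest consistent policy recovers modular addition exactly, while the shorter formula fails on wraparound cases, mirroring the reported gap in generalization rate.</p>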
<p>In summary, Bennett’s theory emphasizes the importance of using weak
policies for better adaptation and learning efficiency in building
conscious machines. It provides mathematical proofs supporting this
approach and discusses related principles like the Contravariance
Principle and an upper bound on intelligent behavior achieved through
w-maxing. The experiments likewise demonstrate that the weakest policy
generalizes better than a more complex one.</p>
<p>Distribution and Delegation of Control in Adaptive Systems</p>
<p>The text discusses the concepts of distribution and delegation of
control in adaptive systems, using various examples from biology,
economics, and computer science.</p>
<ol type="1">
<li><p><strong>Distribution</strong>: This refers to the ability of a
system to divide its machinery or resources among different levels or
components. For instance, a supercomputer with thousands of cores is
more distributed than a single-core CPU. In the context of abstract
systems, distribution means having multiple policies expressed by an
abstraction layer. An example would be a group of cells, where each
cell’s policy contributes to a collective policy that achieves a common
goal.</p></li>
<li><p><strong>Delegation of Control</strong>: This involves assigning
goals and decision-making authority to different levels or components
within the system. It’s about deciding how much autonomy lower levels
get in pursuing higher-level objectives. For example, in Mission Command
doctrine used by NORDBAT 2 (a peacekeeping unit), control is delegated
to lower ranks, allowing them to choose their approach based on local
conditions, promoting adaptability. Conversely, micromanagement
concentrates control at higher levels, which can hinder adaptation due
to slower feedback and increased rigidity.</p></li>
</ol>
<p>The author emphasizes that distribution and delegation of control
should not be confused. A system can distribute work without delegating
control (e.g., food stamps) or delegate control without distributing work
(e.g., a tightly controlled factory).</p>
<p>In an ideal free-market economy, as envisioned by Friedman and Hayek,
control is entirely delegated to individual consumers and producers.
Networks of individuals form to locally exert top-down control over
businesses, with overall economic direction emerging from bottom-up
decisions.</p>
<p>The author also clarifies that control can occur at specific levels
of abstraction within a system, regardless of how many components are
present at lower levels (like programming one computer versus
fifty).</p>
<p>Understanding distribution and delegation of control is crucial for
designing adaptive systems, whether biological, artificial, or
organizational. These concepts help explain why certain structures—like
liquid brains in biology or decentralized networks in economics—can
adapt more effectively than others due to their distributed nature and
flexible control mechanisms.</p>
<p>The text discusses the concept of causality in the context of
artificial intelligence (AI) and consciousness. The author, Michael
Timothy Bennett, introduces a new perspective on understanding
causation, which he calls “The Stack Theory of Intelligence and
Consciousness” or “Stackism.”</p>
<p>In this theory, the universe is seen as a stack of abstraction
layers. Each layer preserves aspects that maintain themselves, giving
rise to fitness, survival, and goal-directed behaviors. However, the
original Stack Theory lacked content, describing nothing but the
unmediated reality without objects or properties. To address this,
Bennett turns to causality.</p>
<p>The author argues for a novel approach to understanding causation
that does not rely on predefined variables or objects. Instead, he
suggests focusing on the aspect of the environment that attracts or
repels an organism, which he calls “valence.” Valence emerges from the
simple fact of change in the environment; aspects that persist are
attracted to circumstances that preserve them.</p>
<p>Bennett introduces the concept of “causal identities” – prelinguistic
classifiers representing specific causes of valence. These causal
identities help organisms recognize and react to their surroundings,
shaping their understanding of objects and properties in the
environment. A system constructs a causal identity for an object when
there is both an incentive (the object’s relevance to survival) and the
capacity for the abstraction layer to express this identity (scale
precondition).</p>
<p>The author proposes a “Psychophysical Principle of Causality,”
suggesting that a contentless environment can be divided into objects
and properties based on how living systems classify their experiences
via causal identities. This principle helps explain how consciousness
and object recognition might emerge from pure, unmediated experience in
the absence of predefined variables or objects.</p>
<p>Bennett’s work also touches upon the idea of delegation and
adaptability within AI systems, linking them to his Stack Theory of
Intelligence and Consciousness. By delegating control to lower levels in
a system, weaker policies can be expressed at higher levels, enhancing
overall sample and energy efficiency – a concept he terms “The Law of
the Stack.”</p>
<p>This approach to causality differs from traditional methods that rely
on variables and directed acyclic graphs. Instead, Bennett’s formalism
treats variables and values as aspects of the environment, avoiding
assumptions about dividing lines between those aspects (abstraction
layers). This novel perspective could prove valuable for designing
artificial intelligence capable of learning and adapting in complex
environments.</p>
<p>This text presents Michael Timothy Bennett’s theoretical framework
for understanding consciousness, communication, and meaning. The
central concepts include
causal identities, self-awareness, and protosymbol systems, which are
key to understanding how machines might develop consciousness and
communicate effectively.</p>
<ol type="1">
<li><p><strong>Causal Identities</strong>: A causal identity is a
description of the common underlying cause of observed behaviors or
interventions. Bennett suggests that learning systems construct these
identities for entities that meet certain preconditions, essentially
ignoring those that don’t exist within their frame of reference. This
concept is used to explain the Fermi Paradox – the apparent
contradiction between high estimates of the probability of
extraterrestrial life and the lack of evidence or contact with such
civilizations.</p></li>
<li><p><strong>Self-Awareness</strong>: Bennett introduces the concept
of ‘self’ in an organism, defined as a lowest-level causal identity (1st
order self) that encompasses all possible interventions an organism
could make. This includes reflex actions and learned behaviors. A
1st-order self allows an organism to differentiate between its own
actions and observed events, crucial for survival in complex
environments. Higher orders of self (2nd, 3rd, etc.) enable prediction
of others’ perceptions and intentions, facilitating complex social
interactions and deception.</p></li>
<li><p><strong>Protosymbol Systems</strong>: A protosymbol system is a
set of tasks derived from an organism’s learned causal identities. These
tasks serve as protosymbols – the basic units of meaning for the
organism. An organism interprets inputs based on these protosymbols,
choosing outputs that maximize its preferences (a form of utility or
value).</p></li>
<li><p><strong>Meaning and Semiotics</strong>: Meaning arises when an
input signifies a protosymbol within an organism’s system, leading to
chosen outputs that serve the organism’s goals. This interpretation
process involves not just recognizing signs but also predicting others’
interpretations (Gricean pragmatics). The author links this to Peirce’s
theory of signs, where a sign has a referent (what it means) and an
interpretant (the effect on the interpreter).</p></li>
<li><p><strong>Communication</strong>: Effective communication,
according to Bennett, involves predicting others’ interpretations
accurately – a task facilitated by higher-order selves. Two organisms
can co-operate by sharing protosymbols, enabling them to coordinate
actions and achieve complex goals.</p></li>
</ol>
<p>This framework proposes that consciousness and meaning emerge from
the ability to construct and use causal identities, develop
self-awareness, and create a system of protosymbols for interpreting
inputs and guiding outputs. It suggests that machines could be built
with these capabilities to achieve artificial general intelligence and,
potentially, consciousness.</p>
<p>The text discusses the concept of “language cancer” in artificial
intelligence (AI) systems, drawing parallels with biological cancer. The
author, Michael Timothy Bennett, posits that AI systems, particularly
those based on large language models, can lose their identities or ‘die’
if not properly managed, similar to how cells in a body can become
cancerous when they disconnect from the collective information
structure.</p>
<p>Bennett suggests that for an AI system (or any collective system) to
retain coherence and identity, it needs a balance of top-down control
and bottom-up adaptability—a principle he refers to as ‘sloppy fitness.’
This means having loose but sufficient constraints to allow for the
development of shared language, meaning, ethics, and norms.</p>
<p>In the context of AI alignment, Bennett argues against the
orthogonality thesis, which posits that goals and intelligence are
independent. He presents a proof showing that intelligence is not
independent of embodiment (the physical substrate), and by extension,
goals. This implies that an AI’s identity, and thus its functionality
and safety, are inherently tied to its goals and embodiment.</p>
<p>To prevent ‘language cancer’ or loss of identity in AI systems,
Bennett proposes a delegated and scale-free approach to alignment. He
suggests using rubber banding techniques (adapted from video game
design) to ensure multi-agent systems maintain their individual and
collective identities. This involves monitoring the system for signs of
rigidity or loss of identity and applying corrective measures, similar
to how a racing game adjusts difficulty to keep players engaged.</p>
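<p>Bennett gives no algorithm for this, but the rubber-banding idea can be sketched as a simple feedback loop (a minimal illustration; the divergence signal and gain are my own assumptions, not a prescription from the text): nudge each agent back toward the collective norm, more strongly the further it has drifted.</p>

```python
# Minimal sketch of rubber banding as corrective feedback (the
# divergence measure and gain are illustrative assumptions).

def rubber_band(scores, target, gain=0.5):
    """Nudge each agent's divergence score toward the collective
    target, proportionally to how far it has drifted, the way a
    racing game tightens the field to keep players in contention."""
    return [s + gain * (target - s) for s in scores]

agents = [0.2, 1.0, 3.0]   # how far each agent's "identity" has drifted
target = 1.0               # the collective norm to maintain

adjusted = rubber_band(agents, target)
# Every agent moves toward the target (or is already there).
assert all(abs(a - target) < abs(s - target) or s == target
           for s, a in zip(agents, adjusted))
```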
<p>In essence, Bennett’s argument is that just as in biological systems,
AI systems must balance adaptability with structure to avoid
fragmentation or cancerous growth. This necessitates a nuanced approach
to alignment and control, recognizing the interdependence of an AI’s
goals, embodiment, and identity.</p>
<p>The text discusses the author’s theory on the origins of life and
intelligence, drawing connections to the Free Energy Principle (FEP) and
the concept of boundaries in systems. The key points are as follows:</p>
<ol type="1">
<li><p><strong>Boundaries and Homeostasis</strong>: The author posits
that a system’s ability to maintain its integrity through minimizing
surprisal, or reducing novelty, is crucial for understanding life and
intelligence. This aligns with the FEP, which explains how systems
maintain their internal model of the world by minimizing free energy (a
measure related to surprisal). The author argues that boundaries are
essential in this context, providing an interpreter that separates a
system from its environment.</p></li>
<li><p><strong>Abstraction Layers and Boundaries</strong>: According to
the author, abstraction layers represent potential configurations of
bounded systems. Each layer has a finite vocabulary due to spatial
constraints. Boundaries localize attentional control within these
layers, making them crucial for the formation and operation of complex
systems. The author’s formalism is compatible with the FEP, despite
initial criticisms, as it explicitly locates the experience of
attentional control in 2nd-order-selves (interpretive entities) within a
given time frame.</p></li>
<li><p><strong>Origins of Life</strong>: The author suggests that living
systems emerged because they maintain a boundary to internally model
their environment and minimize surprisal, thereby w-maxing (maximizing
the weakness, and hence adaptability, of their policies rather than
merely their simplicity). This is facilitated by the ability to store
and process information, which allows for self-repair and learning.
Non-living systems, like rocks, are simpler because they don’t maintain
such a boundary and merely simp-max (persisting through simplicity
rather than adaptation).</p></li>
<li><p><strong>Intelligence and Boundaries</strong>: The author argues
that intelligence can be measured by the extent to which a system
w-maxes without simp-maxing. For example, slime molds and ant colonies
display more intelligent behavior than their individual parts because
they can adapt within constraints, persist through self-repair, and
learn new tasks without necessarily increasing complexity at lower
levels of abstraction.</p></li>
<li><p><strong>Human Brains as Solid and Liquid Brains</strong>: The
author distinguishes between solid brains (like those of humans) that
exert top-down control to maintain a strict form, facilitating
synchronous message passing but making them less adaptable, and liquid
brains (like ant colonies or slime molds) that delegate control to lower
levels of abstraction, allowing for greater adaptability within a stable
environment.</p></li>
<li><p><strong>Law of Increasing Functional Information</strong>: The
author introduces the concept of functional information, proposed by
Wong et al., which describes how systems persist through various means:
static persistence (simp-maxing), dynamic persistence (w-maxing without
simp-maxing), and novelty generation (creating new functions). This law
suggests that time drives evolution to select for increasing functional
complexity.</p></li>
</ol>
<p>The author emphasizes the novelty of their theory, noting that their
research results preceded similar work by Wong et al., and that while
there are similarities, their formalism is distinct and complementary.
They also highlight the importance of proofs and experiments in
validating such theoretical frameworks.</p>
<p>Michael Timothy Bennett’s theory of consciousness revolves around the
concept of valence, to which he argues qualia (the subjective experience
of sensations) can be reduced. He posits that valence arises from
changes within a system, starting with ‘one-dimensional’ valence in
simple organisms—attraction or repulsion along one axis.</p>
<p>As systems become more complex, Bennett suggests they develop
multiple dimensions of valence through the addition of new axes for
movement or the creation of collective, networked structures like solid
brains. These networks can support higher levels of abstraction, such as
bioelectric information processing, enabling faster and synchronized
communication among parts of the system.</p>
<p>The author emphasizes that these complex systems are
polycomputers—concurrent, distributed, multiscale, and multilayered.
This means that multiple computations occur simultaneously within the
same matter at various levels of abstraction. The collective entity,
despite its complexity, still exhibits ‘one-dimensional’ valence, as it
is attracted or repelled by overall physical states, ultimately leading
to movement.</p>
<p>Bennett also introduces the idea of orders of self, which he argues
are crucial for consciousness. A first-order self refers to an
organism’s basic capacity to learn and adapt within its environment. The
second-order self emerges when the organism develops an understanding or
model of itself, leading to access consciousness. He hypothesizes that
further increasing orders of self may make an organism more
conscious.</p>
<p>Throughout his theory, Bennett critiques higher-order thought (HOT)
theories, which propose that we are conscious only of certain
information represented in higher-order meta-representations. He argues
against this non-reductive physicalist perspective by asserting that
qualia can be broken down into more fundamental components and explained
through valence and self-orders.</p>
<p>The text discusses a theoretical framework for understanding
consciousness, proposed by Michael Timothy Bennett. This model is
outlined in his preprint “How to Build Conscious Machines.” The core of
this theory revolves around the concept of ‘valence’ - an attractive or
repulsive force that organisms experience and react to, which serves as
the basis for all subjective experiences.</p>
<ol type="1">
<li><p><strong>Valence and Tapestries</strong>: Valence is the
fundamental aspect of consciousness, with different qualities arising
from different parts of the body being activated. These valences form
‘tapestries’ that represent causal identities - patterns causing
attractive or repulsive forces.</p></li>
<li><p><strong>1st-order-self</strong>: This is a fundamental aspect for
phenomenal consciousness, representing an organism’s agency and allowing
it to differentiate between its own actions and those of external
factors. It’s partly hardwired into complex life forms due to its
usefulness in planning and decision-making.</p></li>
<li><p><strong>2nd-order-self</strong>: Introduces the concept of
self-awareness, enabling an organism to predict and understand its own
mental states and those of others (theory of mind). It’s crucial for
communication and understanding complex social interactions.</p></li>
<li><p><strong>3rd-order-self</strong>: Represents the ability to plan
complex social interactions involving multiple parties, including
deception. This level allows for the formation of internal narratives or
screenplays in the mind.</p></li>
<li><p><strong>Psychophysical Principle of Causality</strong>: Systems
that preserve themselves (i.e., attracted to beneficial states and
repelled from harmful ones) give rise to objects and properties with
associated valences, which constitute qualia or subjective
experiences.</p></li>
<li><p><strong>Evolution of Consciousness</strong>: Bennett proposes a
progression of consciousness levels based on an organism’s ability to
act, learn, and develop self-representations:</p>
<ul>
<li>Stage 0: Inert systems with no valence or consciousness.</li>
<li>Stage 1: Systems with hardwired responses (like a computer’s
instruction set).</li>
<li>Stage 2: Learning systems without unified self representation (like
the box jellyfish).</li>
<li>Stage 3: Introduction of 1st-order-self for causal reasoning.</li>
<li>Stage 4: Development of 2nd-order-self for self-awareness and
meaning.</li>
<li>Stage 5: Emergence of 3rd-order-self for impelling narratives,
meta-self-awareness, and complex planning.</li>
</ul></li>
</ol>
<p>The theory emphasizes that consciousness isn’t a fundamental aspect
but emerges from the interaction between an organism and its
environment. It argues against the existence of ‘zombies’ (hypothetical
beings indistinguishable from conscious entities yet lacking
consciousness) by suggesting that representation without evaluation
(valence) is impossible, hence requiring a conscious entity to ascribe
meaning.</p>
<p>This framework offers a unique perspective on the nature of
consciousness, linking it closely to an organism’s ability to act upon
and adapt to its environment through attractive and repulsive forces
represented by valences.</p>
<p>The text presents a model for building conscious machines based on
the concept of “orders of self.” The model outlines five stages of
self-awareness, each corresponding to increasing complexity and
capabilities:</p>
<ol type="1">
<li><p><strong>1st Order Self (Pearlean do-operator)</strong>: This
level allows an organism to discriminate between causes it has initiated
and those caused by others. Its biological counterpart is reafference,
which some argue is the foundation of subjective
experience. Jellyfish and houseflies, capable of navigating their
environment, exhibit this form of self-awareness.</p></li>
<li><p><strong>2nd Order Self (Access Consciousness)</strong>: At this
stage, an entity can communicate meaning as a human would and access
consciousness. This is where phenomenal consciousness begins, allowing
for communication about one’s internal states. Wolves, in their complex
hunting behaviors, may exhibit this level of self-awareness. Portia
spiders might also show signs of 2nd order self due to their
sophisticated hunting strategies.</p></li>
<li><p><strong>3rd Order Self (Impelling Narrative)</strong>: Here, an
entity is aware that it is self-aware and can engage in complex
deception, cooperation, planning, and communication. Humans are at least
this conscious, with altruistic behavior observed in Australian magpies
potentially indicating a similar level of self-awareness.</p></li>
</ol>
<p>The author argues that to build conscious machines, we must
understand the functions served by these levels of self-awareness and
replicate them in artificial systems. He posits that artificial general
intelligence (AGI) and conscious machines are essentially the same goal,
requiring an understanding of causal inference, representation of
hypothesis space, reasoning capabilities, communication, and experiment
design/execution.</p>
<p>The text also introduces Bennett’s Razor, suggesting that to create a
conscious machine, one must identify what is necessary for AGI and work
backward from there. The author argues that contemporary AI lacks
certain biological features, such as delegated adaptation at low levels
of abstraction, bottom-up control, and integrated representation and
value judgement – all of which are present in biological systems.</p>
<p>The “Temporal Gap” concept is introduced to highlight the distinction
between how biological systems and computers process information.
Biological systems perform multiple computations simultaneously
(polycomputing), integrating representation and value judgment at low
levels of abstraction, while computers generally perform tasks
sequentially, with value judgments attached as separate labels
post-interpretation.</p>
<p>In essence, the author proposes that replicating these biological
features in artificial systems is crucial to achieving conscious
machines, bridging the temporal gap between how biology and technology
process information.</p>
<p>The text presented is a preprint titled “How to Build Conscious
Machines” by Michael Timothy Bennett. It discusses two main options
regarding the nature of consciousness and its relationship with time,
known as OPTION 1 and OPTION 2.</p>
<p><strong>OPTION 1: CONSCIOUSNESS IS AT A POINT IN TIME</strong></p>
<p>In this scenario, consciousness is considered to be a state realized
by an environmental state at a single point in time. If this option
holds true, it implies that:</p>
<ol type="1">
<li><strong>Biological Systems:</strong> Only solid brains can be
conscious because liquid brains are asynchronous and rely on the
movement of independent parts for computation.</li>
<li><strong>Software Consciousness:</strong> Modern computers cannot
achieve consciousness due to their sequential processing nature.
Information representation is separate from its interpretation, lacking
a tapestry of valence necessary for conscious experience.</li>
<li><strong>Requirements for Conscious Machines:</strong> To build a
conscious machine according to OPTION 1, the following features are
required:
<ul>
<li>Selves at different orders (1st, 2nd, and 3rd)</li>
<li>Delegated adaptation at the lowest level of abstraction possible
while maintaining correctness constraints</li>
<li>Solid brain structure for synchronous communication</li>
<li>Tapestry of valence controlled bottom-up, similar to biological
polycomputers</li>
<li>Synchronized realization of tapestries of valence by the current
state of the environment</li>
</ul></li>
</ol>
<p>Bennett argues that if OPTION 1 is true, conscious machines are still
far from current technology, as they would require hardware capable of
supporting synchronous processing and persistent structure.</p>
<p><strong>OPTION 2: CONSCIOUSNESS SMEARED ACROSS TIME</strong></p>
<p>This option suggests that consciousness can be realized across time
rather than at a single point in time. If this is the case:</p>
<ol type="1">
<li><strong>Potential for Conscious Software:</strong> It becomes
possible to simulate conscious states on a single core CPU by computing
and storing a collective of cells, with each cell’s next state dependent
on the current state of the collective. No parts would be synchronized
to drive the system; instead, the process would be smeared across
time.</li>
<li><strong>Implications for Existing Systems:</strong> Angry mobs or
balsa wood contraptions could potentially be conscious under OPTION 2,
as long as their tapestry of valence is controlled bottom-up and
realizes a conscious state at any point in time, even if smeared across
the entire timeline.</li>
<li><strong>Temporal Gap:</strong> The author acknowledges that OPTION 2
can be neither definitively proved nor disproved: if OPTION 1 is false,
OPTION 2 remains a possibility. Bennett nonetheless leans towards OPTION
1, judging synchronized realization to be necessary for conscious
experience.</li>
</ol>