<h3 id="conscious-machines">Conscious Machines</h3>
<p>Michael Timothy Bennett’s doctoral thesis, titled “How to Build
Conscious Machines,” proposes a comprehensive framework for
understanding consciousness and its relationship with intelligence. The
work is divided into several sections and chapters that cover various
aspects of philosophy, computer science, neuroscience, and artificial
general intelligence (AGI).</p>
<ol type="1">
<li><p><strong>Foreword and Chapter Summaries</strong>: Bennett explains
the progression of his research throughout the PhD and how it culminates
in this thesis. The work is presented as an exploration into building
conscious machines, understanding the nature of consciousness, and
addressing fundamental questions about life, intelligence, and
complexity.</p></li>
<li><p><strong>Literature Reviews</strong>: Chapters II and III provide
a survey of philosophy and neuroscience (Chapter II) and AGI (Chapter
III). These sections delve into key concepts like the mind-body problem,
functionalism, theories of consciousness, self-organization, free energy
principle, enactivism, epistemology, semiotics, structuralism,
post-structuralism, and meaning.</p></li>
<li><p><strong>What is AGI?</strong> (Chapter IV): Bennett explains what
constitutes artificial general intelligence by framing it as an
“artificial scientist,” drawing inspiration from Richard Sutton’s
‘Bitter Lesson.’ He discusses various optimization approaches like
search, approximation, and hybrids.</p></li>
<li><p><strong>Turtles All the Way Down</strong> (Chapter V): This
chapter explores embodiment by arguing that each body is an abstraction
layer. It presents a formal language of declarative programs based on
Stack Theory, allowing for the understanding of bodies and the universe
as speaking embodied formal languages.</p></li>
<li><p><strong>Master, What Is My Purpose?</strong> (Chapter VI):
Bennett discusses purpose through formal definitions of embodied tasks,
inference, and stacks. It establishes how goal-directed behavior emerges
from time, change, and natural selection, leading to the concept of the
“cosmic ought.”</p></li>
<li><p><strong>Weak</strong> (Chapter VII): This section introduces
w-maxing as an optimal learning meta-approach that maximizes weak
constraints on function. Bennett proves that w-maxing is superior to
simp-maxing in generalization and intelligence, while also exploring how
biological systems delegate adaptation more effectively than artificial
intelligence.</p></li>
<li><p><strong>Stackism</strong> (Chapter VIII): This chapter
investigates complexity by demonstrating why simplicity of form has a
correlation with function. It argues that the illusion of complexity
arises due to abstraction layers, and natural selection drives
biological systems to optimize weaker constraints using simpler
forms.</p></li>
<li><p><strong>Let’s Get Psychophysical</strong> (Chapter IX): Bennett
formalizes causality via the Psychophysical Principle of Causality,
explaining how systems learn cause-and-effect relationships based on
valence. He introduces self-classifying policies called
causal-identities and discusses the necessity for incentive and scale to
construct these identities.</p></li>
<li><p><strong>Language Cancer</strong> (Chapter X): Bennett explores
language and its connection to cancer, refuting the Orthogonality
Thesis. He demonstrates how 2ND-order selves are crucial for
communication according to Gricean pragmatics, and explains normativity
in relation to social predation.</p></li>
<li><p><strong>Why Is Anything Alive?</strong> (Chapter XI): Bennett
discusses the emergence of life by presenting a formalism that ties
together Pancomputational Enactivism, complexity theory, and the Fermi
Paradox, arguing that stable environments allow for consciousness to
develop due to the correlation between adaptability and weak
constraints.</p></li>
<li><p><strong>Why Is Anything Conscious?</strong> (Chapter XII): This
chapter addresses the hard problem of consciousness by presenting a
theory of how lower-order states give rise to higher-order thought
through causal identities grounded in valence. Bennett introduces 1ST
and 2ND-order selves, arguing that phenomenal consciousness begins with
a 1ST-order self, while communication requires a 2ND-order
self.</p></li>
<li><p><strong>How to Build Conscious Machines</strong> (Chapter XIII):
The final chapter outlines the features necessary for constructing
conscious machines and proposes an unresolved problem called “The
Temporal Gap.” It also discusses strategies for engineering conscious
machines or avoiding their creation altogether.</p></li>
</ol>
<p>Throughout the thesis, Bennett integrates various results from his
published papers, which explore topics like optimal learning,
abstraction layers, computational dualism, complexity, language,
meaning, and causality. The work aims to provide a foundation for
understanding how conscious machines can be built by reconciling
philosophical, neuroscientific, and computer science perspectives on
consciousness and intelligence.</p>
<p>The text presents a philosophical exploration of artificial general
intelligence (AGI) and the concept of computational dualism, which
posits that software or ‘mind’ is separate from hardware or ‘body’. The
author argues against this view, asserting that both are interconnected
aspects of a larger system.</p>
<p>The author introduces “Stack Theory,” which holds that everything
consists of nested abstraction layers, from software down through
hardware and ultimately to the fundamental laws of physics. In this
framework, hardware is not a sacred boundary where abstraction stops,
but just another layer in the hierarchy.</p>
<p>The environment, according to Stack Theory, is defined by a set of
states (Φ), with each state representing a particular configuration or
difference from other states. The power set 2^Φ (P) represents all
possible subsets of states, which are called declarative programs. A
truth or fact about a state ϕ is any program (f) containing ϕ, meaning f
is true for that state.</p>
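<p>A minimal sketch may make this concrete. The Python fragment below is
illustrative only: the state labels and the particular program are
invented for this example rather than drawn from the thesis, but the
relationships it exercises follow the definitions above (programs as
subsets of Φ, and a fact about a state ϕ being any program that contains
ϕ).</p>
<pre><code># Illustrative sketch only. Phi is a tiny invented state space; every
# subset of Phi counts as a "declarative program".
from itertools import combinations

phi = {"off", "dim", "bright"}              # the state space (invented labels)

def powerset(states):
    """All subsets of the state space, i.e. the declarative programs 2^Phi."""
    items = list(states)
    return [set(c) for r in range(len(items) + 1)
            for c in combinations(items, r)]

programs = powerset(phi)                    # P = 2^Phi

def is_true(program, state):
    """A program f is true for a state phi iff phi is a member of f."""
    return state in program

light_is_on = {"dim", "bright"}             # one particular program
print(is_true(light_is_on, "dim"))          # True: the fact holds in state "dim"
print(is_true(light_is_on, "off"))          # False: it does not hold in "off"
print(len(programs))                        # 8, the size of 2^Phi for 3 states
</code></pre>
<p>The light-switch flavour of the labels anticipates the toy examples
mentioned below; only the set-theoretic relationships are intended to
carry over.</p>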
<p>The environment encodes everything through its state space, with each
state representing an aspect of the environment as a collection of facts
holding true for that state. This formal structure aligns with
pancomputationalism’s view that all physical systems are
computational.</p>
<p>Toy examples illustrate the framework’s flexibility across various
domains, from digital systems (light switch) to biological systems (cell
metabolism), and even reinforcement learning in AI. The author
emphasizes the importance of embodiment—the idea that every physical
system influences its surroundings—which is often overlooked in computer
science.</p>
<p>Finally, the author discusses the “Layer Cake,” a method for
formalizing all of these aspects together within an abstraction layer v
that contains the more specific aspects. This involves defining the
abstraction layer Lv as everything realizable within v, and the
extension Ex of a statement x as the set of statements whose existence
implies x.</p>
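<p>A small sketch of one possible reading follows. Here a statement is
modelled as a set of declarative programs, and “y implies x” is read as
“y contains x,” so that whatever realizes y also realizes x. The layer
and the program names (p1, p2, p3) are invented for illustration and are
not taken from the thesis.</p>
<pre><code># Illustrative reading only: statements modelled as frozensets of
# declarative programs, with "y implies x" read as "y contains x".

def extension(x, layer):
    """Ex: the statements in the layer whose existence implies x
    (under this reading, every statement y that contains x)."""
    return {y for y in layer if x.issubset(y)}

# An invented abstraction layer Lv for this toy setting.
a = frozenset({"p1"})
b = frozenset({"p1", "p2"})
c = frozenset({"p2", "p3"})
Lv = {a, b, c}

print(extension(a, Lv))   # {a, b}: both contain p1, so each implies a
print(extension(c, Lv))   # {c}: only c itself implies c
</code></pre>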
<p>In summary, the text challenges traditional computational dualism by
proposing Stack Theory, which views software, hardware, and fundamental
laws of physics as interconnected layers in an overall system. It
further introduces the concept of “Layer Cake” for formalizing these
aspects within abstraction layers, thereby providing a unified framework
to understand intelligence and AGI.</p>
<p>In this section, Michael Timothy Bennett discusses the concept of
distribution in adaptive systems, focusing on biological and artificial
examples.</p>
<ol type="1">
<li><p>Definition of Distribution: Distribution in an adaptive system
refers to having more than one policy (a set of programs) expressed by
an abstraction layer. This means that the system’s behavior is not
solely determined by a single entity but rather emerges from the
collective actions of multiple entities working together towards common
goals.</p></li>
<li><p>Cellular Collectives as an Example: A collective of cells can
serve as an example of distribution in biological systems. Each cell
within the collective represents a policy, with its own set of programs
that govern its behavior. When these cells work collaboratively toward
shared objectives, their individual extensions (sets of states where
each program is true) intersect to form a higher-level collective
policy. This collective policy embodies the group’s identity or behavior
at a more abstract level than any single cell.</p></li>
<li><p>Implications for Artificial Systems: This idea of distribution
can also be applied to artificial systems, such as computer networks or
multi-agent systems. In these contexts, multiple agents (policies) work
together to achieve common goals by coordinating their actions and
sharing information. The collective behavior emerges from the
interactions among these agents, leading to a more complex and nuanced
system than what could be achieved with a single agent alone.</p></li>
<li><p>Distinction Between Distribution and Delegation of Control: It is
essential to distinguish distribution from delegation of control in
adaptive systems. While distribution refers to having multiple policies
expressed by an abstraction layer, delegation of control involves
assigning decision-making authority to specific levels or entities
within the system. In other words, distribution focuses on how work is
divided among various components, whereas delegation determines who
makes decisions and to what extent those decisions can be made
autonomously.</p></li>
<li><p>Formalization of Distribution: To formalize the concept of
distribution, Bennett suggests considering a set of policies Lv within
an abstraction layer. When multiple entities (cells, agents) in the
system express these policies simultaneously, their collective behavior
emerges as the intersection of their extensions. This collective policy
then forms a higher level of abstraction, representing the system’s
overall behavior or identity (a minimal sketch follows this
list).</p></li>
</ol>
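<p>A minimal sketch of point 5 above, with invented cell and state
labels; only the intersection step mirrors the text.</p>
<pre><code># Illustrative only: each "cell" expresses a policy, modelled here as the
# set of states in which that policy holds (its extension).

cell_policies = {
    "cell_a": {"s1", "s2", "s3"},
    "cell_b": {"s2", "s3", "s4"},
    "cell_c": {"s2", "s3"},
}

def collective_policy(policies):
    """The higher-level policy: states consistent with every member's policy."""
    extensions = list(policies.values())
    collective = extensions[0]
    for ext in extensions[1:]:
        collective = collective.intersection(ext)
    return collective

print(collective_policy(cell_policies))   # {'s2', 's3'}: the shared identity
</code></pre>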
<p>In summary, distribution is a crucial aspect of adaptive systems,
both biological and artificial. By allowing multiple entities to work
together and express policies concurrently, complex behaviors can emerge
that might not be achievable through individual agents alone.
Understanding and leveraging distribution can help design more efficient
and resilient adaptive systems across various domains.</p>
<p>In this section, Michael Timothy Bennett discusses the concept of
“language cancer” as it relates to his theory on conscious machines and
collective intelligence. He begins by emphasizing that human
communication involves more than just the exchange of information; it
also includes normativity or social mores. These shared expectations
guide interactions and foster cooperation, but they can also be
manipulated for deceptive purposes.</p>
<p>Bennett introduces the idea of “protosymbols,” which are learned
causal identities that represent aspects of an organism’s environment
relevant to its survival. A protosymbol system is a set of tasks based
on these learned causal identities, and preferences help an organism
interpret inputs according to its knowledge and values.</p>
<p>The author argues that accurate prediction and hard-wired behaviors
enable organisms to communicate intentions effectively. By using their
second-order selves (predictions about other entities’ predictions),
they can tailor communication to individual recipients, allowing for
nuanced meaning exchange. This predictive machinery can also be
exploited for manipulation or deception if one entity gains sufficient
insight into another’s thought processes.</p>
<p>Bennett then discusses the role of social norms in shaping human
behavior and communication. These shared expectations allow us to
navigate complex social structures efficiently, reducing the need for
constant negotiation and trust-building on an individual basis. In this
context, language and concepts emerge as policies that govern how a
population interprets information both internally and externally.</p>
<p>The author connects these ideas to cancer biology, suggesting that
cancer arises when cells lose their collective identity due to isolation
from the broader informational structure of the organism. In a
distributed system, over-constraint or adversity (fewer correct
policies) can cause parts to break away and pursue independent goals,
leading to system failure.</p>
<p>Bennett proposes that similar processes may underlie “language
cancer” in human populations – when a shared identity weakens due to
excessive top-down control or other adversities, leading to the
dissolution of coherent language and norms. This can result in
stagnation, loss of collective identity, and potential system failure,
similar to how biological self-organizing systems develop cancer when
their informational structure collapses.</p>
<p>To prevent such a breakdown, Bennett suggests the need for “sloppy
fitness” or loose constraints that allow for shared language, meaning,
ethics, and norms. In artificial intelligence, this translates to a
delegated and scale-free approach to alignment, balancing top-down
control with bottom-up adaptation while ensuring sufficient constraints
are in place.</p>
<p>Finally, Bennett argues against the orthogonality thesis, which
posits that intelligence and goals are independent. He demonstrates how
intelligence is intrinsically linked to embodiment (goal direction),
thus making it goal-dependent. This insight has implications for
artificial general intelligence, suggesting that tailoring AI systems
solely around legal and moral boundaries might be overly restrictive –
instead, a holistic approach considering internal functioning and
interacting systems could yield more robust and adaptable solutions.</p>
<p>Title: “How to Build Conscious Machines” by Michael Timothy Bennett
(Preprint under Review)</p>
<p>Summary:</p>
<p>This preprint presents a comprehensive theory on the nature of
consciousness and provides insights into how conscious machines can be
constructed. The author, Michael Timothy Bennett, proposes a framework
called Pancomputational Enactivism within Stack Theory to explain
goal-directed behavior in terms of tasks.</p>
<ol type="1">
<li><p>Stack Theory: The environment is conceptualized as an infinite
stack of abstraction layers, with the cosmic ought (the driving force
for self-preservation) at the bottom and higher levels of abstraction
emerging from lower ones.</p></li>
<li><p>Pancomputational Enactivism: This meta-approach to Artificial
General Intelligence (AGI) integrates computational processes with
enactive principles, where consciousness arises from the interaction
between an organism and its environment. AGI is seen as a form of
polycomputation across multiple abstraction layers.</p></li>
<li><p>Weak Constraints on Function: Bennett proposes that simple forms
are not sufficient for adaptation; rather, weak constraints on function
are what is necessary and sufficient for generalization and adaptation.
This undermines Ockham’s Razor as an account of why simplicity helps,
and motivates Bennett’s Razor: explanations should be no more specific
than necessary (a toy contrast between the two selection rules appears
after this list).</p></li>
<li><p>W-maxing: Bennett introduces a meta-approach called w-maxing,
which maximizes weak constraints on function and delegates control to
the lowest level of abstraction while still satisfying correctness
constraints. This principle leads to The Law of the Stack, stating that
adaptation improves as systems delegate more control to lower levels of
abstraction.</p></li>
<li><p>Learning Causality: The author explains how adaptive systems
learn causality by learning objects and properties causing valence
(i.e., attraction or repulsion). This process is tied to the
Psychophysical Principle of Causality, which states that consciousness
simplifies our perception of the environment into relevant objects and
properties.</p></li>
<li><p>Construction of Selves and Phenomenal Consciousness: Bennett
discusses the emergence of selves and phenomenal consciousness through a
causal framework. He relates this to language and semiotics, formalizing
Gricean pragmatics and Peircean triadic symbols as tasks. This leads to
an alternative definition of access consciousness: the contents of 2nd
and higher-order selves, making philosophical zombies impossible in any
conceivable world.</p></li>
<li><p>Emergence of Normative Meanings: The author explains how
normative meanings emerge from collective identity, cancer
(metaphorically), and his Mirror Symbol Hypothesis – which posits that
symbols are learned by identifying patterns in the environment and
attributing them with meaning. This process refutes the strong
orthogonality thesis, suggesting goals and intelligence are
intrinsically linked.</p></li>
<li><p>The Temporal Gap: The author discusses an unknown (referred to as
“The Temporal Gap”) concerning whether a conscious state must be
realized by an environmental state at a single point in time or can be
smeared across time. This has implications for the possibility of
software consciousness, suggesting that current AI systems lack the
necessary features (e.g., delegated control, persistent structure) to
support a tapestry of valence and phenomenal consciousness.</p></li>
<li><p>Conclusion: Bennett asserts that intelligence is both necessary
and sufficient for consciousness and outlines features required for
constructing conscious machines. He suggests that to build a truly
conscious machine, we should aim for a highly delegated solid brain
where tapestries of valence are realized at a single point in time
rather than smeared across it.</p></li>
</ol>
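<p>As flagged in point 3 above, a toy contrast between the two selection
rules may help. The candidate hypotheses and observations below are
invented, and set size is used as a crude stand-in for weakness and
simplicity; only the selection rule (prefer the weakest correct
hypothesis over the smallest correct form) is meant to reflect the
w-maxing idea summarized above.</p>
<pre><code># Toy contrast between w-maxing (weakest correct hypothesis, i.e. the one
# with the largest extension) and simp-maxing (smallest correct form).
# Data and candidate hypotheses are invented for this sketch.

observed = {("0", "a"), ("1", "b")}                      # input/output pairs seen so far

candidates = [
    {("0", "a"), ("1", "b")},                            # fits, but narrow
    {("0", "a"), ("1", "b"), ("2", "c")},                # fits, weaker constraint
    {("0", "a"), ("1", "b"), ("2", "c"), ("3", "d")},    # fits, weakest of the three
    {("0", "a"), ("1", "a")},                            # inconsistent with the data
]

correct = [h for h in candidates if observed.issubset(h)]

w_max_choice = max(correct, key=len)    # w-maxing: keep the weakest correct one
simp_choice = min(correct, key=len)     # simp-maxing: keep the smallest correct one

print(sorted(w_max_choice))   # generalizes beyond what was observed
print(sorted(simp_choice))    # memorizes only what was observed
</code></pre>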
<p>The preprint is supported by mathematical definitions, proofs,
experiments, and examples, which are available on GitHub:
https://github.com/ViscousLemming/Technical-Appendices</p>
<p>The following is an annotated bibliography of books, articles, and
preprints related to artificial intelligence (AI), cognitive science,
philosophy of mind, consciousness, and the nature of computation. Here
is a brief summary and explanation of each entry:</p>
<ol type="1">
<li><p>Bas C. van Fraassen - “Laws and Symmetry”: This book explores the
philosophical aspects of symmetry in physics and its implications for
understanding scientific laws. It questions the traditional view that
symmetries are merely mathematical properties, suggesting that they have
a deeper role in our understanding of nature.</p></li>
<li><p>Multiple authors - “APA Newsletters” (2008): This entry likely
refers to a collection of articles from the APA’s newsletters. The
topics discussed may include recent advancements, debates, and notable
research findings in philosophy of mind, psychology, and related
fields.</p></li>
<li><p>Michael T. Bennett - “How to Build Conscious Machines” (preprint
under review): This preprint is an unpublished manuscript by Michael T.
Bennett that aims to discuss the challenges and possible approaches for
creating conscious machines, potentially blending aspects of AI,
cognitive science, and philosophy of mind.</p></li>
<li><p>Sarah A. Fricke and Christina M. Frederick - “The Looking Glass
Self: The Impact of Explicit Self-Awareness on Self-Esteem”: This study
investigates the relationship between self-awareness and self-esteem,
drawing on Charles Horton Cooley’s concept of the looking-glass self,
which holds that individuals form their self-perception based on how
they believe others view them.</p></li>
<li><p>M. Friedman and R.D. Friedman - “Capitalism and Freedom”: This
classic work by Milton Friedman argues for a free market economy as an
ideal system, emphasizing the role of individual liberty and competition
in creating prosperity while minimizing government
intervention.</p></li>
<li><p>Karl Friston - “The Free-Energy Principle: A Unified Brain
Theory?” (Nature Reviews Neuroscience, 2010) and “Life as We Know It”
(Journal of The Royal Society Interface, 2013): These papers propose the
free-energy principle as a unifying framework for understanding brain
function. Friston suggests that the brain minimizes its free energy by
constantly generating predictions about the world, comparing them to
sensory input, and adjusting its internal models accordingly.</p></li>
<li><p>Karl Friston et al. - “Path Integrals, Particular Kinds, and
Strange Things” (Physics of Life Reviews, 2023): This article expands on
Friston’s previous work by exploring the implications of path integrals
– a concept from quantum mechanics and statistical physics – for
understanding brain function, learning, and decision-making.</p></li>
<li><p>Thomas Fuchs - “Ecology of the Brain: The Phenomenology and
Biology of the Embodied Mind” (Oxford University Press, 2017): This book
presents a comprehensive theory of consciousness that emphasizes the
importance of embodied cognition – the idea that mental processes are
deeply influenced by bodily experiences and interactions with the
environment.</p></li>
<li><p>Shaun Gallagher and Dan Zahavi - “The Phenomenological Mind”
(Routledge, New York, NY, 2014): This work provides an introduction to
phenomenology as a philosophical approach to understanding the mind,
focusing on first-person perspectives and lived experiences.</p></li>
<li><p>Ashitha Ganapathy and Michael T. Bennett - “Cybernetics and the
Future of Work” (2021 IEEE 21CW): This paper discusses how cybernetic
principles can inform our understanding of emerging technologies’ impact
on employment, particularly in the context of artificial general
intelligence (AGI).</p></li>
<li><p>Robin Gandy - “Church’s Thesis and Principles for Mechanisms”
(The Kleene Symposium, 1980): In this paper, Gandy examines Church’s
thesis – the idea that any effectively calculable function can be
computed by a Turing machine – and its implications for understanding
the nature of computation.</p></li>
<li><p>A. Garcez et al. - “Neural-Symbolic Computing: An Effective
Methodology for Principled Integration of Machine Learning and
Reasoning” (2019): This article explores neural-symbolic computing, a
multidisciplinary approach combining machine learning techniques with
symbolic reasoning to create more flexible and interpretable AI
systems.</p></li>
<li><p>Marta Garnelo et al. - “Towards Deep Symbolic Reinforcement
Learning” (2016): This paper proposes a method for integrating deep
neural networks with symbolic representations, aiming to improve the
interpretability and generalization abilities of reinforcement learning
algorithms.</p></li>
<li><p>James J. Gibson</p></li>
</ol>