Gabora: ORIGIN OF CULTURE
The target article below has just appeared in PSYCOLOQUY, a
refereed journal of Open Peer Commentary sponsored by the American
Psychological Association. Qualified professional biobehavioral,
neural or cognitive scientists are hereby invited to submit Open
Peer Commentary on it. Please email for Instructions if you are not
familiar with format or acceptance criteria for PSYCOLOQUY
commentaries (all submissions are refereed).
To submit articles and commentaries or to seek information:
EMAIL: psyc@pucc.princeton.edu
URL: http://www.princeton.edu/~harnad/psyc.html
http://www.cogsci.soton.ac.uk/psyc
RATIONALE FOR SOLICITING COMMENTARY: This target article presents a
model of cognitive origins that attempts to explain the transition
from episodic to mimetic/memetic culture (as outlined by Merlin
Donald in Origins of the Modern Mind, 1991) using Stuart Kauffman's
ideas about how an information-evolving system can emerge through
autocatalysis (as outlined in Origins of Order, 1993). I would like
to invite commentary from cognitive anthropologists and
archeologists on the plausibility of the proposal, from
neuroscientists on the neurobiological plausibility of this model,
and from psychologists on its compatibility with other dynamic
models of memory (i.e., models of how one thought evokes another in a
train of associations). I also invite discussion of the memetic
perspective of culture as an information-evolving system.
-----------------------------------------------------------------------
psycoloquy.98.9.67.origin-culture.1.gabora Thu Dec 31 1998
ISSN 1055-0143 (60 paras, 41 refs, 6 figs, 1 table, 1632 lines)
PSYCOLOQUY is sponsored by the American Psychological Association (APA)
Copyright 1998 Liane Gabora
AUTOCATALYTIC CLOSURE IN A COGNITIVE SYSTEM:
A TENTATIVE SCENARIO FOR THE ORIGIN OF CULTURE
Liane Gabora
Center Leo Apostel,
Brussels Free University,
Krijgskundestraat 33,
1160 Brussels,
Belgium
lgabora@vub.ac.be
http://www.vub.ac.be/CLEA/liane/
ABSTRACT: This target article presents a speculative model of the
cognitive mechanisms underlying the transition from episodic to
mimetic (or memetic) culture with the arrival of Homo erectus,
which Donald (1991) claims paved the way for the unique features of
human culture. The model draws on Kauffman's (1993) theory of how
an information-evolving system emerges through the formation of an
autocatalytic network. Though originally formulated to explain the
origin of life, Kauffman's theory also provides a plausible account
of how discrete episodic memories become woven into an internal
model of the world, or world-view, that both structures, and is
structured by, self-triggered streams of thought. Social
interaction plays a role in (and may be critical to) this process.
Implications for cognitive development are explored.
KEYWORDS: abstraction, animal cognition, autocatalysis, cognitive
development, cognitive origins, consciousness, cultural evolution,
memory, meme, mimetic culture, representational redescription,
world-view.
I. INTRODUCTION
1. The subject of cultural origins is usually approached from an
archeological perspective. For example, by dating artifacts such as
tools we learn approximately when humans acquired the ability to make
and use those tools. This target article takes a more cognitive
approach (see also Barkow et al. 1992; Donald 1991, 1993a, 1993b;
Tomasello et al. 1993; Tooby & Cosmides 1989). It outlines a theory of
the psychological mechanisms underlying the major cognitive transition
that, as Donald (1991) proposes, made possible the characteristic
complexity and ingenuity of human culture.
2. The theory proposed here was inspired by an idea originally put
forward to explain the origin of life. The origin of life and the
origin of culture might appear at first glance to be very different
problems. However, at a gross level of analysis they amount to the same
thing: the bootstrapping of a system by which information patterns
self-replicate, and the selective proliferation of some variants of
these self-replicating patterns over others. The theory is thus
consistent with the perspective of culture as a form of evolution
(Dawkins 1976; Gabora 1997). In keeping with this evolutionary
framework, the term "meme" is used to refer to a unit of cultural
information as it is represented in the brain. Thus meme refers to
anything from an idea for a recipe to a memory of one's uncle to a
concept of size to an attitude of racial prejudice. The rationale for
lumping together episodic memories and symbolic abstractions is that
they are both food for thought, units of information that can be drawn
upon to invent new memes or to clarify existing ones. Memes that have
been implemented as actions, vocalizations, or objects are referred to
as artifacts.
3. The basic line of reasoning in this paper goes as follows. The
bottleneck to cultural evolution appears to be the capacity for a
self-sustained stream of thought that both structures and is structured
by an internal model of the world, or world-view. It is this capacity
that enables us to plan and predict, to generate novelty, and to tailor
behavior according to context. The question is: Until discrete memories
have been woven into a conceptual web, how can they generate a stream
of thought? And conversely, until a mind can generate a stream of
thought, how does it weave its memories into a world-view? Kauffman's
proposal that life originated with the self-organization of a set of
autocatalytic polymers suggests a mechanism for how this comes about.
Much as catalysis increases the number of different polymers, which in
turn increases the frequency of catalysis, reminding events increase
meme density by triggering symbolic abstraction, which in turn
increases the frequency of remindings. And just as catalytic polymers
undergo a phase transition to a state where there is a catalytic
pathway to each polymer present, and together they constitute a
self-replicating set, memes undergo a phase transition to a state where
each meme is retrievable through a pathway of remindings/associations,
and together they constitute a transmittable world-view. In the origin
of life scenario, since reactions occur in parallel, autocatalytic
closure increases sharply as the ratio of reactions to polymers
increases. In the cultural analog, however, the retrieval and invention
of memes is funnelled through an attention/awareness mechanism, which
introduces a bottleneck. Therefore, this transition occurs gradually,
as increasingly abstract concepts are perceived and their implications
percolate through the memetic network. Social interaction and artifacts
facilitate the process, and ensure that the continued evolution of
memes does not hinge on the survival of any particular meme host.
II. BACKGROUND: THE ORIGIN OF LIFE AND ITS CULTURAL ANALOG
4. This section will present background material relevant to the
central thesis. We begin with a comparison of minds that are and are
not able to sustain cultural evolution, and we draw on what is known
about human cognition to make some hypotheses concerning the
differences between them. We then turn to the paradox of the origin of
life, and show how autocatalysis provides a potential solution.
II.1 A TRANSITION IN COGNITIVE CAPACITY
5. In Origins of the Modern Mind, Donald (1991) argues convincingly
that the capacity for abstract thought is the bottleneck of cultural
evolution, and that it came about during the transition from episodic
to mimetic culture following the arrival of Homo erectus approximately
1.7 million years ago. Before this time the human memory system was
like that of a primate [FOOTNOTE 1], limited to the storage and cued
retrieval of specific episodes. Donald accordingly uses the term
episodic to designate a mind in which episodic memory is the only
memory system there is. An episodic mind is capable of social
attribution, insight and deception, and is sensitive to the
significance of events. With much training it can learn arbitrary
stimulus-response associations (such as pointing at a token of a
certain shape to obtain food). However, it cannot invent symbols or
abstractions on its own, or experiment with them. It has great
difficulty accessing memories independent of environmental cues, and is
unable to improve skills through self-cued rehearsal.
6. In contrast, the mimetic mind has, built upon its episodic
foundations, a multimodal modelling system with a self-triggered
rehearsal loop. In other words, it can retrieve and recursively operate
on memories independent of environmental cues, a process referred to by
Karmiloff-Smith (1992, 1994) as representational redescription.
Redescribing an episode in terms of what is already known roots it in
the network of understandings that comprise the world-view, and the
world-view is perpetually restructured as new experiences are
assimilated, and new symbols and abstract concepts are invented as
needed. A mimetic individual is able to rehearse and refine skills,
and therefore exhibits enhanced behavioral flexibility and more precise
control over intentional communication. The upshot is cultural
novelty. Mime, play, games, toolmaking, and reproductive memory, says
Donald, are thus manifestations of the same superordinate mimetic
controller. The appearance of sophisticated stone tools, long-distance
hunting strategies, and migration out of Africa, as well as the rapid
increase in brain size at this time (Bickerton 1990; Corballis 1991;
Lieberman 1991), are cited as evidence for the transition from episodic
to mimetic culture. Donald claims it is not clear that the mimetic
controller must be localized in any single anatomical structure,
although it must have functional unity. Mimetic ability seems to
encompass a broad panoply of skills associated with several distinct
regions of the brain. Since miming accounts for only a small part of
what the mimetic mind can do, and since mimetic skill seems to boil
down to the capacity to evolve memes, we will use the term "memetic"
instead of "mimetic."
7. Donald's proposal is invaluable in that it spurs us to consider the
cognitive basis of culture, but it leaves us hanging as to what sort of
functional reorganization could turn an episodic mind into a memetic
one. In particular, it leaves us with a nontrivial problem of origins.
In the absence of representational redescription, how are relationships
established amongst memes so that they become a world-view? And until a
memory incorporates relationships between stored items, how can one
meme evoke another which evokes another, etc., in a stream of
representational redescription? We know that the brains of an
ancestral tribe somehow turned into instruments for the variation,
selection, and replication of memes. What happened to get the ball
rolling, to enable the process of memetic evolution to take hold? When
Groga, a member of this tribe, had her first experience, there were no
previously stored episodes to be reminded of, just external and
internal stimuli (such as hunger). As episodes accumulated in her
memory, occasionally it happened that an instant of experience was so
similar to some stored episode that a retrieval process occurred, and
she was reminded of that past episode. Perhaps the retrieval elicited
a learned response. For example, the sight of a bumpy, red gourd might
have reminded her of a bumpy, yellow gourd that her brother once used
to carry water. Embedded in this recollection was the refreshing taste
of the water he shared with her. The memory might have inspired her to
use the red gourd to carry water. But since her memory consisted only
of stored episodes, no abstractions, this is the only kind of influence
it could exert; her awareness was dominated by the stimuli of the
present moment. At some point in her life, however, she managed to
wilfully direct her attention not to a particular sensory stimulus, nor
to the performance of a biological drive satisfying action, but to a
chain of symbol manipulation. She kept this stream of thought going
long enough to refine a concept or perspective, or invent a novel
artifact. But if you need an interconnected world-view to generate a
stream of thought, and streams of thought are necessary to connect
individual memes into a world-view, how could one have come into
existence without the other?
II.2 COGNITIVE ATTRIBUTES THAT ENABLE ABSTRACT THOUGHT
8. The first step toward an answer is to elucidate, as best we can given
present knowledge, the cognitive mechanisms that distinguish a memetic
mind from an episodic one. This section describes a minimal,
biologically plausible cognitive architecture that could qualify as
memetic. This cognitive architecture is a best guess, drawn from
evidence in cognitive science, neuroscience, and artificial
intelligence, and from the knowledge that the memetic mind grew out of (and
therefore the potential for it was implicit in) the architecture of the
episodic mind. It has the following attributes. First, it can integrate
inputs from the sensorium and the drives with inputs from memory, and
it can dispense commands to the motor system. Second, the memory is
sparse, content addressable, distributed, modular, and habituates to
repeated inputs. This enables it to generate abstractions. Finally, it
can manipulate abstractions, consciously and recursively. We discuss
each attribute briefly, explain how together they accomplish a memetic
task, and then specify what is most likely lacking in the episodic
mind.
9. INTEGRATION OF MEMORY, STIMULI, AND DRIVES. Vital to both the
episodic and memetic mind is a means of integrating sensations, drives,
and stored memories to produce a seamless stream of conscious
experience and purposeful motor action. The place where this
information is coordinated need not correspond to a single anatomical
structure (though it is often suggested that the intralaminar nuclei of
the thalamus are involved). We will adopt Kanerva's (1988) term, the
focus, since it does not imply any commitment regarding centrality or
global penetration. We assume that the states of the neurons that
comprise the focus determine the content and phenomenal qualities of an
instant of awareness. A meme is then a high-dimensional vector of
difference relations (or continuous variables) that either is or has
been encoded in an individual's focus.
10. SPARSE MEMORY. Our sensory apparatus can register a tremendous
amount of information. Where n is the number of features the senses can
distinguish, N, the number of memes that could potentially be hosted by
the focus, equals 2**n for boolean variables (and is infinitely large for
continuous variables). For example, if n = 1,000, N = 2**1,000 memes
[FOOTNOTE 2]. Assuming n is large, N is enormous, so the memory is
sparse in that the number of locations L where memes can be stored is
only a small fraction of the N perceivable memes. In other words,
neural pathways leading out from the focus do not receive inputs from
each of its n slots, but from some fraction of them. The number of
different memes actually stored at a given time, s, is constrained by
L. The set of all possible n-dimensional memes a mind is capable of
storing can be represented as the set of vertices (if features assume
only binary values) or points (if features assume continuous values) in
an n-dimensional hypercube, where the s stored memes occupy some subset
of these points. The distance between two points in this space is a
measure of how dissimilar they are, referred to as the Hamming
distance. Kanerva (1988) makes some astute observations about this
memory space. The number of memes at Hamming distance d away from any
given meme is equal to the binomial coefficient of n and d, which is
well approximated by a Gaussian distribution. Thus, if meme X is
111...1 and its antipode is 000...0, and we consider meme X and its
antipode to be the poles of the hypersphere, then approximately 68% of
the other memes lie within one standard deviation (sqrt(n)/2) of the
equator region between these two extremes (FIGURE 1). As we move
through Hamming space away from the equator toward either meme X or its
antipode, the probability of encountering a meme falls off sharply; the
width of the distribution relative to the full range of distances
shrinks as sqrt(n)/n.
ftp://ftp.princeton.edu/pub/harnad/Psycoloquy/1998.volume.9/Pictures/gabora.fig1.html
FIGURE 1. DISTANCES BETWEEN MEMES IN MEMORY. Solid black curve is a
schematic distribution of the Hamming distances from the address of
a given meme to addresses of other memory locations in a sparse
memory. The Gaussian distribution arises because there are many
more ways of sharing an intermediate number of features than there
are of being extremely similar or different. A computer memory
stores each item in only the left-most address, whereas a
distributed network stores it throughout the network. A restricted
activation function, such as the radial basis function, is
intermediate between these two extremes. Activation decreases with
distance from the ideal address, as indicated by green shading.
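To make the equator claim concrete, here is a minimal sketch in Python
(not part of the original model; n = 1,000 binary features assumed). The
68% figure follows from the Binomial(n, 1/2) distribution of distances:

    # Minimal sketch: distribution of Hamming distances from a given meme.
    # The number of memes at distance d is the binomial coefficient C(n, d).
    from math import comb, sqrt

    n = 1000                       # number of binary features
    total = 2 ** n                 # all possible memes
    mean, sd = n / 2, sqrt(n) / 2  # Binomial(n, 1/2): equator and its spread

    lo, hi = int(mean - sd), int(mean + sd)
    within = sum(comb(n, d) for d in range(lo, hi + 1))
    print(f"fraction within one sd of the equator: {within / total:.2f}")  # ~0.68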
11. In fact the space of possibilities is even larger if we assume that
the mind rarely if ever pays attention to all the stimulus dimensions
it is capable of detecting. Therefore the number of dimensions the
focus pays attention to, n, is smaller than the maximum, M.
strength of the signal on a neural pathway from memory, senses, or
drives must surpass some threshold before the dimension of the focus it
activates is attended. Since the memory can now store memes of any
length up to M, the number of possible memes is:
(1) N = 2**(M+1) - 2 ~= 2**(M+1)
ftp://ftp.princeton.edu/pub/harnad/Psycoloquy/1998.volume.9/Pictures/eqn1.jpg
The bottom line is: the memory would probably have to be larger than
the number of particles in the universe to store all the permutations
of sensory stimuli it is capable of registering. It is therefore
sparse.
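Equation (1) is easy to verify numerically; a minimal sketch (the value
of M is arbitrary):

    # Equation (1): the number of binary memes of every length from 1 to M.
    M = 1000
    N = sum(2 ** k for k in range(1, M + 1))
    assert N == 2 ** (M + 1) - 2
    print(N > 10 ** 80)   # True: more memes than particles in the universe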
12. DISTRIBUTED REPRESENTATION. In a sparse memory, the probability
that a given meme in the focus is identical to one in storage is
virtually zero, which would seem to make retrieval impossible. In
connectionist networks, this problem is solved by distributing the
storage of a meme across many locations. Likewise, each location
participates in the storage of many memes. The focus is represented as
input/output nodes, memory locations as hidden nodes, and their pattern
of connectivity as weighted links. An input touches off a pattern of
activation which spreads through the network until it relaxes into a
stable configuration, or achieves the desired input-output mapping
using a learning algorithm. The output vector is determined through
linear summation of weighted inputs. Thus a retrieved meme is not
activated from a dormant state, but reconstructed. This approach is
necessary if we aim to model cognition at a fine-grained level of
resolution -- down to the threshold of human discrimination. There is a
saying: You never step into the same stream twice; this applies to
streams of thought as well as streams of water. Right now I am
retrieving a memory of eating cinnamon toast; tomorrow I may retrieve
the same memory. But today it is colored by today's mood, today's
events; tomorrow it will be experienced slightly differently. It is not
the exact same information pattern conjured up time and again. The
reconstructive approach enables the memory to abstract a prototype,
fill in missing features of a noisy or incomplete pattern, or create a
new meme on the fly that is more appropriate to the situation than any
meme it has actually experienced (Rumelhart and McClelland 1986). For
example, if an autoassociative network has been fed vectors in which
feature one is present whenever feature two is present, and vice versa,
it will respond to an input that lacks information about feature one,
such as *101, by generating 1101. It may never actually have
encountered 1101 before, but given that in its world there exists a
correlation between features one and two, this is an appropriate
response. In addition to associations between inputs and outputs of
features, the network has learned a higher-level association between
two features. In effect, it contains more information than has been fed
into it.
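The *101 -> 1101 completion can be reproduced with a toy Hebbian
autoassociative network. This sketch uses hypothetical data (bipolar
+1/-1 coding, with 0 marking the unknown feature), and holds the pattern
1101 out of training to make the point that the completion is computed
from the learned correlation, not looked up:

    import itertools
    import numpy as np

    # Training patterns: feature one always equals feature two; the other
    # two features vary freely. The pattern 1101 itself is held out.
    patterns = np.array([(f2, f2, f3, f4)
                         for f2, f3, f4 in itertools.product((1, -1), repeat=3)
                         if (f2, f2, f3, f4) != (1, 1, -1, 1)])
    W = patterns.T @ patterns      # Hebbian weight matrix (feature correlations)
    np.fill_diagonal(W, 0)         # no self-connections

    probe = np.array([0, 1, -1, 1])     # *101: feature one unknown
    probe[0] = np.sign(W[0] @ probe)    # reconstruct it from the correlation
    print(probe)                        # [ 1  1 -1  1], i.e. 1101, never seen as such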
13. A problem with distributed representation is that unless stored
patterns are perfectly orthogonal, they interfere with one another, a
phenomenon known as crosstalk. This is solved by restricting the
storage region. For instance, in Kanerva's (1988) Sparse Distributed
Memory (SDM) model, a meme is stored in all locations within a
hypersphere of addresses surrounding the ideal address. The smaller the
Hamming distance between two memes, the more their storage locations
overlap, so the higher the probability they are retrieved
simultaneously and blended in the focus. A more sophisticated way of
implementing this idea, for which there is neurobiological support, is
to use a radial basis function (RBF) (Clothiaux et al. 1991; Hancock et
al. 1991; Willshaw & Dayan 1990). Once again a hypersphere of locations
is activated, but this time activation is maximal at the center of the
RBF and tapers off in all directions according to a (usually) Gaussian
distribution (see FIGURE 1). Where x is an n-dimensional input vector,
k is the center of the RBF, and sigma is the width of the Gaussian, hidden
nodes are activated as follows:
(2) F(x) = e**(-SIGMA[((x - k)/sigma)**2])
ftp://ftp.princeton.edu/pub/harnad/Psycoloquy/1998.volume.9/Pictures/image11.jpg
By carving out a hypersphere in memory space, one part of the network
can be modified without affecting the capacity of other parts to store
other patterns. The further a stored meme is from k, the less
activation it not only receives but in turn contributes to the next
evoked meme, and the more likely that its contribution is cancelled out
by that of other memes. In neural networks, suitable values for k and
sigma are found during a training phase. In the brain, k values could be
modified by changing the pattern of neuronal interconnectivity.
Decreasing neuron activation thresholds would increase sigma.
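A minimal sketch of equation (2); the 5-feature center and sigma = 1.5
are illustrative assumptions, not the article's values:

    import numpy as np

    def rbf_activation(x, k, sigma):
        """Equation (2): maximal when x == k, tapering with distance."""
        return np.exp(-np.sum(((x - k) / sigma) ** 2))

    k = np.array([1, 1, 0, 1, 0])        # center of one location's RBF
    for flips in range(4):               # probes at Hamming distance 0..3
        x = k.copy()
        x[:flips] = 1 - x[:flips]        # flip the first `flips` bits
        print(flips, round(rbf_activation(x, k, 1.5), 3))
    # 0 1.0, 1 0.641, 2 0.411, 3 0.264: activation tapers with distance;
    # a wider sigma flattens the taper, so more distant memes get activated.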
14. ORGANIZED MODULARITY. Another way of avoiding cross-talk is to
induce a division of labor amongst competing subnetworks; in other
words, to make the memory modular (Nowlan 1990; Jacobs et al. 1991).
There is abundant evidence of modularity in the brain; its preservation
in phylogenetic history suggests that it is not arbitrary. We assume
that (1) the world we live in is highly patterned and redundant and
that (2) this pattern and redundancy is reflected in the connectivity
of the neurons where memes are stored. After birth there is a
large-scale pruning of neurons. It seems reasonable that the surviving
subset of the M possible inputs to each neural pathway is determined by
biological and cultural selective pressures, instead of at random.
These pressures sculpt the pattern of neuronal connectivity such that
the L (out of N possible) locations can store most of the memes we
stand a chance of encountering. This means that in practice the
sparseness of the memory does not interfere with its representational
capacity. It also means that the probability that a given stimulus
activates a retrieval event is not as low as the statistics suggest.
15. CONTENT ADDRESSABILITY. A computer reads from memory by simply
looking at the address in the address register and retrieving the item
at the location specified by that address. The sparseness of human
memory prohibits this kind of one-to-one correspondence. However,
content addressability can be feigned, as follows. The feature pattern
that constitutes a given meme causes some neurons leading out from the
focus to be excited and others to be inhibited. The ensuing chain
reaction activates memory neurons where the meme gets stored. The
address of a memory neuron amounts to the pattern of excitatory and
inhibitory synapses from focus to storage that make it fire, so there
is a systematic relationship between the information content of a meme
and the locations it activates. Thus, embedded in the neural
environment that supports their informational integrity, memes act as
implicit pointers to other memory locations. These pointers prompt the
dynamic reconstruction of the next meme to be subjectively experienced,
which is statistically similar to the one that prompted it. As a
result, the entire memory does not have to be searched in order for a
gourd to remind Groga of a previously encountered gourd. It is worth
stressing that there is no search taking place, just information
flowing through a system displaced from equilibrium. The current
instant of experience activates certain neurons, which in turn activate
certain other neurons, which leads to the distributed storage of that
experience, which activates whatever else is stored in those locations,
which then merges with any salient information from the senses and
drives to form the next instant of experience, etc. in an ongoing
cycle. What emerges is that the system appears to retrieve memories
that are similar, or concepts that are relevant, to the current
experience. But that is not magic; it is simply a side effect of the
fact that correlated memes get stored in overlapping locations.
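The following sketch, loosely in the spirit of Kanerva's (1988) sparse
distributed memory (the parameters and the majority-vote read rule are
illustrative assumptions, not the article's own implementation), shows
how retrieval can emerge from overlapping storage without any search:

    import numpy as np

    rng = np.random.default_rng(0)
    n, L, D = 64, 2000, 24                        # features, locations, radius
    addresses = rng.integers(0, 2, size=(L, n))   # fixed random hard locations
    counters = np.zeros((L, n))                   # distributed storage

    def write(meme):
        near = (addresses != meme).sum(axis=1) <= D   # hypersphere of locations
        counters[near] += 2 * meme - 1                # store as +1/-1 increments

    def read(cue):
        near = (addresses != cue).sum(axis=1) <= D
        return (counters[near].sum(axis=0) > 0).astype(int)  # pooled majority vote

    gourd = rng.integers(0, 2, size=n)
    write(gourd)
    cue = gourd.copy()
    cue[:5] ^= 1                          # a similar but non-identical experience
    print((read(cue) == gourd).mean())    # ~1.0: the stored gourd is reconstructed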
16. HABITUATION. We do not want an ongoing stimulus, such as the sound
of rain, to recursively evoke remindings of rain. The nervous system
avoids this kind of perseveration as follows. First, neurons have a
refractory period during which they cannot fire, or their response is
greatly attenuated. Second, they team play; the responsibility for
producing a response is shared by a cooperative group of neurons such
that when one is refractory another is active. If exactly the same
neurons are stimulated repeatedly, they all become refractory, and
there is little or no response.
17. CAPACITY FOR SYMBOL MANIPULATION. The connectionist methods
described above are examples of the subsymbolic approach to cognition,
which works best for modelling perceptual and low-level cognitive
phenomena. These include detecting, representing, and responding
flexibly to patterns of correlation, learning fuzzy categories, and
solving simple constraint satisfaction problems. Subsymbolic processing
makes the world easier to navigate. But the world contains additional
structure that our brains are not hard-wired to capture. As a result,
even after memes are stored in memory, they are clustered rather than
uniformly distributed throughout the space of possible memes, and they
contain implicit predicate logic relationships that subsymbolic
processing alone does not make explicit. This is where
symbolic processing is useful. Symbolic models of cognition focus on
the serial and potentially recursive application of logical operations
on symbols, without attempting to represent their internal structure.
They are particularly good at modelling the high-level cognitive
abilities that are unique to memetic minds, such as planning and
deductive reasoning. Arguments for a reconstructive view of retrieval
notwithstanding, highly abstract concepts that have been used thousands
of times, such as "space" or "equal" or "is," would be unlikely to
emerge from memory retaining the associations of any particular usage.
Thus it seems reasonable to begin with the working hypothesis that
subsymbolic processing predominates for low-level, parallel,
automatically generated cognitive phenomena, and that symbolic
processing provides a satisfactory approximation for many high-level,
serial, consciously directed aspects of cognition. (Creative processes
may draw heavily on both.)
18. Let us now examine how a cognitive architecture with these
attributes would accomplish a specific task. Consider the situation
wherein the sight of a rotting, striped, bumpy, red gourd reminds
Groga of the striped, bumpy, yellow gourd her brother used to carry
water, which generates the desire to have water readily available in
the cave. Groga slashes the top off the red gourd and scoops water into
it. To her dismay, the water leaks out through a soft decay spot. Just
out of sight lies the intestine of a recently killed water buffalo.
What sort of cognitive dynamics would prompt Groga to tie one end of
the intestine and use it as a waterbag? It is unlikely that the ability
to classify gourd and knotted intestine as potentially substitutable
instances of the category container is hard-wired. No one in Groga's
tribe has previously conceived of an intestine as a container, so
social learning is not an option. This task involves a number of
difficult skills including abstract reasoning, uncued retrieval,
redescription, and manual dexterity. It lies beyond the horizon of what
the episodic mind can accomplish.
19. The sight of the red gourd is registered as a vector of features in
Groga's focus. This vector determines which synapses leading out from
the focus are excited and which are inhibited, which determines how
activation flows through her memory network, which in turn determines
the hypersphere of locations where "red gourd" is stored. The process
of storing to these neurons triggers retrieval from these neurons of
whatever has been stored in them. Of course, nothing is retrieved from
them if, after red gourd is stored, Groga's attention is directed
toward some stimulus or biological drive. But to the extent that memory
contributes to the next instant of awareness, storage of red-gourd
activates the retrieval of not only red-gourd itself but all other
memes stored in the same locations. The next meme to be encoded in the
focus is found by evaluating the contributions of all retrieved memes
feature-by-feature. Whereas the retrieved copies of red gourd reinforce
one another, the other retrieved memes contribute less, and are
statistically likely to cancel one another out. They do not cancel out
exactly, however, unless the distribution of stored memes within the
hypersphere of activated locations is uniformly dense. In this case it
is not. The meme yellow gourd container, which got stored when Groga
saw her brother carrying water in a yellow gourd, acts as an
attractor. The result is that the next meme ends up being red gourd
container. Though it is a reconstructed blend, something Groga has
never actually experienced, it can still be said to have been retrieved
from memory.
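A sketch of that feature-by-feature evaluation (the six features, the
three episodes, and sigma are hypothetical). Because the current percept
is noncommittal on the last two features, the nearby yellow-gourd meme
acts as the attractor and tips them, and the blend comes out as red
gourd container:

    import numpy as np

    # features: [red, yellow, bumpy, striped, concave, carries-water]
    # +1 = present, -1 = absent, 0 = unspecified in that episode
    red_gourd  = np.array([ 1., -1.,  1.,  1.,  0.,  0.])   # current percept
    yellow_ctr = np.array([-1.,  1.,  1.,  1.,  1.,  1.])   # brother's water gourd
    puddle     = np.array([-1., -1., -1., -1., -1.,  1.])   # an unrelated episode
    stored, sigma = [red_gourd, yellow_ctr, puddle], 2.0

    def activation(x, k):                 # RBF weighting, as in equation (2)
        return np.exp(-np.sum(((x - k) / sigma) ** 2))

    weights = [activation(red_gourd, k) for k in stored]
    blend = sum(w * k for w, k in zip(weights, stored)) / sum(weights)
    print(np.sign(blend))   # [ 1. -1.  1.  1.  1.  1.]: "red gourd container"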
20. Groga pours water into the red gourd and, as we know, it leaks
out. Her mental model of the world was in error; not all gourds can
transport water. Stymied, memory is probed again, with knowledge of
relationships between objects and attributes guiding the process. The
second probing occurs with intensified activation of the pathway
leading from the concave slot of the focus, and inhibition of the
permeable slot. Let us now focus on the portion of Groga's memory that
deals with four discrete features: bumpy, striped, permeable, and
concave (FIGURE 2). These lie on the x1, x2, x3, and x4 axes,
respectively, and a black dot represents the center of a distributed
hypersphere where a meme is stored. The second probing of memory
activates a slightly different set of locations, which evoke the
abstract category container, the class of objects that are concave and
impermeable, and for which the attributes bumpy and striped are
irrelevant. Container was implicit in the meme-space; it covered the
two-dimensional yellow region of the original hypercube. More
generally, we can view an n-dimensional meme space as a set of nested
hypercubes, such that implicit in the outermost hypercube of memes with
all n dimensions there exist hypercubes of memes with n-1 dimensions,
n-2 dimensions, etc. Armed with the category container, Groga dips
into memory again to discover what else constitutes a member of this
category. The closest thing she can come up with is intestine. Symbol
manipulation now kicks in. She realizes that the intestine is
impermeable and almost concave. Knotted at one end, an intestine would
constitute another member of the category container. She could
therefore carry water in it. She runs off to fetch the intestine.
ftp://ftp.princeton.edu/pub/harnad/Psycoloquy/1998.volume.9/Pictures/gabora.fig2.html
FIGURE 2. FOUR-DIMENSIONAL HYPERCUBE REPRESENTING SEGMENT OF MEMORY
SPACE. Bumpy, Striped, Permeable and Concave lie on the x1, x2, x3,
and x4 axes respectively. Three memes are stored in the space:
Intestine, Yellow Gourd, and Red Gourd. Black dots represent the
centers of the distributed hyperspheres where they are stored. The
category container occupies the central yellow region. To make use
of this implicit abstraction, it is necessary to recognize that
concave and permeable are the relevant dimensions. An impermeable
gourd can be used as a container by cutting off the top, and an
intestine can be used as a container by tying a knot at one end.
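In code, the implicit category is just a mask over the relevant
dimensions. A sketch, with a hypothetical encoding of FIGURE 2's memes:

    # features:         (bumpy, striped, permeable, concave)
    memes = {
        "intestine":    (0, 0, 0, 0),    # impermeable but not yet concave
        "yellow gourd": (1, 1, 0, 1),
        "red gourd":    (1, 1, 1, 1),    # rotting, hence permeable
    }

    def is_container(meme):              # only two dimensions are relevant
        bumpy, striped, permeable, concave = meme
        return permeable == 0 and concave == 1

    print([m for m in memes if is_container(memes[m])])   # ['yellow gourd']
    memes["intestine"] = (0, 0, 0, 1)    # tying a knot makes it concave
    print([m for m in memes if is_container(memes[m])])   # now the intestine too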
21. The foregoing discussion may be wrong in the details, but hopefully
it captures the gist of memetic cognition. Now we ask: what is the
episodic mind lacking? Some are tempted to say that the ability of
animals to respond appropriately to salient stimuli, and even learn
arbitrary sensorimotor associations, indicates some capacity for
symbolic thought. However, animals' learned behavior is stereotyped and
brittle: it cannot be adapted to new contexts, which suggests that
they use symbols only in an iconic sense. They give no indication of
engaging in streams of thought that reorganize memes in ways that make
their similarities and differences more explicit. They could not
retrieve the memory that an intestine is in the cave, much less realize
that it was relevant to the goal of transporting water. Our best-guess
model of cognition suggests a number of possible reasons. First, the
resolution of the perceptual apparatus might not be high enough to
capture enough features of salient stimuli (M too small). Second, there
might not be enough memory locations to keep these distinctions intact
during storage (L too small). Third, the density s/N of stored memes might
be too low. In other words, there might not be enough different basins
of attraction for memes to slide into, or not enough of these
attractors might be occupied. Another possibility is that the neuron
activation threshold is too high (and thus sigma too narrow). The end
result is the same in all cases: rarely is there a stored meme within
retrievable distance of a given meme in the focus. Thus the memory does
not encode relationships, so rarely can a stream of interrelated
thoughts ensue. In fact, these explanations are connected. M limits L,
which in turn limits s. And since, if sigma = 0, the memory only retrieves
memes identical to the content of the focus and therefore cannot form
abstractions, sigma also limits s.
22. At this point we are in a position to reframe our central question.
We want to know how a mind comes to assume a self-sustained stream of
thought that progressively shapes, and is shaped by, a world-view.
Abstract thinking requires each meme that enters the focus to activate
one or more memes already stored in memory enough to evoke a retrieval.
The memory must be crisscrossed with tunnels that connect related
concepts, like an apple riddled with wormholes. However, representational
redescription is the process that puts related memes within working
memory reach of one another; it is what recognizes abstract
similarities and restructures the memory to take them into account. How
do you get the wormholes without the worms?
II.3 THE ORIGIN OF LIFE PARADOX
23. We will put aside the question of cultural origins for now, and
turn to the problem of biological origins. The paradox of the origin of
life can be stated simply: if living things come into existence when
other living things give birth to them, how did the first living thing
arise? That is, how did something complex enough to reproduce itself
come to be? In biology, self-replication is orchestrated through an
intricate network of interactions between DNA, RNA, and proteins. DNA
is the genetic code; it contains instructions for how to construct
various proteins. Proteins, in turn, both catalyze reactions that
orchestrate the decoding of DNA by RNA, and are used to construct a
body to house and protect all this self-replication machinery. Once
again, we have a chicken-and-egg problem. If proteins are made by
decoding DNA, and DNA requires the catalytic action of proteins to be
decoded, which came first? How could a system composed of complex,
mutually dependent parts come into existence?
24. The most straightforward explanation is that life originated in a
prebiotic soup where, with enough time, the right molecules collided
into one another at the same time and reacted in exactly the right ways
to create the DNA-RNA-protein amalgam that is the crux of life as we
know it. Proponents argue that the improbability of this happening does
not invalidate the theory because it only had to happen once; as soon
as there was one self-replicating molecule, the rest could be copied
from this template. Miller (1955) increased the plausibility of this
hypothesis by showing that amino acids, from which proteins are made,
form spontaneously when a reducing [FOOTNOTE 3] mixture of oxygen, hydrogen,
carbon, nitrogen, water, and ammonia is subjected to high energy. These
molecules were all likely to have been present on the primitive earth,
and energy could have come in the form of electric discharges from
thunderstorms, ultraviolet light, or high temperatures generated by
volcanoes. Other experiments have shown that the molecular constituents
of DNA and RNA, as well as the fatty acids from which membranes are
constructed, can be formed the same way. Unfortunately, the complexity
of the DNA-RNA-protein structure is so great, and in the earth's early
atmosphere the concentrations of the necessary molecules were so
dilute, that the probability of life originating this way is
infinitesimally low. Hoyle and Wickramasinghe (1981) likened it to the
probability that a tornado sweeping through a junkyard would
spontaneously assemble a Boeing 747.
25. The less complex something is, the more feasible its spontaneous
generation. The discovery of ribozymes -- RNA molecules that, like
proteins, are capable of catalyzing chemical reactions -- brought the
hope that the first living molecule had been found. With ribozymes you
would not need DNA or proteins to establish a self-replicating lineage;
these RNA molecules would do the job of all three. In practice,
however, self-replication of RNA is fraught with difficulties. It tends
to fold back on itself, creating an inert, tangled mess (Joyce 1987).
Furthermore, the probability of a ribozyme assembling spontaneously
from its components is remote (Orgel 1987), and even if it managed to
come into existence, in the absence of certain error-detecting proteins
found in all modern-day organisms, its self-replication capacity would
inevitably break down in the face of accumulated error over successive
generations (Eigen & Schuster 1979). Thus it is far from obvious how
the chain of self-replicating systems that eventually evolved into you
and me got started.
II.4 THE AUTOCATALYSIS THEORY OF THE ORIGIN OF LIFE
26. Despite the myriad difficulties encountered attempting to get
ribozymes to self-replicate, the idea behind it -- that life originated
in a simple self-replicating system that over time evolved into the
familiar DNA-RNA-protein complex -- was a good one. Once you have some
sort of self-replicating structure in place, anything whatsoever that
accomplishes this basic feat, natural selection can enter the picture
and help things along. Kauffman (1991) suggested that knowing as much
as we do about what life is like now may actually get in the way of
determining how it began. He accordingly decided to focus on how to get
from no life at all to any kind of primitive self-replicating system,
and to hand the problem of getting from there to DNA-based life, over
to natural selection (as well as self-organizing processes). Given the
conditions present on earth at the time life began, how might some sort
of self-replicating system have arisen? His answer is that life may
have begun not with a single molecule capable of replicating itself,
but with a set of collectively self-replicating molecules. That is,
none of the molecules could replicate itself, but each molecule could
induce the replication of some other molecule in the set, and likewise,
its own replication was induced by some other member of the set. This
kind of dual role as both ingredient (or stimulant) and product of
different chemical reactions is not uncommon for polymers such as
protein and RNA molecules.
27. Polymers induce each other's replication by acting as catalysts.
Catalysts speed up chemical reactions that would otherwise occur very
slowly. An autocatalytic system is a set of molecules which, as a
group, catalyze their own replication. Thus, if A catalyzes the
conversion of X to B, and B catalyzes the conversion of Y to A, then A
+ B comprise an autocatalytic set (FIGURE 3). In an environment rich in
X and Y, A + B can self-replicate. A set of polymers wherein each
molecule's formation is catalyzed by some other molecule is said to
exhibit catalytic closure.
ftp://ftp.princeton.edu/pub/harnad/Psycoloquy/1998.volume.9/Pictures/gabora.fig3.html
FIGURE 3. AUTOCATALYTIC SET: A catalyses the formation of B, and
B catalyses the formation of A. Thick black arrows represent
catalyzed reactions. Thin green arrows represent catalysis.
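To see the dynamics, here is a toy simulation of the A + B set (the rate
constant is an arbitrary assumption, and the food molecules X and Y are
treated as inexhaustible):

    rate = 0.1            # assumed catalytic rate constant per time step
    A, B = 1.0, 1.0       # initial concentrations
    for _ in range(50):
        A, B = A + rate * B, B + rate * A   # A makes B from X; B makes A from Y
    print(round(A), round(B))   # ~117 each: the pair replicates exponentially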
28. It is of course highly unlikely that two polymers A and B that just
happened to bump into one another would happen to catalyze each other.
However, this is more likely than the existence of a single polymer
catalyzing its own replication. And in fact, when polymers interact,
their diversity increases, and so does the probability that some subset
of the total reaches a critical point where there is a catalytic
pathway to every member. To show that this is true we must show that
the number of reactions by which they can interconvert increases faster
than their total number. Given polymers made up of, say, two different
kinds of monomers, of up to a maximum length of M monomers each, then
N, the number of polymers, is approximately 2**(M+1), as per equation
(1). Thus as M
increases -- which it obviously does, since two of the longest polymers
can always join to form a longer one -- the number of polymers
increases exponentially. Now we need to show that the number of
reactions between them increases even faster. We will be conservative
and consider only cleavage (e.g. 110 -> 1 + 10) and ligation (e.g. 1 +
10 -> 110) reactions on oriented polymers (such as protein and RNA
fragments). The number of possible reactions R is the product of the
number of polymers of a certain length times the number of bonds,
summed across all possible lengths:
(3) R = (2**M)(M-1) + (2**(M-1))(M-2) + ... + (2**2)(1)
ftp://ftp.princeton.edu/pub/harnad/Psycoloquy/1998.volume.9/Pictures/eqn3a.jpg
= SIGMA[(2**n)(n-1)], n=2 --> M
ftp://ftp.princeton.edu/pub/harnad/Psycoloquy/1998.volume.9/Pictures/image4.jpg
Dividing equation (3) by equation (1), we find that as M increases, the
ratio of reactions to polymers increases by a factor of M-2. This means
that if each reaction has some probability of getting carried out, the
system eventually undergoes a transition to a state where there is a
catalytic pathway to each polymer present. The probability of this
happening shifts abruptly from highly unlikely to highly likely as R/N
increases. This kind of sharp phase transition is a statistical
property of random graphs and related systems such as this one. Random
graphs consist of dots, or nodes, connected to each other by lines or
edges. As the ratio of edges to nodes increases, the probability that
any one node is part of a chain of connected nodes increases, and
chains of connected nodes become longer. When this ratio reaches
approximately 0.5, almost all these short segments become
cross-connected to form one giant cluster (FIGURE 4). Plotting the
size of the largest cluster versus the ratio of edges to nodes yields a
sigmoidal curve. The larger the number of nodes, the steeper the
vertical portion of this curve (referred to as the percolation
threshold).
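This ratio argument is easy to check numerically; a minimal sketch using
equations (1) and (3) directly (the M values are arbitrary):

    for M in (4, 8, 16, 32):
        N = 2 ** (M + 1) - 2                                  # equation (1)
        R = sum((2 ** n) * (n - 1) for n in range(2, M + 1))  # equation (3)
        print(M, round(R / N, 2))   # 2.27, 6.03, 14.0, 30.0: grows as ~(M - 2)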
ftp://ftp.princeton.edu/pub/harnad/Psycoloquy/1998.volume.9/Pictures/gabora.fig4.html
FIGURE 4. PERCOLATION THRESHOLD. When the ratio of edges to nodes
reaches approximately 0.5, short segments of connected nodes join
to form a large cluster that encompasses the vast majority of nodes.
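The threshold itself can be reproduced in a few lines. This sketch
(illustrative node count and seed) grows random graphs with a simple
union-find and reports the largest cluster as a fraction of all nodes:

    import random

    def largest_cluster(nodes, edges, rng):
        parent = list(range(nodes))
        def find(i):                          # union-find with path halving
            while parent[i] != i:
                parent[i] = parent[parent[i]]
                i = parent[i]
            return i
        for _ in range(edges):                # add edges between random nodes
            parent[find(rng.randrange(nodes))] = find(rng.randrange(nodes))
        sizes = {}
        for i in range(nodes):
            r = find(i)
            sizes[r] = sizes.get(r, 0) + 1
        return max(sizes.values())

    rng, nodes = random.Random(1), 10000
    for ratio in (0.1, 0.3, 0.5, 0.7, 0.9):
        frac = largest_cluster(nodes, int(ratio * nodes), rng) / nodes
        print(ratio, round(frac, 3))   # stays tiny, then jumps past ratio ~0.5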
29. Of course, even if catalytic closure is theoretically possible, we
are still a long way from knowing that it is the correct explanation
for the origin of life. How likely is it that an autocatalytic set
would have emerged given the particular concentrations of chemicals and
atmospheric conditions present at the time life began? In particular,
some subset of the R theoretically possible reactions may be physically
impossible; how can we be sure that every step in the synthesis of each
member of an autocatalytic set actually gets catalyzed? Kauffman's
response is: if we can show that autocatalytic sets emerge for a wide
range of hypothetical chemistries (i.e., different collections of
catalytic molecules), then the particular details of the chemistry that
produced life do not matter so long as it falls within this range. We
begin by noting that, much as several different keys sometimes open the
same door, each reaction can be catalyzed not by a single catalyst but
by a hypersphere of catalytic molecules, with varying degrees of
efficiency. So we assign each polymer an extremely low a priori random
probability P of catalyzing each reaction. The lower the value of P,
the greater M must be, and vice versa. Kauffman shows that the values
for M and P necessary to achieve catalytic closure with a probability
of > 0.999 are highly plausible given the conditions of early earth.
Experimental evidence for this theory using real chemistries (Lee et
al. 1996, 1997; Severin et al. 1997), and computer simulations (Farmer
et al. 1986), have been unequivocally supportive. Farmer et al. showed
that in an artificial soup of information strings capable of cleavage
and ligation reactions, autocatalytic sets do indeed arise for a wide
range of values of M and P. FIGURE 5 shows an example of one of the
simplest autocatalytic sets it produced. The original set of polymers
from which an autocatalytic set emerges is referred to as the food set.
In this case it consists of 0, 00, 1, and 11. As it happens, the
autocatalytic set that eventually emerges contains all members of the
original food set. This is not always the case.
ftp://ftp.princeton.edu/pub/harnad/Psycoloquy/1998.volume.9/Pictures/gabora.fig5.html
FIGURE 5. TYPICAL EXAMPLE OF SMALL AUTOCATALYTIC SET. Reactions are
represented by thin black lines connecting ligated polymers to
their cleavage products. Thick green lines indicate catalysis. Dark
ovals represent food set.
30. An interesting question explored in this simulation is: once a set
of polymers has achieved autocatalytic closure, does that set remain
fixed, or is it able to incorporate new polymer species? Farmer et al.
found that some sets were "subcritical" (unable to incorporate new
polymers) and others were "supracritical" (incorporated new polymers
with each round of replication). Which of these two regimes a
particular set fell into depended on P, and the maximum length of the
food set polymers.
31. Now the question is: supposing an autocatalytic set did emerge, how
would it evolve? The answer is fairly straightforward. It is commonly
believed that the primitive self-replicating system was enclosed in a
small volume such as a coacervate or liposome to permit the necessary
concentration of reactions (Oparin 1971; Morowitz 1992; Cemin & Smolin,
in press). Since each molecule is getting duplicated somewhere in the
set, eventually multiple copies of all molecules exist. The abundance
of new molecules exerts pressure on the vesicle walls. This often
causes such a vesicle to engage in a process called budding, where it
pinches off and divides into two twins. So long as each twin contains
at least one copy of each kind of molecule, the set can continue to
self-replicate indefinitely. Replication is far from perfect, so an
offspring is unlikely to be identical to its parent. Different chance
encounters of molecules, or differences in their relative
concentrations, or the arrival of new food molecules, could all result
in different catalysts catalyzing a given reaction, which in turn
alters the set of reactions to be catalyzed. So there is plenty of room
for heritable variation. Error catastrophe is unlikely because, as
mentioned earlier, initially each reaction can be catalyzed not by a
single catalyst but by a hypersphere of potential catalysts, so an
error in one reaction does not have much effect on the set at large
[FOOTNOTE 4]. Selective pressure is provided by the affordances and
limitations of the environment. For example, say an autocatalytic set
of RNA-like polymers arose. Some of its offspring might have a tendency
to attach small molecules such as amino acids (the building blocks from
which proteins are made) to their surfaces. Some of these attachments
inhibit replication, and are selected against, while others favor it,
and are selected for. We now have the beginnings of the kind of
genotype-phenotype distinction seen in present-day life. That is, we
have our first indication of a division of labor between the part of
the organism concerned with replication (in this case the RNA) and the
part that interacts with the environment (the proteins).
32. The autocatalytic origin of life theory circumvents the
chicken-and-egg problem by positing that the same collective entity is
both code and decoder. This entity does not look like a code in the
traditional sense because it is a code not by design but by default.
The code is embodied in the physical structures of the molecules; their
shapes and charges endow them with propensities to react with or
mutually decode one another such that they manifest external structure,
in this case a copy of their collective self. Since autocatalytic sets
appear to be a predictable, emergent outcome in any sufficiently
complex set of polymers, the theory suggests that life is an expected
outcome rather than a lucky long-shot.
III. THE EMERGENCE OF AUTOCATALYSIS IN A COGNITIVE SYSTEM
33. We have taken a look at two paradoxes -- the origin of culture and
the origin of life -- which from hereon will be referred to as OOC and
OOL respectively. The parallels between them are intriguing. In each
case we have a self-replicating system composed of complex, mutually
interdependent parts, and since it is not obvious how either part could
have arisen without the other, it is an enigma how the system came to
exist. In both cases, one of the two components is a storehouse of
encoded information about a self in the context of an environment. In
the OOL, DNA encodes instructions for the construction of a body that
is likely to survive in an environment like the one in which its
ancestors survived. In the OOC, an internal model of the world encodes
information about the self, the environment, and the relationships
between them. In both cases, decoding a segment of this information
storehouse generates another class of information unit that coordinates
how the storehouse itself gets decoded. Decoding DNA generates proteins
that, in turn, orchestrate the decoding of DNA. Retrieving a memory or
concept from the world-view and bringing it into awareness generates an
instant of experience, a meme, which in turn determines which are the
relevant portion(s) of the world-view to use in constructing the next
instant of experience. In both cases it is useful to think of the
relevant class of information units as states in an information space,
each of which can act on a hypersphere of other states. In the memory
model it was the hypersphere of related memes, and in the OOL model it
was the hypersphere of potential catalysts.
34. We have argued that the most likely bottleneck in the OOC is the
establishment of a network of inter-related memes, a world-view, that
progressively shapes and is shaped by a stream of self-triggered
thought. We want to determine how such a complex entity might come to
be. Donald claims that the transition from episodic to memetic culture
would have required a fundamental change in the way the brain operates.
Drawing from the OOL scenario presented above, we will explore the
hypothesis that meme evolution begins with the emergence of a
collective autocatalytic entity that acts as both code and decoder.
This idea was mentioned briefly in Gabora (1996a; 1996b; 1997); here it
is fleshed out in greater detail.
III.1 ESTABLISHING THE CAPACITY FOR ABSTRACTION
35. In the OOL case we asked: what was lying around on the primitive
earth with the potential to form some sort of self-replicating system?
The most promising candidate was catalytic polymers, the molecular
constituents of either protein or RNA. Here we ask an analogous
question: what sort of information unit does the episodic mind have at
its disposal? It has memes, specifically memories of episodes. Episodic
memes then constitute the food set of our system.
36. Next we ask: what happens to the food set to turn it into a
self-replicating system? In the OOL case, food-set molecules catalyzed
reactions on each other that increased their joint complexity,
eventually transforming some subset of themselves into a collective web
for which there existed a catalytic pathway to the formation of each
member molecule. An analogous process might conceivably transform an
episodic mind into a memetic one. Food-set memes activate
redescriptions of each other that increase their joint complexity,
eventually transforming some subset of themselves into a collective web
for which there exists a retrieval pathway to the formation of each
member meme. Much as polymer A brings polymer B into existence by
catalyzing its formation, meme A brings meme B into conscious awareness
by evoking it from memory. As in Section II.2, a retrieval can be a
reminding, a redescription of something in light of new contextual
information, or a creative blend or reconstruction of many stored
memes.
37. How might Groga's mind have differed from that of her ancestors such
that she was able to initiate this kind of transformation? In the OOL
case, it was crucial that the polymers be catalytic. We simply gave
each polymer a small, random probability P of catalyzing each reaction.
In the OOC case, we assume that each of Groga's L memory locations where
the s memes are stored has an RBF with a Gaussian distribution of width
sigma centered on it. Thus the probability that one meme evokes another is
determined by sigma rather than a random probability P. Let us consider
what would happen if, due to some genetic mutation, Groga's activation
threshold were significantly lower than average for her tribe. Thus sigma
is wider, which means that a greater diversity of memes are activated
in response to a given experience, and a larger portion of the contents
of memory merge and surface to awareness in the next instant. Since the
memory is content-addressable, when meme X goes fishing in memory for
itself, sooner or later this large hypersphere is bound to catch a
stored meme that is quite unlike X. For example, let us say that Groga
sees rabbits every day, so there are lots of rabbit memories stored in
her brain. For simplicity, let us say they consist of a sequence of ten
0's followed by a five bit long variable sequence. She happens to look
off in the distance and see a grazing water buffalo, which gets
represented in her focus as 000000011101010. The buffalo meme will be
referred to as meme X. Because the hypersphere is wide, all the
rabbit memories lie close enough to meme X to get evoked in the
construction of the next meme, X' (as is X itself). Since all the
components from which X' is made begin with a string of seven zeros,
there is no question that X' also begins with a string of seven zeros.
These positions might code for features such as has eyes, eats, etc.
The following set of three 1's in the buffalo meme are cancelled out by
the corresponding 0's in the rabbit memes, so in X' they are represented
as *'s. These positions might code for features such as floppy ears.
The last five bits constituting the variable region are also
statistically likely to cancel one another out. These code for other
aspects of the experience, such as, say, the color of the sky that day.
So X' turns out to be the meme 0000000********, the generic category
animal, which then gets stored
in memory in the next iteration. This evocation of animal by the
buffalo episode is not much of a stream of thought, and it does not bring
her much closer to an interconnected conceptual web, but it is an
important milestone. It is the first time she ever derived a new meme
from other memes, her first creative act.
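The cancellation arithmetic behind Groga's first abstraction is easy
to make concrete. The following Python sketch is a toy illustration
of this paragraph only (the three rabbit tails are arbitrary; the
blend rule is just the agreement/cancellation rule described above):
    # Blend evoked memes into one pattern: positions where all evoked
    # memes agree keep their value; conflicting positions cancel to '*'.
    def blend(memes):
        return ''.join(bits[0] if len(set(bits)) == 1 else '*'
                       for bits in zip(*memes))

    buffalo = "000000011101010"                  # meme X
    rabbits = ["0000000000" + tail               # ten 0's + 5 variable bits
               for tail in ("10110", "01101", "11010")]

    print(blend([buffalo] + rabbits))            # -> 0000000********
                                                 # the category "animal"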
III.2 ESTABLISHING A STREAM OF THOUGHT
38. Although lowering the neuron activation threshold was what enabled
Groga to create an abstraction, the penalty for having too low a
threshold would be very high, because successive thoughts would not
necessarily be meaningfully related to one another. Abstract thought,
unlike episodic thought, cannot rely on the continuity of the external
world (i.e., if a desk is in front of you now, it is likely to still
be in front of you a moment from now) to lend coherence to conscious
experience. Too low
a threshold might be expected to result in a cognitive rendition of
superconductivity, where lowering resistance increases correlation
distance and thus a perturbation to any one pattern percolates through
the system and affects even distantly related patterns. (The
free-association of a schizophrenic seems to correspond to what one
might expect of a cognitive system with this property (see Weisberg
1986).) However, if the threshold is extremely high, such that
distributions do not overlap, the attended meme must be identical to one
stored in memory to evoke a retrieval. To produce a steady stream of
meaningfully related yet potentially creative remindings, the threshold
must fall within an intermediate range. This is consistent with
Langton's (1992) finding that the information-carrying capacity of a
system is maximized when its interconnectedness falls within a narrow
regime between order and chaos.
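The threshold's effect can be caricatured numerically. In the Python
sketch below, the probability that a stored meme at Hamming distance
d from the attended meme gets evoked falls off as a Gaussian of width
sigma, following the RBF assumption of paragraph 37; the particular
widths and distances are invented for illustration:
    # Probability that a meme at Hamming distance d is evoked, for an
    # RBF of width sigma (illustrative values only).
    import math

    def p_evoke(d, sigma):
        return math.exp(-d**2 / (2 * sigma**2))

    for sigma in (0.5, 2.0, 8.0):   # too narrow, intermediate, too wide
        print(sigma, [round(p_evoke(d, sigma), 3) for d in range(6)])
    # sigma = 0.5: only near-identical memes are evoked (sterile recall)
    # sigma = 8.0: nearly everything is evoked (no continuity)
    # an intermediate sigma yields related but non-identical retrievals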
39. Thus thoughts do not leap from one unexplored territory of
meme-space to another, but meander from one meme to a similar one in a
region that has proven fruitful and is therefore exceptionally
clustered with memes. This not only increases the frequency of
remindings and abstractions, it provides a thread of continuity linking
one meme to the next. Organized modularity also enhances continuity by
precluding the activation of irrelevant memes. Since statistical
similarity is preserved across sequentially evoked memes in a train of
thought, thinking can be viewed as an internal form of meme
self-replication. It could be argued that the correlation between
consecutive memes is so low that this hardly deserves to be called a
form of self-replication; yet one would not want consecutive memes to
be identical. It might further be objected that Eigen and Schuster's
error catastrophe argument applies here: that the copying fidelity of
this process is so low that errors would quickly accumulate, and in
no time the lineage would die out. But this argument does not apply.
The only reason it is a pitfall
for biological evolution is that copying error tends to impair the
capacity to self-replicate. So long as offspring are as good as their
parents at reproducing themselves, and live long enough to do so, it
does not matter how much error is introduced from one generation to the
next. It is only when a generation dies without having reproduced that
there is a problem. In the biological world, once something is dead it
cannot bring forth life [FOOTNOTE 5]. But in memetic evolution this is
not necessarily the case. To show why this is so, say that half-way
through the train of consecutive memes in Einstein's brain that
culminated in the theory of relativity, a tiger burst in through the
window. The correlation between the relativity meme of one instant and
the tiger-perception-meme of the next instant would be almost zero.
This momentous memetic lineage would come to a screeching halt. But
would it be lost forever? No. Sooner or later, once the tiger situation
was taken care of, the relativity stream of thought would resume
itself. Memory (and external artifacts) function as a memetic sperm
bank, allowing a defunct ancestral line to be brought back to life and
to resume self-replication. The upshot is that in culture you can get away
with a much higher error rate than in biology.
40. Cultural evolution has not only an internal form of replication,
but also an internal means of generating variation. In a stream of
thought, consecutive memes are not exact replicas; each meme is a
variation of its predecessor. It also has an internal form of
selection. Selection comes in the form of drives, needs,
attention-focusing mechanisms, and the associative organization of
memory, which constrain how one meme evokes another. Thus all the
components of an evolutionary process take place in the mind of an
isolated individual. The memory-driven generation of a stream of
correlated memes can be viewed as a coevolutionary relationship between
replication, variation, and selection, and the process of
representational redescription can itself be redescribed as the
selective generation of variant replicants. Embedded in the outer,
inter-individual sheath of memetic evolution we find a second
intra-individual sheath, where the processes of replication, variation,
and selection are not spatiotemporally separated but intimately
intertwined. Together they weave a stream of thought, one meme fluidly
transmuting into the next.
41. Thus the semantic continuity of a stream of thought makes memory
navigable despite its sparseness. Once "animal" has been evoked and stored
in memory, the locations involved habituate and become refractory (so,
for instance, "animal" does not recursively evoke "animal").
However, locations storing memes that have some animal features, but
that were not involved in the storage of "animal", are still active.
Thus "animal" might activate "tiger", which might evoke "hyena",
etc., strengthening
associations between the abstract category and its instances. Other
abstractions, such as container, form in analogous fashion. As Groga
accumulates both episodic memes and abstractions, the probability that
any given attended meme is similar enough to some previously stored
meme to activate it increases. Therefore reminding acts increase in
frequency, and eventually become streams of remindings, which get
progressively longer. Groga is now capable of a train of thought. Her
focus is no longer just a spot for coordinating stimuli with action; it
is now a forum for abstractive operations that emerge through the
dynamics of iterative retrieval.
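A minimal Python sketch of such a train of thought (the memes,
patterns, and distances are invented for illustration; habituation is
modelled simply by removing recently evoked memes from the candidate
pool):
    # Iterative retrieval with habituation: the stored meme nearest the
    # current one (wildcards '*' ignored, as in an abstraction) is
    # evoked next, and recently evoked memes become refractory.
    def hamming(a, b):
        return sum(x != y for x, y in zip(a, b) if '*' not in (x, y))

    memory = {"animal": "0000000********",
              "tiger":  "000000011111010",
              "hyena":  "000000011110110",
              "gourd":  "011010100101001"}

    current, refractory = "animal", {"animal"}
    for _ in range(3):
        candidates = {name: hamming(memory[current], pattern)
                      for name, pattern in memory.items()
                      if name not in refractory}
        current = min(candidates, key=candidates.get)
        refractory.add(current)
        print(current)    # animal -> tiger -> hyena -> gourd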
III.3 AUTOCATALYTIC CLOSURE IN A WEB OF SPARSE, DISTRIBUTED MEMES
42. We have seen how our best-guess model of human cognition achieves a
stream of thought. How do we know that streams of thought will induce a
phase transition to a critical state where for some subset of memes
there exists a retrieval pathway to each meme in the subset? In the OOL
case, we had to show that R, the number of reactions, increases faster
than N, the number of polymers. We found that R/N increased by a factor
of M-2, where M was the maximum number of monomers per polymer. Because
of the highly parallel nature of this system, it was reasonable to
equate potential reactions with actual reactions, and therefore to
assume that the new polymers resulting from these reactions actually
exist (and can themselves partake in reactions). Similarly, we now want
to show that some subset of the memes stored in an individual's mind
inevitably reach a critical point where there is a path by which each
meme in that subset can get evoked. But here, it is not reasonable to
assume that all N perceptible memes actually exist (and can therefore
partake in retrieval operations). Their number is severely curtailed by
the number of memory locations, the variety of perceptual experience,
and the fact that meme retrieval, though distributed at the storage
end, is serial at the awareness end. The rate at which streams of
thought reorganize the memetic network is limited by the fact that
everything is funnelled through the focus; we can only figure one thing
out at a time. This presents a bottleneck that was not present in the
OOL scenario. As a result, whereas OOL polymers underwent a sharp
transition to a state of autocatalytic closure, any analogous
transition in inter-meme relatedness is expected to take place
gradually. So we need to show that R, the diversity of ways one meme
can evoke another, increases faster not than N but than s, the number
of memes that have made it through this bottleneck. That is, as the memory
assimilates memes, it comes to have more ways of generating memes than
the number of memes that have explicitly been stored in it.
43. This brings us to another complication, which further prolongs
cognitive development. Since short, simple molecules are more abundant
and readily formed than long, complex ones, in the OOL case it made
sense to expect that the food set molecules were the shortest and
simplest members of the autocatalytic set that eventually formed.
Accordingly, in simulations of this process the direction of novelty
generation is outward, joining less complex molecules to form more
complex ones through AND operations (see FIGURE 5). In contrast, the
memetic food-set memes are complex, consisting of all attended
features of an episode. In order for them to form an interconnected
web, their interactions tend to move in the opposite direction,
starting with relatively complex memes and forming simpler, more
abstract ones through OR operations. The net effect of the two is the
same: a network emerges, and joint complexity increases. But what this
means for the OOC is that there are numerous levels of autocatalytic
closure, which convey varying degrees of world-view interconnectedness
and consistency on their meme host. These levels correspond to
increased penetration of the nested hypercubes, of dimension n-1, n-2, etc.,
implicit in the memory space. Since it is difficult to visualize a set
of nested, multidimensional hypercubes, we will represent this
structure as a set of concentric circles, such that the outer skin of
this onion-like structure represents the hypercube with all n
dimensions, and deeper circles represent lower-dimensional hypercubes
(FIGURE 6). Obviously, not all the nested hypercubes can be shown. The
points of our original hypercube are represented as points along the
perimeter of these circles, and k values (centermost location where a
meme is stored) are shown as large, black dots. The outermost shell
encodes memes in whatever form they are in the first time they are
consciously encountered. This is all the episodic mind has to work with.
In order for one meme in this shell to evoke another, they have to be
extremely similar at a superficial level. In a memetic mind, however,
related concepts are within reach of one another because they are
stored in overlapping hyperspheres.
ftp://ftp.princeton.edu/pub/harnad/Psycoloquy/1998.volume.9/Pictures/gabora.fig6.html
FIGURE 6. ROLE OF ABSTRACTIONS IN CREATIVE THOUGHT. For ease of
visualization, the set of nested hypercubes representing the
space of possible memes is shown as a set of concentric circles,
where deeper circles store deeper layers of abstraction (lower
dimensional hypercubes). A black dot represents the centermost
storage location for a specific meme. Water Container is a more
general concept than Gourd or Knotted Intestine, and is therefore
stored at a deeper layer. The green circle around each stored meme
represents the hypersphere within which the meme gets stored and from
which the next meme is retrieved. Gourd and Knotted Intestine are too
far apart in Hamming distance for one to evoke the other
directly. However, by attending to the abstraction Container, which
ignores all dimensions except "concave" and "impermeable", the
memetic mind decreases the apparent Hamming distance between them.
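The geometry of FIGURE 6 can be sketched in a few lines of Python.
Apart from "concave" and "impermeable", the feature dimensions and
values below are invented for illustration; the point is only that
projecting onto the dimensions an abstraction retains shrinks the
apparent Hamming distance between its instances:
    # Distance over a chosen subset of dimensions: an abstraction
    # "attends to" only the dimensions it retains.
    def distance(a, b, dims):
        return sum(a[d] != b[d] for d in dims)

    gourd     = {"concave": 1, "impermeable": 1,
                 "plant": 1, "rigid": 1, "edible": 1}
    intestine = {"concave": 1, "impermeable": 1,
                 "plant": 0, "rigid": 0, "edible": 0}

    print(distance(gourd, intestine, list(gourd)))   # 3: too far apart
    print(distance(gourd, intestine,
                   ["concave", "impermeable"]))      # 0: neighbors once
                                                     # Container is attended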
44. Under what conditions does R increase faster than s? As it
turns out, abstraction plays a crucial role. To determine how
abstraction affects R, let us assume for the moment that memory is
fully connected. Clearly this is not the case, but this simplification
illustrates some trends which also apply to a memory wherein sparseness
is compensated for by restricted distributed activation. We will be
conservative and limit the sort of retrieval event under consideration
to abstraction, and the redescription of a meme as an instance of an
abstraction (including analogical thought). Abstractions have n
dimensions, where n ranges from a minimum of m to a maximum of M.
R_A, the number of ways a retrieval can occur through abstraction,
equals the number of retrieval paths allowed by an n-dimensional
abstraction, multiplied by the number of n-dimensional abstractions,
summed over all values of n from m to M-1. The number of retrieval
paths equals the number of memes that are instances of an n-dimensional
abstraction = 2**(M-n). The number of n-dimensional abstractions is
equal to the binomial coefficient of M and n. The result is multiplied
by two, since an abstraction can evoke an instance, and likewise, an
instance can evoke an abstraction.
(4)  R_A = 2( 2**(M-m) C(M,m) + 2**(M-(m+1)) C(M,m+1) + ... + 2 C(M,M-1) )
         = 2 SIGMA(n=m to M-1) 2**(M-n) C(M,n)
     where C(M,n) denotes the binomial coefficient "M choose n".
The key thing to note is that lower-dimensional memes allow
exponentially more retrieval paths. Abstraction increases s by creating
a new meme, but it increases R more, because the more abstract the
concept, the greater the number of memes a short Hamming distance away
(since |x_i - k_i| = 0 for the irrelevant dimensions). A second thing to
note is that the number of abstractions at a given value of n
increases as n approaches M/2. Taken together, these points mean that the more deeply a mind
delves into lower-dimensional abstractions, the more the distribution
in FIGURE 1 rises and becomes skewed to the left. The effect is
magnified by the fact that the more active a region of meme space, the
more likely that an abstraction will be positioned there, and thus
abstractions beget abstractions recursively through positive feedback
loops. Reminding incidents also contribute to R, making it all the
more likely that some meme will get activated and participate in a
given retrieval. So, whereas R increases as abstraction makes relationships
increasingly explicit, s levels off as new experiences have to be
increasingly unusual in order to count as new and get stored in a new
constellation of locations. Furthermore, when the carrying capacity of
the memory is reached, s plateaus, but R does not. Thus, as long as
the neuron activation threshold is low enough to permit abstraction
and high enough to preserve temporal continuity, the average value of n
decreases. Sooner or later the system is expected to reach a critical
percolation threshold such that R increases exponentially faster than
s, as in FIGURE 4.
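Equation (4) is easy to evaluate numerically. In the Python sketch
below (M = 15 and the values of m are chosen arbitrarily), R_A
explodes as the minimum abstraction dimension m decreases, i.e. as
the mind delves into deeper abstractions, while each new abstraction
adds only one meme to s:
    # Retrieval paths through abstraction, per equation (4).
    from math import comb

    def R_A(M, m):
        return 2 * sum(2**(M - n) * comb(M, n) for n in range(m, M))

    M = 15
    for m in (14, 12, 10, 8, 6):
        print(m, R_A(M, m))
    # each step down in m multiplies the number of retrieval paths,
    # while s grows by only one per new abstraction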
45. So long as R does indeed eventually increase faster than s, Groga's
memory becomes so densely packed that any meme that comes to occupy the
focus is bound to be close enough in Hamming distance to some
previously stored meme(s) to evoke it. The memory (or some portion of
it) is holographic in the sense that there is a pathway of associations
from any one meme to any other. Together they form an autocatalytic set.
What was once just a collection of isolated memories is now a
structured network of concepts, instances, and relationships -- a
world-view. This most primitive level of autocatalytic closure is
achieved when stored episodes are interconnected by way of abstractions
just a few onionskin layers deep, and streams of thought zigzag amongst
these superficial layers. A second level occurs when relationships
amongst these abstractions are identified by higher-order abstractions
at even deeper onionskin layers, etc. Once Groga's memory has defined
an abstraction, identified its instances, and chunked them together in
memory, she can manipulate the abstraction much as she would a concrete
episode. Reflecting on an idea amounts to reflecting it back and forth
off onionskin layers of varying depths, refining it in the context of
its various interpretations. The conscious realization of the logical
operators AND, OR, and NOT is expected to significantly transform
Groga's world-view by enabling conscious symbol manipulation. Other
particularly useful abstractions such as mine, depth, or time, as well
as frames (Barsalou, in press), scripts (Schank & Abelson 1977), and
schemas (Minsky 1985), are also expected to induce reorganization. Just
as in a sand pile perched at the proverbial edge of chaos a collision
between two grains occasionally triggers a chain reaction that
generates a large avalanche, one thought occasionally triggers a chain
reaction of others that dramatically reconfigure the conceptual
network. Rosch's (1978) work on basic level categories suggests that
the way we organize information is not arbitrary but emerges in such a
way as to maximize explanatory power. It would not be surprising to
find that the number of categories and their degree of abstraction
exhibit the same kind of power law relationship as one finds in other
emergent systems (Bak, Tang & Wiesenfeld 1988).
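One way to state the closure criterion operationally: a set of memes
achieves autocatalytic closure when its evocation graph is strongly
connected, i.e. when some chain of retrievals leads from any member
meme to any other. A Python sketch (the graph below is invented for
illustration):
    # World-view closure as strong connectivity of the evocation graph.
    def reachable(graph, start):
        seen, stack = set(), [start]
        while stack:
            node = stack.pop()
            if node not in seen:
                seen.add(node)
                stack.extend(graph[node])
        return seen

    evokes = {"buffalo": ["animal"], "animal": ["tiger", "rabbit"],
              "tiger": ["hyena"], "hyena": ["animal"],
              "rabbit": ["animal", "buffalo"]}

    closed = all(reachable(evokes, m) == set(evokes) for m in evokes)
    print(closed)   # True: a retrieval pathway exists to every member meme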
46. How does an interconnected world-view help Groga manifest the skills
that differentiate a memetic mind from an episodic one? The capacity to
maintain a stream of self-triggered memes enables her to plan a course
of action, and to refine behavior by incorporating kinesthetic feedback
into a meme sequence. The ability to generate abstractions opens up a
vast number of new possibilities for Groga. It allows her to
incorporate more of the structure of the world into her mental model of
it. This increases behavioral flexibility by enabling her to define
elements of the world in terms of their substitutable and complementary
relationships. (For example, if she usually makes bows out of wood X,
but she cannot find any wood X, and if wood Y is as strong and flexible
as wood X, then wood Y might substitute for wood X.) The power of
abstraction also enables her to express herself artistically by
extricating memes from the constraints of their original domain and
filtering the resulting pattern through the constraints of other
domains. For example, she can translate the scene before her into a
sequence of motor commands that render it as a cave painting or stone
carving, or transform the pattern of information that encodes the
sorrow she experienced at her child's death into a song. Finally,
abstraction enables Groga to communicate with others through spoken or
nonverbal forms of language. This brings us to the issue: how does the
world-view replicate?
III.4 SOCIAL INTERACTION FACILITATES WORLD-VIEW EMERGENCE
47. Now that we have an autocatalytic network of memes, how does it
self-replicate? In the OOL scenario, polymer molecules accumulate one
by one until there are at least two copies of each, and their shell
divides through budding to create a second replicant. In the OOC
scenario, Groga shares concepts, ideas, stories, and experiences with
her children and tribe members, spreading her world-view meme by meme.
Categories she had to invent on her own are presented to and
experienced by others much as any other episode. They are handed a
shortcut to the category; they do not have to engage in abstraction to
obtain it.
48. Recall how the probability of autocatalysis in Kauffman's simulation
could be increased by raising either the probability of catalysis or
the number of polymers (since it varied exponentially with M).
Something similar happens here. Eventually, once enough of Groga's
abstractions have been assimilated, her tribe members' memories become
so densely packed that even if their neuron activation thresholds are
higher than Groga's, a version of Groga's world-view snaps into place in
their minds. Each version resides in a different body and encounters
different experiences. These different selective pressures sculpt each
copy of Groga's original world-view into a unique internal model of the
world. Small differences are amplified through positive feedback,
transforming the space of viable world-view niches. Individuals whose
activation threshold is too small to achieve world-view closure are at a
reproductive disadvantage, and over time are eliminated from the
population. Eventually the proclivity for an ongoing stream of thought
becomes so firmly entrenched that it takes devoted yogis years of
meditation to even briefly arrest it. There is selective pressure for
parents who monitor their child's progress in abstraction and
interact with the child in ways that promote the formation of new
abstractions at the next level up. Recall the discussion in Section II.4 concerning the
incorporation of new polymer species by supracritical autocatalytic
sets. This kind of parental guidance is analogous to handcrafting new
polymers to be readily integrated into a particular autocatalytic set;
in effect it keeps the child's mind perpetually poised at a
supracritical state. Language provides a means for individuals to
mutually enrich one another's world-views, and to test their world-views
against each other, and in so doing prompt one another to penetrate
deeper and deeper into the onion.
49. Clearly social processes are an integral component of cultural
evolution. In fact the origin of culture is often unquestioningly
equated with the onset of the capacity for social transmission.
However, as many authors have pointed out (e.g. Darwin 1871; Plotkin
1988), although transmission is widespread in the animal kingdom, no other
species has anything remotely approaching the complexity of human
culture. Moreover, although in practice transmission plays an
important role, is it crucial? If, for example, you were the only
human left on the planet, but were able to live forever, would meme
evolution grind to a halt? If you were to come up with some unique
dance, would you not be exploring the space of possible dance memes even
though no one was watching? If you found an ingenious way to fix a
broken toaster, would you not still have invented a novel meme?
50. In biological evolution, transmission and replication go hand in
hand; genetic information gets replicated and is transmitted to
offspring. But that is not necessarily the only way of getting the job
done. In memetic evolution, the most obvious means of meme replication
is through social processes such as teaching or imitation, but there is
a second form of replication that takes place within an individual. We
noted earlier that in the mind of someone engaged in a stream of
thought, each meme is a statistically similar variant of the one that
preceded and prompted it. It is in this sense that they self-replicate
without necessarily being transmitted to another host. Thus there need
not necessarily be more than one individual for a meme to evolve.
Nevertheless, although intra-individual meme replication is sufficient
for evolving memes, the culture of a single individual would be
extremely impoverished compared to that of a society of interacting
individuals, because the number of memes increases exponentially as a
function of the number of interacting memetic-level individuals. As a
simple example, a single memetic individual who invents ten memes is
stuck with just those ten memes. A society of ten interacting
individuals, only one of whom has reached the memetic stage and can
invent ten memes, is no better off; there are still just ten memes. In
a society of ten noninteracting individuals, each of whom invents ten
memes but does not share them, each individual still has only ten memes.
But in a society where each of the ten interacting individuals invents
ten memes and shares them, each individual ends up with one hundred
memes. The bottom line is: culture as we know it, with its explosive
array of meaningful gestures, languages, and artifacts, depends on both
intra-individual and inter-individual meme replication.
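The counting in this paragraph compresses into a line of Python (k =
10 memes invented per memetic-level individual, with all-to-all
sharing assumed in the interacting cases):
    k = 10    # memes invented per memetic-level individual
    scenarios = {
        "lone inventor":                  1 * k,   # ten memes
        "ten individuals, one inventor":  1 * k,   # still ten memes
        "ten inventors, no sharing":      1 * k,   # ten memes each
        "ten inventors, full sharing":   10 * k,   # one hundred memes each
    }
    for name, n in scenarios.items():
        print(name, n)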
51. In fact it is possible that cognitive closure as described above
first occurred at the level of the group, within a collection of
interacting individuals, and cognitive closure at the level of the
individual came into existence some time later. (The two need not be
mutually exclusive; it is possible that group-level closure could
persist after the arrival of individual-level closure.)
IV. IMPLICATIONS
52. In this section we explore some implications of the autocatalytic
cognition hypothesis. This is the most speculative section of what is
admittedly a speculative paper.
IV.1 WHY DON'T ANIMALS EVOLVE CULTURE?
53. As noted in Section III.2, the penalty for having too low a neuron
activation threshold is very high. Each meme has little relevance to
the one that preceded it, and thinking is so garbled that survival
tasks are not accomplished. On the other hand, too high a neuron
activation threshold is not life-threatening. The focus is virtually
always affected by external stimuli or internal drives, and memory is
reserved for recalling how some goal was accomplished in the past. This
may be the situation present in most brains on this planet, and though
not harmful, it has its own drawbacks. A stream of thought dies out
long before it produces something creative. However, this may not be of
practical consequence to other species. The advantages of a stream of
thought would largely be lost on nonhuman animals because they have
neither the vocal apparatus nor the manual dexterity and freedom of
upper limbs to implement creative ideas. (Language, for example,
drastically increases the degrees of freedom of what can be expressed.)
No matter how brilliant their thoughts were, it would be difficult to
do something useful with them. Moreover, in an evolutionary line there
is individual variation, so the lower the average activation threshold,
the higher the fraction of individuals for which it is so low that they
do not survive. It seems reasonable to suggest that animals are not
prohibited from evolving complex cognition a priori, but that there is
insufficient evolutionary pressure to tinker with the threshold until
it achieves the requisite delicate balance to sustain a stream of
thought, or to establish and refine the necessary feedback mechanisms
to dynamically tune it to match the degree of conceptual fluidity
needed at any given instant. It may be that humans are the only species
for which the benefits of this tinkering process have outweighed the
risks.
IV.2 PSYCHOLOGICAL CONSIDERATIONS
54. Initially a child is expected to be unselective about meme
acquisition, since (1) it does not know much about the world yet, so it
has no basis for choosing, and (2) its parents have lived long enough
to reproduce, so they must be doing something right. However, just as
importing foreign plants can bring ecological disaster, the
assimilation of a foreign meme can disrupt the established network of
relationships amongst existing memes. Therefore the child develops
mental censors that ward off internalization of potentially disruptive
memes. Censors might also be erected when a meme is embarrassing or
disturbing or threatening to the self-image (Minsky 1985). This could be
accomplished by temporarily increasing the activation threshold so as
to prematurely terminate the meme's assimilation into the world-view.
Much as erecting a fence increases the probability that people will
stay on either one side or the other, censorship warps the probability
that a meme will partake in any particular stream of thought, such that
the individual either avoids the censored meme or dwells on it
excessively. (This seems to be consistent with our bipolar attitude
toward highly censored subjects such as aggression and sexuality.) Thus
censorship precludes incorporation of a meme into the autocatalytic
portion of the memory, and thereby interferes with its holographic
nature.
55. Categorization creates new lower-dimension memes, which makes the
space denser, and increases susceptibility to the autocatalytic state.
On the other hand, creating new memes by combining stored memes could
interfere with the establishment of a sustained stream of thought by
decreasing the modularity of the space, and thereby decreasing density.
If cross-category blending indeed disrupts conceptual networking, one
might expect it to be less evident in younger children than in older
ones, and this expectation is borne out experimentally
(Karmiloff-Smith 1990). There is evidence of an analogous shift in
human history from an emphasis on ritual and memorization toward an
emphasis on innovation (Donald, 1991). As world-views become more
complex, the artifacts we put into the world become more complex, which
necessitates even more complex world-views, etc.; thus a positive
feedback cycle sets in.
56. We mentioned that animals are hard-wired to respond appropriately
to certain stimuli, as are humans. However, the ability of humans to
develop world-views with which they can make decisions about what
action to take may obviate the need for some of this hard-wiring.
Genetic mutations that interfere with certain regions of hard-wiring
may not be selected against, and may actually be selected for, because
in the long run they promote the formation of concepts that generate
the same responses but can be used in a more context-sensitive manner.
However, this increases the amount of computation necessary to achieve a
workable world-view.
IV.3 THE ROLE OF AUTOCATALYSIS IN EVOLUTIONARY PROCESSES
57. Returning briefly to the origin-of-life puzzle, recall that
traditional attempts to explain how something as complex as a
self-replicating entity could arise spontaneously entail the
synchronization of a large number of vastly improbable events.
Proponents of such explanations argue that the improbability of the
mechanisms they propose does not invalidate them, because it only had
to happen once; as soon as there was one self-replicating molecule, the
rest could be copied from this template. However, Kauffman's theory
that life arose through the self-organization of a set of autocatalytic
polymers suggests that life might not be a fortunate chain of accidents
but rather an expected event.
58. Although there is much evidence for this hypothesis, definitive
proof that it is the correct explanation of how life originated will be
hard to come by. However, if we are interested in the more general
question of how information evolves, we now have another data point,
another evolutionary process to figure into the picture. Culture, like
biological life, is a system that evolves information through
variation, selection and replication. In fact, it has two layers of
replication, one embedded in the other, and to actualize the inner
layer of replication, all members of the culture must establish their
own personal world-view, which generates their own unique autonomous
stream of sequentially activated self-similar patterns. Consistent with
Kauffman's assertion that the bootstrapping of an evolutionary process
is not an inherently improbable event, the "it only had to happen once"
argument does not hold water here because the cultural analog of the
origin of life takes place in the brain of every young child.
Autocatalysis may well be the key to the origin of not only biological
evolution, but any information-evolving process.
V. CONCLUSIONS
59. Cultural evolution presents a puzzle analogous to the origin of
life: the origin of an internal model of the world that both generates
and is generated by streams of self-sustained, internally driven
thought. In this target article we have explored a plausible scenario
for how cultural evolution, like biological evolution, could have
originated in a phase transition to a self-organized web of catalytic
relations between patterns. TABLE 1 presents a summary of how the
components of the proposed theory of cultural autocatalysis map onto
their biological counterparts.
EVOLUTIONARY SYSTEM   BIOLOGY                   CULTURE
INFORMATION UNIT      Polymer molecule          Meme
INTERACTION           Catalysis                 Reminding, retrieval,
                                                reconstruction
AUTOCATALYTIC SET     Catalytically closed      Network of inter-related
                      set of polymer            memes; world-view
                      molecules; primitive
                      organism
REPLICATION           Duplication of each       Correlation between
                      molecule, segregation     consecutive memes; social
                      via budding               learning: teaching, imitation
SELECTION             Physical constraints      Associations, drives;
                      on molecules,             social pressures,
                      affordances and           affordances and
                      limitations of            limitations of
                      environment               environment
VARIATION             Novel food molecules,     Sensory novelty,
                      nonspecificity of         blending; expressive
                      catalysis, replication    constraints,
                      error                     misunderstanding, etc.
TABLE 1: Components of an autocatalytic theory of biological evolution,
and their cultural counterparts.
60. The scenario outlined here is nascent. Putting the pieces together
would require the cooperation of neuroscientists, developmental
psychologists, cognitive scientists, sociologists, anthropologists,
archeologists, and perhaps others. Nevertheless, I know of no other
serious attempt to provide a functional account of how memetic
evolution got started. Whether or not the scenario outlined here turns
out to be precisely correct, my hope is that it draws attention to the
problem of cultural origins, suggests what a solution might look like,
and provides a concrete example of how we gain a new perspective on
cognition by viewing it as an architecture that has been sculpted to
support a second evolutionary process, that of culture.
ACKNOWLEDGEMENTS
I would like to thank David Chalmers, Merlin Donald,
Bruce Edmonds, Harold Edwards, Stuart Kauffman, Francis Heylighen,
Norman Johnson, Wolfgang Klimesch, William Macready, and Mario
Vaneechoutte for helpful discussion and comments on the manuscript.
FOOTNOTES
1. Although this may not be completely accurate; see Donald (1993) and
accompanying commentary.
2. This number is perhaps better appreciated when we realize that its
magnitude is 10**300.
3. In a reducing atmosphere there is no free oxygen present. The
presence of ferrous (FeO) rather than ferric (Fe2O3) iron in
primitive rock leads us to believe that the earth's atmosphere was
reducing when life began. (It is no longer so today.)
4. See Kauffman (1993) for an interesting discussion of why error
catastrophe becomes a serious problem as the parts of the system
become more co-adapted.
5. The niche it filled still exists, so there is still selective
pressure for it to evolve all over again. But the information has to
re-evolve (as opposed to being retrieved from storage).
REFERENCES
Bak, P., Tang, C. & Wiesenfeld, K. (1988) Self-organized criticality.
Physical Review A 38: 364.
Barkow, J. H., Cosmides, L. & Tooby, J. (1992) The adapted mind:
Evolutionary psychology and the generation of culture. Oxford
University Press.
Barsalou, L. W. (in press) Perceptual symbol systems. Behavioral and
Brain Sciences.
Bickerton, D. (1990) Language and species, University of Chicago
Press.
Cemin, S. C. & Smolin, L. (in press) Coevolution of membranes and
channels: A possible step in the origin of life. Submitted to Journal
of Theoretical Biology (October, 1997).
Corballis, M. C. (1991) The lopsided ape: Evolution of the generative
mind, Cambridge University Press.
Darwin, C. (1871) The descent of man, John Murray Publications.
Donald, M. (1991) Origins of the modern mind, Harvard University
Press.
Donald, M. (1993a) Precis of Origins of the modern mind: Three stages
in the evolution of culture and cognition. Behavioral and Brain
Sciences 16: 737-791.
ftp://ftp.princeton.edu/pub/harnad/BBS/.WWW/bbs.donald.html
Donald, M. (1993b) Human cognitive evolution: What we were, what we are
becoming. Social Research 60 (1): 143-170.
Eigen, M. & Schuster, P. (1979) The hypercycle: A principle of natural
self-organization. Springer.
Farmer, J. D., Kauffman, S. A. & Packard, N. H. (1987) Autocatalytic
replication of polymers. Physica D 22: 50-67.
Gabora, L. (1996a) Culture, evolution and computation. In Proceedings
of the Second Online Workshop on Evolutionary Computation. Society of
Fuzzy Theory and Systems.
http://www.bioele.nuee.nagoya-u.ac.jp/wec2/papers/p023.html
Gabora, L. (1996b) A day in the life of a meme. Philosophica, 57,
901-938. Invited manuscript for special issue on concepts,
representations, and dynamical systems.
http://www.lycaeum.org/~sputnik/Memetics/day.life.txt
Gabora, L. (1997) The origin and evolution of culture and creativity.
Journal of Memetics: Evolutionary Models of Information Transmission
Vol. 1, Issue 1.
http://www.cpm.mmu.ac.uk/jom-emit/1997/vol1/gabora_l.html
Hinton, G. E. & Anderson, J. A. (1981) Parallel models of associative
memory. Lawrence Erlbaum Associates.
Hoyle, F. & Wickramasinghe, N. C. (1981) Evolution from space, Dent.
Joyce, G. F. (1987) Non-enzymatic, template-directed synthesis of
informational macromolecules. In: Cold Spring Harbor Symposia on
Quantitative Biology 52. Cold Spring Harbor Laboratory, New York.
Kanerva, P. (1988) Sparse distributed memory, MIT Press.
Karmiloff-Smith, A. (1990) Constraints on representational change:
Evidence from children's drawing. Cognition 34: 57-83.
Karmiloff-Smith, A. (1992) Beyond modularity: A developmental
perspective on cognitive science, MIT Press.
Karmiloff-Smith, A. (1994). Precis of Beyond modularity:
A developmental perspective on cognitive science. Behavioral and Brain
Sciences 17 (4): 693-745.
ftp://ftp.princeton.edu/pub/harnad/BBS/.WWW/bbs.karmsmith.html
Kauffman, S. A. (1993) Origins of order, Oxford University Press.
Langton, C. G. (1992) Life at the edge of chaos. In: Artificial life II,
eds. C. G. Langton, C. Taylor, J. D. Farmer & S. Rasmussen,
Addison-Wesley.
Lee, D. H., Granja, J. R., Martinez, J. A., Severin, K. & Ghadiri,
M. R. (1996) A self-replicating peptide. Nature 382: 525-528.
Lee, D. H., Severin, K., Yokobayashi, Y. & Ghadiri, M. R. (1997)
Emergence of symbiosis in peptide self-replication through a
hypercyclic network. Nature 390: 591-594.
Lieberman, P. (1991) Uniquely human: The evolution of speech, thought,
and selfless behavior, Harvard University Press.
Miller, S. L. (1955) Production of some organic compounds under
possible primitive earth conditions. Journal of the American Chemical
Society 77: 2351-2361.
Minsky, M. (1985) The society of mind, Simon and Schuster.
Morowitz, H. J. (1992) The beginnings of cellular life, Yale University
Press.
Olton, D. S. (1977) Spatial memory. Scientific American 236: 82-98.
Oparin, A. I. (1971) Routes for the origin of the first forms of life.
Sub. Cell. Biochem. 1: 75.
Orgel, L. E. (1987) Evolution of the genetic apparatus: A review. In
Cold Spring Harbor Symposia on Quantitative Biology Vol. 52. Cold
Spring Harbor Laboratory, New York.
Plotkin, H. C. (1988) The role of behavior in evolution, MIT Press.
Rosch, E. (1978) Principles of categorization. In: Cognition and
categorization, eds. E. Rosch & B. B. Lloyd, Erlbaum.
Rumelhart, D. E. & McClelland, J. L. (1986) Parallel distributed
processing: Explorations in the microstructure of cognition. MIT
Press.
Schank, R., & Abelson, R. P. (1977) Scripts, plans, goals, and
understanding: An inquiry into human knowledge structures. Lawrence
Erlbaum Associates.
Severin, K., Lee, D. H., Kennan, A. J. & Ghadiri, M. R. (1997) A
synthetic peptide ligase. Nature 389: 706-709.
Tomasello, M., Kruger, A. C. & Ratner, H. H. (1993) Cultural learning.
Behavioral and Brain Sciences 16: 495-552.
Tooby, J. & Cosmides, L. (1989) Evolutionary psychology and the
generation of culture, Part I. Ethology and Sociobiology, 10, 29-49.
Weisberg, R.W. (1986) Creativity: Genius and other myths, Freeman.