Computation and Human Experience

Phil Agre (pagre who-is-at weber.ucsd.edu)
Tue, 16 Sep 1997 20:07:31 -0700 (PDT)

(Please do not quote from this version, which changed slightly in proof.)

1. Activity

Computational inquiry into human nature originated in the years after
World War II. Scientists mobilized into wartime research had developed
a series of technologies that lent themselves to anthropomorphic
description, and once the war ended these technologies inspired novel
forms of psychological theorizing. A servomechanism, for example,
could aim a gun by continually sensing the target's location and pushing
the gun in the direction needed to intercept it. Technologically
sophisticated psychologists and mathematicians observed that this
feedback cycle could be described in human-like terms as pursuing a
purpose based on awareness of its environment and anticipation of the
future. New methods of signal detection could likewise be described
as making perceptual discriminations, and the analytical tools of
information theory soon provided mathematical ways to talk about
communication.
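
To make the feedback cycle concrete, here is a minimal sketch in Python
of the kind of sense-and-correct loop a servomechanism embodies. This is
my own illustration, not anything from the period; the function names and
the gain value are arbitrary assumptions.

    # One cycle of the feedback loop: sense the discrepancy between the
    # gun's current angle and the target's angle, then push the gun a
    # fraction of the way toward it.
    def servo_step(gun_angle, target_angle, gain=0.5):
        error = target_angle - gun_angle      # sense the target's location
        return gun_angle + gain * error       # push toward interception

    angle = 0.0
    for _ in range(20):                       # the cycle repeats continually
        angle = servo_step(angle, target_angle=90.0)
    print(round(angle, 2))                    # converges on 90.0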

In the decades after the war, these technical ideas provided the
intellectual license for a counterrevolution against behaviorism and a
restoration of scientific status to human mental life. The explanatory
power of these ideas lay in a suggestive confluence of metaphor,
mathematics, and machinery. Metaphorical attributions of purpose
were associated with the mathematics of servocontrol and realized
in servomechanisms; metaphorical attributions of discrimination were
associated with the mathematics of signal and noise and realized in
communications equipment; and metaphorical attributions of communication
were associated with the mathematics of information theory and realized
in coding devices. The new psychology sought to describe human
beings using vocabulary that could be metaphorically associated with
technologically realizable mathematics.

The development of the stored-program digital computer put this project
into high gear. It is a commonplace that the computer contributed a
potent stock of metaphors to modern psychology, but it is important
to understand just how these metaphors informed the new research.
The outlines of the project were the same as with servocontrol,
signal detection, and coding theory: a bit of metaphor attached to a bit
of mathematics and realized in a machine whose operation could then
be narrated using intentional vocabulary. But the digital computer
both generalized and circumscribed this project. By writing computer
programs, one could physically realize absolutely any bit of finite
mathematics one wished. The inside of the computer thus became an
imaginative landscape in which programmers could physically realize
an enormous variety of ideas about the nature of thought. Fertile as
this project was, it was also circumscribed precisely by the boundaries
of the computer. The feats of physics and chemistry that supported
the digital abstraction operated inside the computer, and not outside.

In this way, a powerful dynamic of mutual reinforcement took hold
between the technology of computation and a Cartesian view of human
nature, with computational processes inside computers corresponding
to thought processes inside minds. But the founders of computational
psychology, while mostly avowed Cartesians, actually transformed
Descartes's ideas in a complex and original way. They retained the
radical experiential inwardness that Descartes, building on a long
tradition, had painted as the human condition. And they retained the
Cartesian understanding of human bodies and brains as physical objects,
extended in space and subject to physical laws. Their innovation
lay in a subversive reinterpretation of Descartes's ontological dualism
(Gallistel 1980: 6-7). In _The Passions of the Soul_, Descartes
had described the mind as an extensionless *res cogitans* that
simultaneously participated in and transcended physical reality. The
mind, in other words, interacted causally with the body, but was not
itself a causal phenomenon. Sequestered in this nether region with its
problematic relationship to the physical world, the mind's privileged
object of contemplation was mathematics. The "clear and distinct ideas"
that formed the basis of Descartes's epistemology in the _Meditations_
were in the first instance *mathematical* ideas (Rorty 1979: 57-62;
cf. Heidegger 1961 [1927]: 128-134). Of course, generations of
mechanists beginning with Hobbes, and arguably from antiquity, had
described human thought in monistic terms as the workings of machinery
(Haugeland 1985: 23). But these theorists were always constrained
by the primitive ideas about machinery that were available to them.
Descartes's physiology suffered in this way, but not his psychology.
Although they paid little heed to the prescriptive analysis of thought
that Descartes had offered, the founders of computational psychology
nonetheless consciously adopted and reworked the broader framework
of Descartes's theory, starting with a single brilliant stroke. The
mind does not simply contemplate mathematics, they asserted; the mind
is *itself* mathematical, and the mathematics of mind is precisely a
technical specification for the causally explicable operation of the
brain.

This remarkable proposal set off what is justly called a "revolution"
in philosophy and psychology as well as in technology. Technology is in
large measure a cultural phenomenon, and never has it been more plainly
so than in the 1950s. Computational studies in that decade were studies
of faculties of *intelligence* and processes of *thought*, as part
of a kind of cult of cognition whose icons were the rocket scientist,
the symbolism of mathematics, and the computer itself. The images
now strike us as dated and even camp, but we are still affected by the
technical practice and the interpretation of human experience around
which artificial intelligence, or AI, was first organized.

I wish to investigate this confluence of technology and human
experience. The philosophical underside of technology has been deeply
bound up with larger cultural movements, yet technical practitioners
have generally understood themselves as responding to discrete
instrumental "problems" and producing technologies that have "effects"
upon the world. In this book I would like to contribute to a *critical
technical practice* in which rigorous reflection upon technical ideas
and practices becomes an integral part of day-to-day technical work
itself.

I will proceed through a study in the intellectual history of research
in AI. The point is not to exhaust the territory but to focus on
certain chapters of AI's history that help illuminate the internal
logic of its development as a technical practice. Although it will be
necessary to examine a broad range of ideas about thought, perception,
knowledge, and their physical realization in digital circuitry,
I will focus centrally on computational theories of action. This
choice is strategic, inasmuch as action has been a structurally
marginal and problematic topic in AI; the recurring difficulties in
this computational research on action, carefully interpreted, motivate
critiques that strike to the heart of the field as it has historically
been constituted. I aim to reorient research in AI away from
*cognition* -- abstract processes in the head -- and toward *activity*
-- concrete undertakings in the world. This is not a different subject,
but a different approach to the same subject: different metaphors,
methods, technologies, prototypes, and criteria of evaluation.
Effecting such a reorientation will require technical innovation, but
it will also require an awareness of the structure of ideas in AI and
how these ideas are bound up with the language, the methodology, and
the value systems of the field.

[...]

My project is both critical and constructive. By painting computational
ideas in a larger philosophical context, I wish to ease critical
dialogue between technology and the humanities and social sciences
(Bolter 1984; Guzeldere and Franchi 1995). The field of AI could
certainly benefit from a more sophisticated understanding of itself as
a form of inquiry into human nature. In exchange, it offers a powerful
mode of investigation into the practicalities and consequences of
physical realization.

[...]

2. Planning

Although the AI tradition has placed its principal emphasis on processes
it conceives of as occurring entirely within the mind, there does exist
a more or less conventional computational account of action. The early
formulation of this account that had the most pervasive influence was
George Miller, Eugene Galanter, and Karl Pribram's book, _Plans and
the Structure of Behavior_ (1960). These authors rejected the extreme
behaviorist view that the organized nature of activity results from
isolated responses to isolated stimuli. Instead, they adopted the
opposite extreme view that the organization of human activity results
from the execution of mental structures they called Plans. Plans were
*hierarchical* in the sense that a typical Plan consisted of a series
of smaller sub-Plans, each of which consisted of yet smaller sub-Plans,
and so forth, down to the primitive Plan steps, which one imagines
to correspond to individual muscle movements. (Miller, Galanter, and
Pribram capitalized the word "Plan" to distinguish their special use
of it, especially in regard to the hierarchical nature of Plans, from
vernacular usage. Subsequent authors have not followed this convention.
I will follow it when I mean to refer specifically to Miller, Galanter,
and Pribram's concept.)
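
To make the hierarchical picture concrete, here is a minimal sketch, in
present-day Python rather than anything Miller, Galanter, and Pribram
proposed, of a Plan as a recursive data structure whose "execution" simply
walks down to primitive steps. The names and the example are illustrative
assumptions.

    from dataclasses import dataclass, field
    from typing import List, Union

    @dataclass
    class Step:
        action: str                       # an imagined primitive Plan step

    @dataclass
    class Plan:
        name: str
        parts: List[Union["Plan", Step]] = field(default_factory=list)

    def execute(p):
        """Executing a Plan means executing each of its sub-Plans in turn."""
        if isinstance(p, Step):
            print("do:", p.action)
        else:
            for sub in p.parts:
                execute(sub)

    morning = Plan("get ready", [
        Plan("make coffee", [Step("fill kettle"), Step("boil water"), Step("pour")]),
        Step("leave house"),
    ])
    execute(morning)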

[...]

Miller, Galanter, and Pribram applied the term "Plan" as broadly
as they could. In considering various aspects of everyday life, they
focused everywhere on elements of intentionality, regularity, and
goal-directedness and interpreted each one as the manifestation of a
Plan. As with the servos, radars, and codes that first inspired Miller
and his contemporaries in the 1940s, the concept of a Plan combined the
rhetoric of structured behavior with the formalisms of programming and
proposed that the latter serve as models of biological systems. A great
difficulty in evaluating this proposal is the imprecise way in which
Miller, Galanter, and Pribram used words like "Plan". They demonstrated
that one can find aspects of apparent planfulness in absolutely any
phenomenon of human life. But in order to carry out this policy
of systematic assimilation, important aspects of activity had to be
consigned to peripheral vision. These marginalized aspects of activity
were exactly those which the language of Plans and their execution tends
to deemphasize.

These ideas had an enormous influence on AI, but with some differences
of emphasis. Although they occasionally employ the term "planning",
Miller, Galanter, and Pribram provide no detailed theory of the
construction of new Plans. The AI tradition, by contrast, has conducted
extensive research on plan-construction but has generally assumed that
execution is little more than a simple matter of running a computer
program. What has remained is a definite view of human activity that
has continued, whether implicitly or explicitly, to suffuse the rhetoric
and technology of computational theories of action. In place of
this view, I would like to substitute another, one that follows the
anthropologically motivated theoretical orientations of Suchman (1987)
and Lave (1988) in emphasizing the situated nature of human action. Let
me contrast the old view and the new point by point:

* Why does activity appear to be organized?

Planning view: If someone's activity has a certain organization, that is
because the person has constructed and executed a representation of that
activity, namely a plan.

Alternative: Everyday life has an orderliness, a coherence, and patterns
of change that are emergent attributes of people's interactions with
their worlds. Forms of activity might be influenced by representations
but are by no means mechanically determined by them.

* How do people engage in activity?

Planning view: Activity is fundamentally planned; contingency is a
marginal phenomenon. People conduct their activity by constructing and
executing plans.

Alternative: Activity is fundamentally improvised; contingency is
the central phenomenon. People conduct their activity by continually
redeciding what to do.

* How does the world influence activity?

Planning view: The world is fundamentally hostile, in the sense that
rational action requires extensive, even exhaustive, attempts to
anticipate difficulties. Life is difficult and complicated, a series of
problems to be solved.

Alternative: The world is fundamentally benign, in the sense that our
cultural environment and personal experiences provide sufficient support
for our cognition that, as long as we keep our eyes open, we need
not take account of potential difficulties without specific grounds
for concern. Life is almost wholly routine, a fabric of familiar
activities.

The alternative view of human activity that I have sketched here
contains a seeming tension: how can activity be both improvised and
routine? The answer is that the routine of everyday life is not a
matter of performing precisely the same actions every day, as if one
were a clockwork device executing a plan. Instead, the routine of
everyday life is an emergent phenomenon of moment-to-moment interactions
that work out in much the same way from day to day because of the
relative stability of our relationships with our environments.
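
The contrast can be put in rough computational terms. The sketch below is
my own illustration, with a toy World class and goal standing in for any
real environment and concern; the first loop draws its organization from a
representation constructed in advance, while the second redecides what to
do on every cycle from the situation it currently perceives.

    class World:
        """Toy stand-in for an environment (an illustrative assumption)."""
        def __init__(self):
            self.position = 0
        def perceive(self):
            return self.position
        def apply(self, action):
            self.position += action

    def run_by_plan(world, plan):
        for action in plan:                  # organization comes from the plan
            world.apply(action)

    def run_by_improvisation(world, goal, steps=10):
        for _ in range(steps):
            situation = world.perceive()     # look at how things stand now
            action = 1 if situation < goal else 0
            world.apply(action)              # continually redecide what to do

    w1, w2 = World(), World()
    run_by_plan(w1, plan=[1] * 5)
    run_by_improvisation(w2, goal=5)
    print(w1.position, w2.position)          # both arrive at 5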

[...]

3. Why build things?

Every discipline has its distinctive ways of knowing, which it
identifies with the activities it regards as its own: anthropologists do
fieldwork, architects design buildings, monks meditate, and carpenters
make things out of wood. Each discipline wears its defining activity
as a badge of pride in a craftworker's embodied competence. It will
be said, "You can read books all your life, but you don't really know
about it until you do it". Disciplinary boundaries are often defined
in such ways -- you are not an anthropologist unless you have spent
a couple of years in the field; you are not an architect unless you have
built a building; and so forth -- and neighboring disciplines may
be treated with condescension or contempt for their inferior methods.
Each discipline's practitioners carry on what Schon (1983: 78) would
call "reflective conversations" with their customary materials, and
all of their professional interactions with one another presuppose
this shared background of sustained practical engagement with a more or
less standard set of tools, sites, and hassles.

[...]

The discipline in question here is computational modeling, and
specifically AI. Although I will criticize certain computational
ideas and practices at great length, my enterprise is computational
nonetheless. AI's distinctive activity is building things, specifically
computers and computer programs. Building things, like fieldwork
and meditation and design, is a way of knowing that cannot be reduced
to the reading and writing of books (Chapman 1991: 216-217). To the
contrary, it is an enterprise grounded in a routine daily practice.
Sitting in the lab and working on gadgets or circuits or programs, one
runs into an inescapable fact: some things can be built and other things
cannot. Likewise, some techniques scale up to large tasks and others
do not; and some devices operate robustly as environmental conditions
fluctuate, whereas others break down. The AI community learns things
by cultivating what Keller (1983) calls a "feeling for the organism",
gradually making sense of the resulting patterns of what works and
what does not. Edwards (1996: 250, italics in the original) rightly
emphasizes that much of AI's practitioners' technical framework "emerged
not abstractly but *in their experiences with actual machines*". And
Simon (1969: 20) speaks of computing as an "empirical science" -- a
science of design.

I take an unusual position on the nature of computation and
computational research. For my purposes, *computation* relates to
the analysis and synthesis of especially complicated things. These
analytic and synthetic practices are best understood as nothing less
grand or more specific than an inquiry into physical realization
as such. This fact can be lost beneath ideologies and institutions
that define computation in some other way, whether in terms of Turing
machines, mathematical abstraction, intentionality, symbolic reasoning,
or formal logic. Nonetheless, what truly founds computational work
is the practitioner's evolving sense of what can be built and what
cannot. This sense, at least on good days, is a glimpse of reality
itself. Of course, we finite creatures never encounter this "reality"
except through the mediation of a historically specific ensemble of
institutions, practices, genres, ideologies, tools, career paths,
divisions of labor, understandings of "problems" and "solutions", and so
forth. These mediating systems vary historically through both practical
experience and broader shifts of social climate. Nonetheless, at each
point the technologist is pushing up against the limits of a given
epoch's technology, against the limits of physical reality conceived
and acted upon in a specific way. These limits are entirely real. But
they are not simply a product of reality-in-itself; nor are they simply
internal consequences of the idea-systems on their own, considered in
abstraction from particular attempts to get things to work.

This is the sense in which people engaged in technical work are -- and,
I think, must be -- philosophical realists. The something-or-other
that stands behind each individual encounter with the limits of physical
realization I would like to call *practical reality*. Practical reality
is something beyond any particular model or ontology or theory. A
given model might seem like the final word for years or centuries, but
ultimately the limit-pushing of technical work will reveal its margins.
The resulting period of stumbling and improvisation will make the
previously taken-for-granted model seem contingent: good enough perhaps
for some purposes, but no longer regarded (if it ever has been) as
a transparent description of reality. Much of the sophistication of
technical work in mechanical engineering and semiconductor physics, for
example, lies in the astute choice of models for each purpose.

Technical communities negotiate ceaselessly with the practical reality
of their work, but when their conceptions of that reality are mistaken,
these negotiations do not necessarily suffice to set them straight.
Computational research, for its part, has invested an enormous amount
of effort in the development of a single model of computation: the dual
scheme of abstraction and implementation that I will describe in Chapter
4. This framework has motivated a multitude of technical proposals,
but it has also given rise to recurring patterns of technical trouble.
Although computationalists do possess a certain degree of critical
insight into the patterns of trouble that arise in their work, they
also take a great deal for granted. Beneath the everyday practices
of computational work and the everyday forms of reasoning by which
computationalists reflect on their work, a vast array of tacit
commitments lies unexamined. Each of these commitments has its margins,
and the field's continual inadvertent encounters with these margins have
accumulated, each compounding the others, to produce a dull sense of
existential chaos. Nobody complains about this, for the simple reason
that nobody has words to identify it. As successive manifestations of
the difficulty have been misinterpreted and acted upon, the process has
become increasingly difficult to disentangle or reverse.

In trying to set things right, a good place to start is with AI
researchers' understanding of their own distinctive activity: building
computer systems. AI people, by and large, insist that nothing is
understood until it has been made into a working computer system.
One reason to examine this insistence critically is its association
with research values that disrupt interdisciplinary communication.
This disruption goes in two directions -- from the inside out (i.e.,
from AI to the noncomputational world) and from the outside in (i.e.,
the other way round) -- and it is worth considering these two directions
separately.

Research based on computer modeling of human life often strikes people
from other fields as absurd. AI studies regularly oversimplify things,
make radically counterfactual assumptions, and focus excessive attention
on easy cases. In a sense this is nothing unusual: every field has to
start somewhere, and it is usually easier to see your neighbor's leading
assumptions than your own. But in another sense things really are
different in AI than elsewhere. Computer people only believe what they
can build, and this policy imposes a strong intellectual conservatism
on the field. Intellectual trends might run in all directions at any
speed, but computationalists mistrust anything unless they can nail down
all four corners of it; they would, by and large, rather get it precise
and wrong than vague and right. They often disagree about *how much*
precision is required, and *what kind* of precision, but they require
ideas that can be assimilated to computational demonstrations that
actually get built. This is sometimes called the *work ethic*: it
has to work. To get anything nailed down in enough detail to run on
a computer requires considerable effort; in particular, it requires
that one make all manner of arbitrary commitments on issues that may
be tangential to the current focus of theoretical interest. It is no
wonder, then, that AI work can seem outrageous to people whose training
has instilled different priorities -- for example, conceptual coherence,
ethnographic adequacy, political relevance, mathematical depth, or
experimental support. And indeed it is often totally mysterious to
outsiders what canons of progress and good research *do* govern such a
seemingly disheveled enterprise. The answer is that good computational
research is an evolving conversation with its own practical reality; a
new result gets the pulse of this practical reality by suggesting the
outlines of a computational explanation of some aspect of human life.
The computationalist's sense of bumping up against reality itself -- of
being *compelled* to some unexpected outcome by the facts of physical
realizability as they manifest themselves in the lab late at night --
is deeply impressive to those who have gotten hold of it. Other details
-- conceptual, empirical, political, and so forth -- can wait. That, at
least, is how it feels.

[...]

To understand what is implied in a claim that a given computer model
"works", one must distinguish two senses of "working". The first,
narrow sense is "conforms to spec" -- that is, it works if its behavior
conforms to a pregiven formal-mathematical specification. Since
everything is defined mathematically, it does not matter what words
we use to describe the system; we could use words like "plan", "learn",
and "understand"; or we could use words like "foo", "bar", and "baz".
In fact, programmers frequently employ nonsense terms like these when
testing or demonstrating the logical behavior of a procedure. Local
programming cultures will frequently invent their own sets of commonly
used nonsense terms; where I went to school, the customary nonsense
terms also included "blort", "quux", and "eep". But nonsense terms are
not adequate for the second, broad sense of "working", which depends
on specific words of natural language. As I mentioned at the very
beginning, an AI system is only truly regarded as "working" when its
operation can be narrated in intentional vocabulary, using words whose
meanings go beyond the mathematical structures. When an AI system
"works" in this broader sense, it is clearly a discursive construction,
not just a mathematical fact, and the discursive construction only
succeeds if the community assents. Critics of the field have frequently
complained that AI people water down the meanings of the vernacular
terms they employ, and they have sought to recover the original force
of those terms, for example through the methods of ordinary language
philosophy (Button, Coulter, Lee, and Sharrock 1995). But these critics
have had little influence on the AI community's own internal standards
of semantic probity. The community is certainly aware of the issue;
McDermott (1981: 144), for example, forcefully warns against "wishful
mnemonics" that lead to inflated claims. But these warnings have had
little practical effect, and the reward systems of the field still
depend solely on the production of technical schemata -- mathematically
specified mechanisms and conventions for narrating their operation in
natural language. The point, in any case, is that the practical reality
with which AI people struggle in their work is not just "the world",
considered as something objective and external to the research. It
is much more complicated than this, a hybrid of physical reality and
discursive construction. The trajectory of AI research can be shaped by
the limitations of the physical world -- the speed of light, the three
dimensions of space, cosmic rays that disrupt small memory chips --
and it can also be shaped by the limitations of the discursive world --
the available stock of vocabulary, metaphors, and narrative conventions.
Technical tradition consists largely of intuitions, slogans, and lore
about these hybrids, which AI people call "techniques", "methods", and
"approaches"; and technical progress consists largely in the growth
and transformation of this body of esoteric tradition. This is the
sense in which computers are "language machines" (e.g., Edwards 1996:
28). Critical reflection on computer work is reflection upon both its
material and semiotic dimensions, both synchronically and historically.

More specifically, the object of critical reflection is not computer
programs as such but rather the *process* of technical work. Industrial
software engineering is governed by rigid scripts that are dictated more
by bureaucratic control imperatives than by the spirit of intellectual
inquiry (Kraft 1977), but research programming is very much an
improvisation -- a reflective conversation with the materials of
computational practice. As it expands its collective understanding
of this process, AI will become aware of itself as an intellectual
enterprise whose concerns are continuous with those of numerous other
disciplines. We are a long way from that goal, but any journey begins
with wanting to go somewhere, and above all it is that desire itself
that I hope to cultivate here.

To sum up, programming is a distinctive and valuable way of knowing.
Doing it well requires both attunement to practical reality and
acuity of critical reflection. Each of these criteria provides an
indispensable guide and reinforcement to the other. Research always
starts somewhere: within the whole background of concepts, methods,
and values one learned in school. If our existing procedures are
inadequate, practical reality will refuse to comply with our attempts
to build things. And when technical work stalls, practical reality
is trying to tell us something. Listening to it requires that we
understand our technical exercises in the spirit of reductio ad
absurdum, as the deconstruction of an inevitably inadequate system
of ideas. Technique, in this sense, always contains an element of
hubris. This is not shameful; it is simply the human condition.
As the successive chapters of this book lay out some technical exercises
of my own, the attentive reader will be able to draw up an extensive
intellectual indictment of them, consisting of all the bogus assumptions
that were required to put forth *some* proposal for evaluation. But
the point of these technical exercises does not lie in their detailed
empirical adequacy, or in their practical applicability; they do not
provide canned techniques to take down from a shelf and apply in other
cases. Instead, each exercise should be understood in the past tense
as a case study, an attempt in good faith to evolve technical practice
toward new ways of exploring human life. What matters, again, is the
process. I hope simply to illustrate a *kind* of research, a way of
learning through critical reflection on computational modeling projects.
Others are welcome to form their own interpretations, provided that the
practice of interpretation is taken seriously as a crucial component of
the discipline itself.

4. How computation explains

[...]

5. Critical orientation

Several previous authors have cast a critical eye on AI research.
Weizenbaum (1976), for example, draws on the critique of technology in
the Frankfurt School. Focusing on the culture of computer programmers
and their use of machine metaphors for human thought, he argues that
AI promotes an instrumental view of human beings as components in a
rationalized society. In doing so, he largely accepts as practicable
the construction of rationality found in AI and other engineering
fields, even though he rejects it on ethical grounds.

Other authors have argued that AI as traditionally conceived is not
just wrong but impossible, on the grounds that its technical methods
presuppose mistaken philosophies. The first and most prominent of these
critics was Dreyfus (1972), who pointed out that symbolic methods in AI
are all based on the construction of rules that gloss English words and
sentences as formally defined algorithms or data structures. Although
these rules seem perfectly plausible when presented to audiences or
displayed on computer screens, Dreyfus argued that this plausibility was
misleading. Since philosophers such as Heidegger and Wittgenstein had
shown that the use of linguistic rules always presupposes an embodied
agent with a tacit background of understanding, attempts to program
a computer with formal versions of the rules would necessarily fail.
Unable to draw on tacit understandings to determine whether and how a
given rule applied to a given situation, the computer would be forced
into a regressive cycle of rules-about-how-to-apply-rules (Collins
1990). Later, Winograd and Flores (1986) extended this argument by
describing the numerous ways in which language use is embedded in a
larger way of life, including an individual's ceaseless construction of
self and relationship, that cannot itself be framed in linguistic terms
except on pain of a similar regress.

The AI community has, by and large, found these arguments
incomprehensible. One difficulty has been AI practitioners' habit,
instilled as part of a technical training, of attempting to parse all
descriptions of human experience as technical proposals -- that is, as
specifications of computing machinery. Given the currently available
schemata of computational design, this method of interpretation will
inevitably make the theories of Dreyfus and of Winograd and Flores sound
naive or impossible, like deliberate obscurantism, or even, in many
cases, like mystical rejections of the realizability of human thought in
the physical, causal world.

Another difficulty has been the hazardous procedure, shared by
practitioners and critics alike, of "reading" computer programs and
their accompanying technical descriptions as if they encoded a framework
of philosophical stances. Of course, technical *ideas* and *discourses*
do encode philosophical stances, and these stances generally *are*
reflected in the programs that result. But as I have already
observed, the programs themselves -- particular programs written on
particular occasions -- inevitably also encode an enormous range of
simplifications, stopgap measures, and practical expedients to which
nobody is necessarily committed. As a result, many members of the
AI community do not believe that they have actually embraced the
philosophical stances that their critics have found wanting. In fact,
AI's engineering mindset tends to encourage a pragmatic attitude toward
philosophical stances: they are true if they are useful, they are useful
if they help to build things that work, and they are never ends in
themselves. If a particular stance toward rules, for example, really
does not work, it can be abandoned. Instead, the fundamental (if
often tacit) commitment of the field is to an inquiry into physical
realization through reflective conversations with the materials of
computer work. This is not always clear from the rhetoric of the
field's members, but it is the only way I know to make sense of them.

Dreyfus as well as Winograd and Flores have conducted their critiques of
AI from a standpoint outside of the field. Dreyfus is a philosopher by
background, though in recent work with Stuart Dreyfus he has increased
his constructive engagement with the field by promoting connectionism
as a philosophically less objectionable alternative to the symbolic
rule-making of classical AI (Dreyfus and Dreyfus 1988). Winograd began
his career as a prominent contributor to AI research on natural language
understanding and knowledge representation (1972), but his critical
writing with Flores marked his departure from AI in favor of research
on computer systems that support cooperative work among people (Winograd
1995). Dreyfus and Winograd both define themselves against AI as
such, or the whole realm of symbolic AI, and they advocate a wholesale
move to a different theory. Each of them effectively posits the
field as a static entity, doomed to futility by the consequences of an
impracticable philosophy.

Another approach, which I adopt in this book, takes its point of
departure from the tacit pragmatism of engineering. I regard AI as
a potentially valuable enterprise, but I am equally aware that right
now it is also a misguided one. Its difficulties run deep:
we could sink a probe through the practices of technology, past the
imagery of Cartesianism, and into the origins of Western culture without
hitting anything like a suitable foundation. And yet it is impossible
simply to start over. The troubles run deeper than anyone can currently
articulate, and until these troubles are diagnosed, any new initiative
will inevitably reproduce them in new and obscure forms. This is why we
need a critical technical practice.

The word "critical" here does not call for pessimism and destruction
but rather for an expanded understanding of the conditions and goals
of technical work. A critical technical practice would not model itself
on what Kuhn (1962) called "normal science", much less on conventional
engineering. Instead of seeking foundations it would embrace the
impossibility of foundations, guiding itself by a continually unfolding
awareness of its own workings as a historically specific practice. It
would make further inquiry into the practice of AI an integral part of
the practice itself. It would accept that this reflexive inquiry places
all of its concepts and methods at risk. And it would regard this
risk positively, not as a threat to rationality but as the promise of a
better way of doing things.

One result of this work will be a renewed appreciation of the extent
to which computational ideas are part of the history of ideas. The
historicity of computational ideas is often obscured, unfortunately,
by the notion that technical work stands or falls in practice and not
in principle. Many times I have heard technical people reject the
applicability of philosophical analysis to their activities, arguing
that practical demonstration forms a necessary and sufficient criterion
of success for their work. Technical ideas are held to be perfectly
autonomous, defined in self-sufficient formal terms and bearing no
constitutive relationship to any intellectual context or tradition.
The Cartesian lineage of AI ideas, for example, is held to be interesting
but incidental, and critiques of Cartesianism are held to have no
purchase on the technical ideas that descend from it. This view,
in my opinion, is mistaken and, moreover, forms part of the phenomenon
needing explanation. Technical practitioners certainly put their
ideas to the test, but their understandings of the testing process
have placed important limits on their ability to comprehend or learn
from it. Advances in the critical self-awareness of technical practice
are intellectual contributions in their own right, and they are also
necessary conditions for the progress of technical work itself.

Limitations in a certain historical form of technical practice do not,
however, result from any failings in the people themselves. Such is
the prestige of technical work in our culture that the AI community
has attracted a great many intelligent people. But they, like you and
me, are the products of places and times. The main units of analysis in
my account of technical practice are discourses and practices, not the
qualities of individual engineers and scientists. A given individual
can see only so far in a fog. Periodic moments of clarity are but the
consolidation of changes that have been gathering in the works; their
full-blown emergence in the composition of great books is a convenient
outcome of an orderly process. Equipped with some understanding of
the mechanics of this process, critical inquiry can excavate the ground
beneath contemporary methods of research, hoping thereby to awaken from
the sleep of history.

In short, the negative project of diagnosis and criticism ought to
be part and parcel of the positive project: developing an alternative
conception of computation and an alternative practice of technology.
A critical technical practice rethinks its own premises, revalues its
own methods, and reconsiders its own concepts as a routine part of its
daily work. It is concerned not with destruction but with reinvention.
Its critical tools must be refined and focused: not hammers but
scalpels, not a rubbishing of someone else but a hermeneutics and a
dialectics of ourselves.

6. Outline

Chapter 2 states the book's theses: the reflexive thesis (which concerns
the role of metaphors in computer modeling), the substantive thesis
(which proposes replacing one set of metaphors with another), and the
technical thesis (which describes the basis in technical experience
for proposing such a shift). It then discusses the reflexive thesis at
length, developing a vocabulary for analyzing the metaphors in technical
research. The point is not simply to discover the right set of
metaphors but to encourage a critical awareness of the role of metaphors
in research. The chapter concludes with a sketch of reflexive issues
that must await further work.

Chapter 3 concerns the substantive thesis. It describes the metaphor
system of mentalism, which portrays the mind as an abstract territory
set apart from the "outside world". Put into practice in day-to-day
technical work, mentalism participates in characteristic patterns of
success and failure, progress and frustration. An alternative is to
ground AI in interactionist metaphors of involvement, participation, and
reciprocal influence. I introduce some vocabulary and methodological
ideas for doing interactionist AI research.

Chapter 4 analyzes the mentalist foundations of computing, starting
with the tension between abstraction and implementation in conventional
computer science. The technical notion of a variable provides an
extended case study in this tension that will turn up in subsequent
chapters. The tension between abstraction and implementation is also
evident in the history of cognitive science, and its outlines provide
some motivation for interactionist alternatives.

Chapter 5 continues the analysis of conventional computer science with
a critical introduction to the workings of digital logic. Computers
these days are made of digital logic, and throwing out digital logic
altogether would leave little to build models with. Instead, this
chapter prepares the way for a critical engagement with digital logic
by describing the peculiar ideas about time that have accompanied it.

Chapter 6 shifts into a more technical voice, developing a set of fairly
conventional ideas about the relationship between digital logic and
human reasoning. It is a costly and difficult matter to think anything
new, and so "dependencies" provide a means of recording, storing, and
automatically recapitulating common lines of reasoning. In addition
to presenting dependencies as a technical proposal, this chapter also
briefly recounts the tradition of ideas about habit and learning from
which they arise.
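
As a rough illustration of the idea (my own sketch; the machinery
developed in Chapter 6 differs), a dependency records which premises a
conclusion rested on, so that the conclusion can be recapitulated
automatically whenever those premises hold and withdrawn when they do not.

    class Dependencies:
        def __init__(self):
            self.records = {}                 # conclusion -> set of premises

        def record(self, conclusion, premises):
            self.records[conclusion] = set(premises)

        def still_holds(self, conclusion, current_beliefs):
            """Recapitulate a recorded line of reasoning instead of rederiving it."""
            premises = self.records.get(conclusion)
            return premises is not None and premises <= set(current_beliefs)

    deps = Dependencies()
    deps.record("kettle is hot", {"kettle on stove", "stove lit"})
    print(deps.still_holds("kettle is hot", {"kettle on stove", "stove lit"}))  # True
    print(deps.still_holds("kettle is hot", {"kettle on stove"}))               # False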

Chapter 7 introduces a simple rule-based programming language called
Life that aids in the construction of artificial agents whose reasoning
can be accelerated through dependency maintenance. Execution of Life
programs depends on some formal properties of conventional AI rule
languages that I define just well enough to permit an expert to
reconstruct the details. Some cartoon programming examples demonstrate
the Life language in use.

Chapter 8 presents a detailed analysis of the early planning literature,
from Lashley's (1951) "serial order" paper to Newell and Simon's (1963)
GPS program to Miller, Galanter, and Pribram's (1960) theory to the
STRIPS program of Fikes, Hart, and Nilsson (1972). Through this
history, a complicated pattern of difficulties develops concerning the
relationship between the construction and execution of plans. Viewed
in retrospect, this pattern has pushed AI research toward a different
proposal, according to which activity arises through improvisation
rather than the execution of plans.

Chapter 9 develops this proposal in more detail by introducing the
notion of a running argument, through which an agent improvises by
continually redeciding what to do. This scheme will not work in a
chaotic world in which novel decisions must be made constantly, but it
might work in a world in which more routine patterns of activity are
possible. A set of Life rules is described through which an agent might
conduct arguments about what to do. The chapter concludes by describing
an architecture for such an agent, called RA.

Chapter 10 demonstrates RA in action on a series of simple tasks drawn
from AI's conventional "blocks world". The chapter detects a series
of difficulties with the program and, in the spirit of reductio ad
absurdum, traces these back to common assumptions and practices. In
particular, difficulties arise because of the way that the system's
ideas about the blocks are connected to the blocks themselves.

Chapter 11 takes heed of this conclusion by reexamining the mentalist
understanding of representation as a model of the world. The
shortcomings of this view emerge through an analysis of indexical
terms like "here" and "now", but they also emerge through the technical
difficulty of maintaining and reasoning with such a model. The path to
interactionist alternatives begins with the more fundamental phenomenon
of intentionality: the "aboutness" of thoughts and actions. Some
phenomenologists have proposed understanding intentionality in terms of
customary practices for getting along in the world.

Chapter 12 attempts to convert this idea into a technical proposal.
The basic idea is that an agent relates to things through time-extended
patterns of causal relationship with them -- that is, through the
roles that things play in its activities. The concept of deictic
representation makes this proposal concrete.

Chapter 13 describes a computer program called Pengi that illustrates
some of these ideas. Pengi plays a video game calling for flexible
actions that must continually be rethought. As with RA, reflection on
the strengths and weaknesses of this program yields lessons that may
be valuable for future theorizing and model-building. Some of these
lessons concern the tenacity of mentalism in the face of attempts to
replace it; others concern the role of attention in the organization of
improvised action.

Chapter 14 summarizes a variety of other research projects whose
approaches converge with my own. It also offers some reflections on the
reflexive thesis concerning the role of metaphor in technical modeling.

end