Many linguists and others have noticed an interesting phenomenon:
if you spend a good part of your workday studying a certain formal
aspect of human life -- say, a certain grammatical form -- then you
will start spontaneously noticing examples of it in your life outside
of your research. Many linguists collect the examples they notice
this way, making themselves nuisances at dinner parties when they
suddenly jump up and point out the unusual phrase construction of
someone's previous utterance. But we had never heard of anybody
actually making this phenomenon into a deliberate strategy of
research. That's what we tried to do.
We were graduate students in artificial intelligence. Our basic
motivation was our belief that AI's ways of talking about people's
lives were wildly at odds with the reality of those lives. But this
was a hard argument to make, since AI regularly proceeds by making up
little stories that sound like plausible things that could happen in
real life while also corresponding conveniently to the capacities of
particular technical schemes. How could we show that these types of
stories misrepresented everyday life (i.e., real, genuine, authentic
everyday life and not the fictional constructions of it in AI papers)?
Could we show that such things *never* happened? That they were
atypical of everyday life in some statistical sense?
The only way to begin, we thought, was to start collecting real
stories of everyday life. But how to select these stories? We shot
several videotapes of people as they made dinner, but these turned out
to be largely uninformative, and we weren't about to invent a coding scheme
to categorize an hour of complicated videotape. This, then, was the
attraction of the spontaneously noticed stories: they were relevant
to the theoretical points and we didn't have to undertake any special
effort to gather them beyond remembering to write them down. It will
no doubt be objected that we couldn't remember the stories accurately
etc. But keep in mind that our baseline was the totally fictional
stories of AI papers; if anything the biases of memory would bring
the real stories back into line with that sort of artificial neatness.
And we were only after heuristic stimulation, not hard data in any
traditional sense.
We began to develop a methodology. The first step is choosing a
formal category that you're interested in, let us say "mistakes".
Now, it turns out that "mistakes" is far too abstract and general
to provoke much noticing. But let's say that we notice a particular
mistake and take the trouble to write it out. For example, last night
I was using an automatic teller machine and twice hit too many zeroes
when entering amounts with the keyboard. It's crucial, it turns
out, to do two things: (1) write out the story from memory in extreme
detail, as much detail as you can remember; and (2) invent a category
that includes this story but is half as abstract -- let us say,
mistakes caused by trying to do something repetitive too fast, or
even mistakes caused by trying to do something repetitive too fast
and doing one too many -- and then write out an explanation of that
category in your notebook. We referred to this second step as
"intermediation", since it involved the invention of a category that
is intermediate in its abstraction between the existing abstract
category and the specific concrete example at hand. It doesn't
matter whether you formulate this category "correctly" -- different
people would no doubt formulate it in different ways. What matters
is the act of formulating it, noticing how it subsumes the example,
and noticing how the more abstract category subsumes it in turn.
Intermediation is a sure-fire way to provoke noticing. The effect is
amazing. What's really amazing is what happens if you make a habit
of it. We spent an hour or two every day writing out episodes that
we had noticed and intermediating from them. The more we did this,
the more episodes we would notice. After a while we learned that we
could deliberately "steer" our noticing in one direction or another,
depending on what theoretical questions we were interested in --
just choose the aspect of the new episode that interests you most and
define an intermediate category appropriately. After a while you'll
accumulate what mathematicians call a "lattice" -- roughly, a structure
defined by a partial order, in this case the order "is a more general
category than". It helps to draw the lattice on a sheet of paper.
You may ask, what earthly use is this? We found it fabulously useful
as a way to establish contact between abstract theories and empirical
reality. It is similar in this regard to ethnography or other kinds
of qualitative description. It is less appealing in that it doesn't
seek a thick theorization of its materials, but on the other hand
it grounds the concepts in one's own subjectivity as spontaneously
noticed and not in the systematically observed behavior of someone
else. I find it very difficult to explain except to say that I found
it deeply compelling and kept doing it, as I say, for a few years.
The main reason I thought to explain all of this on xmca concerned a
particular observation we made using the method. One application of
the method was to explore a particular theory invented by my friend
David Chapman, called "semantic cliches". Semantic cliches are simple
formal structures that seem to recur frequently in the world's ideas.
Mostly they correspond to simple mathematical structures. Take for
example the notion of a total order: a structure consisting of a set
of entities and a relation on them, such that every pair of entities
which are different from one another has a "greater" or a "lesser"
according to the relation. Examples are endless in the folk theories
of the world: temperature, loudness, smartness, hotness, powerfulness,
etc. The point isn't that the *reality* has that structure but that
the *ideas* have that structure, though of course the relationship
between the ideas and reality is probably not arbitrary. In his paper
on semantic cliches -- which, true to the culture of the lab where we
did our graduate work, was only published as an internal lab report
-- David identified a few dozen of these cliches. Another one is
propagation: you have a mathematical graph, and one of the vertices
has a certain property at a certain time, and then this property
spreads out across the arcs of the graph to successively broader sets
of vertices. You can refine the cliches to make them more specific
(once again, in a lattice). So for example, one kind of total order
is a finite totally ordered set, which of course will have a greatest
and a least element.
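Both of these cliches are simple enough to render in a few lines of
code. The sketch below is my own illustration, not anything from
David's report: a finite total order always yields a greatest and a
least element, and propagation spreads a property across the arcs of a
graph to successively broader sets of vertices.

```python
# Two semantic cliches rendered concretely (illustrative only).
from collections import deque

# 1. A finite totally ordered set has a greatest and a least element.
temperatures = [68, 45, 90, 72]  # folk "hotness" as numbers
print(max(temperatures), min(temperatures))  # -> 90 45

# 2. Propagation: a property held by one vertex spreads across the
#    arcs of a graph to successively broader sets of vertices.
def propagate(graph, start):
    """Yield successive frontiers as the property spreads from start."""
    reached = {start}
    frontier = deque([start])
    while frontier:
        yield set(frontier)
        next_frontier = deque()
        for v in frontier:
            for w in graph.get(v, ()):
                if w not in reached:
                    reached.add(w)
                    next_frontier.append(w)
        frontier = next_frontier

graph = {"a": ["b", "c"], "b": ["d"], "c": ["d"], "d": []}
print(list(propagate(graph, "a")))
# -> [{'a'}, {'b', 'c'}, {'d'}]
```

The refinement relation among cliches shows up here too: the finite
total order is just the total-order cliche restricted to a finite set,
one notch down in the lattice.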
We were studying semantic cliches, then, and this caused us to notice
things in the world that were examples of the particular semantic
cliches we were studying. So of course we set about intermediating
the various semantic cliches and the various other concepts associated
with the semantic cliches. Along the way we discovered a great many
examples of semantic cliches, based on episodes we noticed in which
one or another property of them was at stake. And more interestingly,
we started noticing lots of analogies between different parts of life
that we did not formerly think of as analogous. More interestingly
still, we noticed our lives start changing rapidly -- not in deep,
meaningful ways, but in lots and lots of small, simple, logistical
sorts of ways. For a long time we thought that we had discovered a
previously undetected phenomenon: the continual evolution of the most
ordinary routines of daily life. And so we set about intermediating
on the category of routine evolution. This was what my dissertation
was originally going to be about, until I got derailed by the immense
difficulty of using AI's technical concepts to build anything that
has any genuine relationship to people's everyday lives.
In any case, we eventually discovered that the routine evolution
we were noticing had various components -- that is, a variety of
qualitatively different mechanisms of change -- and that the most
productive of these components was being induced by our method of
investigation. That is, the cycle of noticing, writing down stories,
intermediating, noticing again, etc etc was causing our lives to
change. Why? Precisely because the various categories, and most
especially the semantic cliches, were mediating numerous analogies
in our heads. Now, if you've read the cognitive science literature
on analogical reasoning (and particularly if you've read Jean Lave's
critique of it in "Cognition in Practice") then you're aware that
people only really make experience-distant formal analogies when
their attention is somehow brought to the analogy. Their attention
can be directed in several ways: experimenters can point out the
analogy, metaphors or other linguistic means can be used to draw
the analogous situations under a common description, printed forms
or other mediating artifacts can be used to structure the situations
within a common form of activity, and so on. Some of these means
might be consciously aimed at causing people to notice analogies
and others might be fortuitous, or might be part of a culture's more
deeply meaningful set of metaphors and categorizations, or whatever.
In our case, our attention was drawn to the analogies because we were
deliberately using a certain abstract vocabulary to describe the forms
of everyday events, and the common vocabulary we assigned to otherwise
dissimilar situations was causing us to spontaneously notice analogies
between them. These analogies, moreover, were frequently causing us
to notice slightly better ways to do things that we already did in
basically acceptable ways on a routine basis every day.
Let me give you an example. We had an acetylene torch in our kitchen
that was operated by a trigger that generated a piezoelectric spark.
I often used this torch in the dark. Don't ask why. The problem was
that the torch only worked when a certain knob, which turned to one
of perhaps four positions, was turned to the second position. For a
long time I would have to squint at the knob in the darkness to see
if it was in the right position. Eventually, somehow, I came up with
the idea of turning the knob all the way to its counterclockwise limit
and then turning it one notch clockwise, after which I could guarantee
that it would be in the second position. Well, it so happened that a
few days later I was in a car with an automatic transmission, shifting
back and forth between drive and reverse repeatedly to get out of a
tight parking space while pedestrians kept jumping between cars trying
to get me to break their legs. Whereupon *poof* I noticed the analogy
with the torch and started whacking the shifter into park and then one
notch right into reverse rather than looking at the shifter each time
I shifted from drive into reverse. (Excuse me if I've misremembered
the relative positions of drive and reverse on an automatic shifter;
it has been a while since I've used one.)
I'm quite sure that the semantic cliche of a finite total order
mediated this analogy. Why am I sure of this? Because I was quite
conscious of it at the time. Why did I *notice* and think to write
down the fact that the analogy was mediated by a semantic cliche?
Yes, that's right, because I had been intermediating on the phenomena
of analogical transfer through intermediated categories. By that time
it had grown quite common for noticings to trigger other noticings
three or four deep: I would notice an instance of some intermediated
category in the midst of taking out the trash, whereupon I would
notice that that noticing was itself an instance of some completely
different intermediated category, whereupon I would notice that *that*
noticing was itself an instance of yet a third completely different
intermediated category. It would take quite a while to write all
of this down on paper. If I wrote it all down right away, or within
an hour or two, I could be quite confident of having remembered it
all pretty accurately, since the intermediated categories provided a
precise vocabulary for articulating what had just happened and then
writing it all down. I also intermediated on the process of writing
the stuff down. I'll never forget one day toward the end of this
whole experiment, when I was writing out a particularly complex chain
of these noticings, and found that something I had just thought while
writing had triggered a sequence of noticings that chained so fast
that I could not remember it all. It was a bizarre, quasi-mystical
experience. It persuaded me that it was time to stop this absurd
exercise and start writing my dissertation. It was 1984.
What did I gain from this exercise? It would be very hard to tell
you, much less convince you. For my own purposes, though, I am quite
convinced that a couple of years of regular intermediation literally
made me considerably smarter. I think the part of it that did the most
to make me smarter was intermediating on the formation of analogies.
As I wrote out my thoughts on a variety of topics in my notebook, I
would often notice analogies between ideas that I had never connected
together before, and even if the analogies seemed pointless I always
wrote them out and followed through all of the suggestions that each
analogous thought would make for the line of thinking represented
by the other. Many of my best ideas in graduate school arose this
way, and it is commonly held that many important discoveries (ones
far more important than mine) also arose through the noticing of
analogies. By intermediating on the process of noticing and working
through analogies, I found that I noticed lots more analogies than I
had before, and that I therefore had many more ideas than I had had
before. They were not always good ideas, but that's alright, since
you only need one really good idea to contribute something adequate
to the world before you die.
Eventually I stopped intermediating and stopped noticing things in
that spontaneous way -- at least I stopped noticing things any more
than anybody else does. But I do believe that my experience of
intermediation left me thinking much more clearly than I did after
my rigorless schooling and the murky commercial culture upon which I
wasted so much of my childhood. I got some idea of what concreteness
means, and abstractness, and the difference between an idea chattering
in my head and an idea that I can see in my own experience. I learned
to be open to spontaneous noticing, and I learned to have respect for
the immense complexity and wisdom and order of my own everyday life
beyond my conscious awareness of it. And above all I learned to get
intellectual concepts -- those of AI, and by extension all others --
in perspective. We don't really know that much, but we know a few
good things, and through discipline and humility we can open ourselves
to learning more from the simplest things around us.
Phil Agre