Kathie and Edouard both raise some important points about the limits of
technological utopianism. Many people who see promise in information
technologies, including for education, also believe we are a bit overdue
for a thoughtful critique; clearly there have been excesses of optimism,
esp. among Americans, who are culturally conditioned to optimism as well as
optimization... :)
Yes, I would agree that the engine room scenario is conceived in a way that
lends itself to technological duplication (i.e. replacing a robotized human
teacher with a real robot) ... but it was conceived that way by the US
Navy, not by me. And it is sending a message about what kind of human being
is valued in their organization, at least at the relevant lowly ranks ...
but presumably to some extent mindless learning of this kind does need to
be done sometime, somewhere ... unless we want to argue that it is only the
kinds of modern technologies we have that force us to this kind of
learning, and we'd be better off without them. For the long haul, my view
is that these technologies are transitional, as early industrialism (still
going on in third world countries, indeed) was transitional, and that their
mature successors will be less inhuman, at least less so in the particular
ways that today's are. These issues have been pretty well argued through
since the industrial revolution, I think. Machine-like technologies tend to
force some people into more machine-like molds, and this is bad. But
machine-like technologies in this sense are transitional; I do not feel
particularly dehumanized by the telephone (a bit perhaps) or electric
lights. I am a lot more dehumanized by MS Word, and was terribly
dehumanized by the factory production line I (briefly) worked on once. (Of
course I mean 'dehumanized' in its colloquial and metaphorical sense; a
less humane way of still being quite human, by my cultural values.)
Maybe it's my biased point of view, but I think that the teacher in that
engine room is being more dehumanized than the students, and short of not
having submarines, or navies, or running them like videogames (on the view
that play is humanizing, which may not always be the case) ... I'd rather
liberate the teacher with a robot. Perhaps some day the students will send
their robots to class to learn from our robots, and all the real human
beings can go do something more interesting. Obviously one can critique the
Navy, or capitalist economics, for creating dehumanizing learning needs and
learning situations ... my only point, however, was to compare the
situation with and without robot.
Kathie offers another point: how will students feel if they are marked
wrong by a robot vs. a human teacher? She worries that the robot will carry
more institutional authority, a greater aura of infallibility. I hope that
her concern overestimates people's respect for 'authorities' ... cultures
vary greatly in this way, but even in high-respect cultures, there is
plenty of room for excepting individuals ... I am more inclined to believe
that many students fake their respect, and that all that really concerns
them is our power. Teaching robots could be made small, easily damaged by
humans, and cheaply replaceable. This might give some concrete meaning to
the notion of student 'empowerment'. It might also be necessary to insist
that robots be built to be randomly fallible some set percentage of the
time ... and to 'know' this. Of course the larger issue here is again not
about the robots, it is about the institutions, and their power over us.
Which brings me to the Golem, the dark double of the cute or useful Robot.
We build institutions to empower us (through collective action), but they
can easily come to dominate us, and to have (I do not merely
anthropomorphize) a logic, and goals, and agency of their own. It is our
large-scale institutions that we need to fear, not cute little robots. The
Golem is the personification of Faustian hubris in the Jewish tradition:
built as a servant or slave (Jews could not enslave other Jews by their own
law, and could not enslave or have Gentiles as servants by Gentile law),
like the Sorcerer's Apprentice, it runs amok and threatens its 'Master'.
Like all good myths, this one carries multiple meanings, perhaps unlimited
ones. Anything powerful enough to be a good servant (cf. Djinns in the
Islamic mythos) is powerful enough to threaten its Master. "Do not call up
That Which you cannot put down" is the cardinal rule of demonology. Too
many pecking orders (i.e. male dominance hierarchies) are potentially
overturned.
Another aspect of the Golem brings me finally to Edouard's central concern:
limited human Reason seeking to replace a limitless trans-human Spirit, or
at least ambitious to equal it on our own terms. This is the Frankenstein
aspect: make a human body and animate it, playing God, and you might find
yourself making an inhuman soul, or a soulless human, contrary to God's
wisdom ... our lack of that Wisdom is the best reason not to do these
foolish things ... but then God did make us foolish in just this way, that
we are always seeking to reach further. We are supposed to reach for the
transcendent, but more often we reach for something a bit less noble.
Babel. the Golem. the State. the University. the Navy.
The particular unwisdom that worries Edouard at this point is that a
limited model of rationality is being used to define the project of the
artificial human, and in the process our artifacts are likely to push us
toward being this lesser sort of human (also Kathie's concern). This is
certainly happening. Cognitive science and AI theory, esp. the dominant
versions of the 70s and 80s, had/have a really stupid view about the nature
of human intelligence that derives rather directly from the very worst
aspects of capitalist dehumanization (from Taylorism to Simony is a small
step if any at all) -- not of course that most of its advocates or
utilizers had much inkling about this provenience. But even if we enlarge
the view of rationality, Edouard is not likely to be too much happier ...
one wise lesson in many religions is that rationality is dangerous unless
it is subordinated to ... and here is where different religious views
differ, all agreeing that it must be subordinated to something deriving
from the Transcendent-Immanent, but in various cases that is
Morality/Ethics, and/or Spirituality, and/or Wisdom, and/or Tradition,
and/or Love, and/or Charity, and/or Justice, and/or Obedience to the Will
of God, and/or Oneness with the Godhead ... etc., but what matters, so far
as modernity is concerned, is that in all cases these Higher Principles
supersede human rationality and cannot be reduced to it.
At this level, altruistic left-wing political beliefs seem to me to
basically carry the same message, even those that de-deify the Higher
Principle (after all Confucianism does this, too, and so does Zen and many
another branch of the Indo-Buddhist line) and identify my 'rationality'
with the specific logic of capitalism ... though I worry that such belief
systems count just a bit too much on the emergence of some post-capitalist
Rationality, or even on the rationality of critique within and against a
capitalist logic, and do not seem to quite realize that ANY human
rationality will tend to run amok, and ANY human rationality will make
itself blind to many dimensions of experience and reality, in dangerous and
unwise ways, unless it must take account of something MORE.
As to what that MORE might be, by its nature it cannot be defined,
delimited ... it is simply always MORE than what is dreamt of in our
philosophies today, or any day. The NEED for that MORE, of course, is
powerfully used as a tool of domination, but doing so reduces IT back to
just another limited rationality, and so actually _increases_ the NEED
again for the _truly_ MORE.
My hunger for Robots is not a hunger for convenient mindless servants, it
is just a form of my desire to speed the transformation of our activities
and their/our logics so that all rationalities are kept off-balance, lagged
in time, leaving a space in which to feel something MORE.
JAY.
---------------------------
JAY L. LEMKE
PROFESSOR OF EDUCATION
CITY UNIVERSITY OF NEW YORK
JLLBC@CUNYVM.CUNY.EDU
<http://academic.brooklyn.cuny.edu/education/jlemke/index.htm>
---------------------------