>What with impeachment, war, ceaseless convocation requirements,
>and lack of sleep from wakeful children, I'm in a bad mood today,
>and so will reply in a contrary manner.
Well, I also share some of your afflictions - particularly the
international ones - but I'll see your contrary manner and raise the
stakes in the bet that is actually lurking behind this enterprise.
>It is inevitable that the Turing
>test (essentially a test as to whether a programmed computer can get
>us to identify with it as a cognizing agent) will be passed.
Well, to be factually accurate, the Turing test has already been met - for
a change, I know the details of this subject! :-) The MIT experiment in
which a not very "intelligent" A.I. program (Weizenbaum's ELIZA) was passed
off as a psychotherapist - very successfully - shows that the Turing test
isn't the best metric for intelligence (human or otherwise).
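For anyone who hasn't looked under the hood of such a program, the whole
trick fits in a couple dozen lines. Here is a toy ELIZA-style sketch in
Python - my own invented rules and phrasings, not the MIT original - that
does nothing but match keywords and hand back canned phrases:

import random
import re

# Toy ELIZA-style rules: a keyword pattern plus canned "therapist" replies.
# (A purely illustrative sketch -- not the actual MIT program.)
RULES = [
    (re.compile(r"\bI am (.+)", re.I),
     ["Why do you say you are {0}?", "How long have you been {0}?"]),
    (re.compile(r"\bI need (.+)", re.I),
     ["Why do you need {0}?", "Would getting {0} really help you?"]),
    (re.compile(r"\b(mother|father|family)\b", re.I),
     ["Tell me more about your {0}.", "How do you feel about your {0}?"]),
]
DEFAULTS = ["Please go on.", "How does that make you feel?", "I see."]

def reply(sentence):
    """Answer by shallow keyword matching; no understanding is involved."""
    for pattern, responses in RULES:
        match = pattern.search(sentence)
        if match:
            return random.choice(responses).format(*match.groups())
    return random.choice(DEFAULTS)

print(reply("I am unhappy about my job"))
# Possible output: "How long have you been unhappy about my job?"

The real program also did simple pronoun swapping ("my" becomes "your")
and carried a longer script of rules, but the principle is the same:
pattern matching plus canned responses, with the human supplying all of
the meaning.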
However, I think that experiment points out something far more troubling
about A.I. systems: it isn't that they adapt to us; we adapt to them.
The fellow who taught me LISP and A.I., Charles Woodson, made two
troubling predictions that I think may come to pass. First, he suggested
that natural language processing need not be anywhere near 100% accurate.
Going back to our discussions about universal languages, he predicted
that a new language would emerge that computers could understand and that
would serve the needs of humans adequately. We would then all adopt the
new language for the convenience of being able to talk to our computers
with 99.99% accuracy.
His second prediction was far more chilling. He proposed building robot
"playmates." He argued that parents had become too important a commodity
to allow them to devote large amounts of time to their children.
Instead, children would be given devices with limited intelligence - and
given no other choice, children would simply make due with that -
certainly better off than the latchkey children of today. The new toys
by Microsoft and others are a frightening step in that direction.
I think this group is in the best position to point out something
seriously overlooked by A.I. researchers: "Humans readily adapt to their
environment - social, physical, and emotional." While A.I. researchers try
to make their systems more humanlike, as Hubert Dreyfus argued, we are
busy trying to make people more A.I.-like.
Sadly, this brings our discussion all the way back to economics and what
is valued in human society. The deciding factor in whether a robot teacher
is used will probably not be pedagogy, but cost. As a society, we do not
use our brightest minds to teach; we don't pay teachers enough for that.
Even university professors are not paid for their teaching skills, nor do
they limit themselves to their teaching salary if they can supplement it.
If robots can teach anywhere near the level of human teachers, they
will replace humans - and society will adjust to accept the resulting
product. It is just as we buy "off the rack" clothes instead of
custom tailoring, eat manufactured food instead of hunting wild game, and
accept photographs instead of painted portraits.
I think we find ourselves at a very troubling crossroads of human
civilization. We have already allowed ourselves to abandon many things
that make us human. Once poets wrote elaborate sonnets to woo their
love. Now we buy greeting cards in free verse and pay no attention to
the difference. Is it truly human "progress" that the sonnet has become
all but an extinct art form? Is it progress that we now learn mainly
from books where once we learned first-hand from people with that
expertise? One could go all the way back to Plato and ask if we haven't
lost something essentially human when we lost the ability to memorize and
recount long narratives.
Hubert Dreyfus strongly warns us that if we want A.I.-like people, we will
get our wish. Graham Hancock raises the troubling question of whether a
Lost Civilization may have known something that we do not. At the very
least, a careful inspection of what might be meant by "progress" is sorely
overdue. We are already in a world largely of our own making, and it is
that very world that is putting David, myself, and a cast of millions in
a bad mood. Have we already forsaken the only means to satisfaction and
human happiness? I'm confident of one thing: no A.I. researcher has
proven we haven't.
Peace, Edouard
============================================
Edouard Lagache, PhD
Webmaster - Lecturer
Information Technologies
U.C. San Diego, Division of Extended Studies
Voice: (619) 622-5758, FAX: (619) 622-5742
email: elagache who-is-at weber.ucsd.edu
============================================
BETWEEN THE IVORY TOWER AND THE STREETS
(Dedicated to everyone in the Social and
Cultural Studies Proseminar ~ 1994/5)
(Sonnet 90)
I set upon a voyage of discovery
with my eyes wide open and innocent.
Alas, who could have known that this journey
would have revealed a world of discontent.
How can those born with dreams of justice true
cope with a life where none of that will do?
To try to care and not let blindness be.
To see what most don't find in front of them.
Alas, to perceive is not to be free.
We too are bought and sold for the cool gem.
No one seeks to hurt others in their name.
Still, that is power's curious and cruel game.
To try to care calls back to Bethlehem. . .
Is caring rather like a requiem?
Edouard Lagache
April 13, 1995