RE: AI agency

Jay Lemke (jllbc who-is-at cunyvm.cuny.edu)
Wed, 07 Jan 1998 01:29:46 -0500

I'd never read about Turing's AI 'child' ideas, but strangely I had exactly
the same proposal some years ago when asked how to teach a machine to talk
in English: equip it with a learning mechanism and a schematic of a
functional grammar, and then dialogue with it. The principle here was not
linguistic but cultural: machines don't talk well in English (or any
other language) not because they don't know enough grammar, but because
they don't know what to say. This assumes that semantics depends as much on
cultural conventions about how reality works and what it makes sense to
say as on purely formal syntax-like rules.
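
For concreteness, a minimal sketch of what such a dialogue-trained learner
might look like, in present-day Python. Every name in it (TEMPLATES,
LEXICON, utter, learn) is a hypothetical illustration, not part of Turing's
scheme or mine beyond the bare architecture: a grammar schematic, a learning
mechanism, and tutor feedback through dialogue:

    import random

    # Grammar schematic: crude templates standing in for a real
    # functional-grammar schematic of clause types. Illustrative only.
    TEMPLATES = [
        "the {thing} is {quality}",        # relational clause
        "{person} {process} the {thing}",  # material clause
    ]
    LEXICON = {
        "thing": ["ball", "book"],
        "quality": ["red", "heavy"],
        "person": ["mother", "teacher"],
        "process": ["takes", "drops"],
    }
    # Learning mechanism: a preference weight per template.
    weights = {t: 1.0 for t in TEMPLATES}

    def utter():
        """Pick a template by current weight; fill its slots at random."""
        t = random.choices(TEMPLATES,
                           weights=[weights[x] for x in TEMPLATES])[0]
        slots = {k: random.choice(v) for k, v in LEXICON.items()}
        return t, t.format(**slots)

    def learn(template, approved):
        """Tutor feedback adjusts the weight of what was just said."""
        weights[template] *= 1.2 if approved else 0.8

    # Dialogue loop: the machine speaks; the human tutor approves or not.
    for _ in range(3):
        template, sentence = utter()
        approved = input("Machine says: '" + sentence + "'. OK? (y/n) ") == "y"
        learn(template, approved)

Note that such a sketch can only ever produce well-formed utterances;
whether they are worth saying depends entirely on the tutor's feedback,
which is where the cultural knowledge of what to say would have to come in.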

I suppose one might not be surprised to find Turing, as a gay man being
persecuted by the medieval (actually quite modern) superstitions of the
very homoerotically based English establishment (cf. English public
schools), fantasizing about artificial fatherhood and only too well aware
of how anyone 'different' might fare at school. Better that the child
should rely on his own inner resources and learn from the Books of Life and
Nature with as little scholastic mediation as possible ...

... which reminds me of Gerald Edelman's proposed secret ingredient for a
learning neural-net AI robot: perceptual-motor interaction with the
environment (supplemented by some 'value' biases, perhaps not unlike the
'positive/negative' inputs Turing proposed as a substitute for 'emotions'
in the AI child) ...
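
Again for concreteness, a comparably minimal sketch of how such 'value'
biases might gate learning in a perceptual-motor loop. This is a cartoon of
the idea, not Edelman's actual neuronal-group-selection models, and every
name in it is invented:

    import random

    STIMULI = ["light", "dark"]
    ACTIONS = ["approach", "avoid"]
    # Learned associations between what is perceived and what is done.
    weights = {(s, a): 0.5 for s in STIMULI for a in ACTIONS}

    def value(stimulus, action):
        """Innate bias standing in for 'emotion': light is good to
        approach. Built in, not learned, which is the whole point."""
        if action == "approach":
            return 1.0 if stimulus == "light" else -1.0
        return 0.0

    # Perceptual-motor loop: perceive, act (with some noise), and let the
    # value signal, rather than any explicit teacher, modulate the update.
    for _ in range(200):
        stimulus = random.choice(STIMULI)
        action = max(ACTIONS, key=lambda a: weights[(stimulus, a)]
                     + random.gauss(0, 0.1))
        weights[(stimulus, action)] += 0.05 * value(stimulus, action)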

As to Linnda Caporael's implicit and explicit queries:

I agree with Turing that the objection to thinking machines is emotional
(i.e. it has a basis other than the reasons actually advanced). There are
actually two bases for these feelings, I think. One derives from the deep cultural
principles by which humans distinguish ourselves, in many cultures at any
rate, from animals and from the inanimate. A great deal of our systems of
thought, identity, morality, etc. rests on these cultural foundations. Once
they are brought into question, there is genuinely a threat of anomie if
not anarchy. (I happen not to regard anarchy as necessarily a bad thing, in
moderation.) The recent moral panic over the possibility of human cloning
has, I think, a similar basis. The problem is that the people objecting have
never really analyzed the links between basic categorical assumptions about
natural kinds and complex social structures of legality, morality,
identity, etc. Their reasons are vague, prompted by generalized anxiety
rather than specific fears.

The second basis, I think, is the equally inarticulate presentiment that,
our cherished myths notwithstanding, it is perfectly clear that machines in
some sense will very soon succeed 'us' in the course of evolution. Of
course, since evolution is not linear or successional in any simple way, we
can imagine that some sort of cyborg that carries on our lineage, with
machine enhancements (probably organic-like in their physical basis, but
not necessarily), might dominate next. My own guess is that the distributed
computing model will lead to network systems, combining more human-like and
more AI-like components, that will emerge as conscious intelligences at a
higher level than what we experience now as organisms.
In any case, we are not going to be the crown of creation for very much
longer. Merely one of its lineage ancestors.

On the second great question of anthropomorphism, I think my own view is
pretty close to Linnda's. Human thinking about everything is some sort of
modified extension of human social thinking. There is some evidence in
language development for a prototypical role for interpersonal semantics in
the development of propositional semantics (the actual distinctions are
somewhat different, see Halliday's focus article in _Linguistics and
Education_ 5(2), 1993). I have always thought a lot of things made more
sense on the assumption that we think (i.e. talk, suppose) about things as
if they related to us just as other people do. In fact, I really don't
think modernist scientific discourse has done much other than disguise this
basic schema, not actually replace it with something else. Perhaps the
disguise has to some degree taken on a life of its own, and to that extent
it has become so counter-intuitive a way of thinking that many people find
it very difficult to learn (see Halliday & Martin, early chapters,
_Writing Science_). This is, however, a very complex subject, since it
depends on the relations between our qualitative and quantitative ways of
making meaning about both people and things. Maybe a better term is really
'sociomorphism'.

JAY.

---------------------------
JAY L. LEMKE

CITY UNIVERSITY OF NEW YORK
JLLBC who-is-at CUNYVM.CUNY.EDU
---------------------------