Michael,
This will have to be short since I'm in the middle of writing.
The so-called learning paradox, like many of its ilk (paradoxes), rests
on special assumptions that restrict the world of the paradox in ways that
make the paradox possible. The paradoxical nature of paradoxical statements
invariably arises from the contradiction between the constructed world of
the paradox and the world we actually live and work in (sorry about my
English).
The learning paradox is an interesting one: it is a most elegant rendering of
the paradox engendered by philosophical idealism, and especially by subjective
philosophical idealism of the Kantian kind. The special assumptions of the
learning paradox are that we only know through thought and that thought is
essentially subjective. Both Pragmatism - G. H. Mead, Dewey, and so on - and
Historical Materialism regard thought as:
1. Social and therefore external to the subject.
2. Only one of many means by which men interact with the world.
These two minor modifications of the concept of man's relation to the world
effectively vaporize the learning paradox. If learning is:
1. a matter of adopting socially engendered and enabled consciousness and
purposes, and if
2. the realization of social consciousness and purpose in the world is
effected by the interaction of the individual with conditions that include
human thought, but only as a part of the totality of the extant sensible
world (universe, perhaps?), then
3. the learning paradox evaporates into thin air, leaving behind it the
realization that the peculiarly European intellectualist aberration of
regarding the world solely in terms of thought and of regarding thought as
strictly subjective activity is totally inadequate for the explication of
the educational process.
With highest regards,
Victor
----- Original Message -----
From: "Michael Glassman" <MGlassman@hec.ohio-state.edu>
To: <xmca@weber.ucsd.edu>
Sent: Wednesday, July 28, 2004 6:06 AM
Subject: RE: Learning Paradox
So I don't know which direction to go with this - so I'm just going to forge
ahead with something interesting I read related to this. And sticking to
this whole Pragmatism trip I'm on, I thought it might be interesting to pose
it as a thought experiment (not by me, but by Daniel Dennett - did I get the
name right?). It seems this is a big argument among the cognitive scientists
themselves, with the Pragmatic AI cognitive scientists (Dennett lists them
all but I can't remember - but he lists Rorty, who's not AI or a cognitive
scientist, but always a good ally in a pinch, I suppose) against the more
nativist cognitive scientists such as Fodor and Searle.
Before I copy the thought experiment - and it is a little long, and can hurt
the head under some circumstances - I do want to say that Dennett makes a
distinction between simple physiological systems, such as plants, and humans
(sort of following on what Geoff says here - which is sort of what made me
think of this whole thing).
Here is the thought experiment with apologies to Professor Dennett (I hope
this is legal). I will offer a couple of lines on the end concerning my own
thinking. If there are no responses I will assume everybody's plate is too
full, or everybody went to the beach (can't do that in Ohio). Oh, one more
thing, which I think is really interesting in thinking about this:
von Glasersfeld (again, possible apologies about the name, but it's late and
I don't want to look it up) suggests that the Learning Paradox focuses on
the benefits of inductive logic, while Joe Glick's accommodation,
assimilation, and adaptation focus more on abductive logic (I got a C- in
logic in college, so I'm not going any further with that).
Here goes,
Suppose you decided, for whatever reasons, that you wanted to experience
life in the 25th century, and suppose that the only known way of keeping
your body alive that long required it to be placed in a hibernation device
of sorts, where it would rest, slowed down and comatose, for as long as you
liked. You could arrange to climb into the support capsule, be put to sleep,
and then automatically awakened and released in 2401. This is a time-honored
science fiction theme, of course.
Designing the capsule itself is not your only engineering problem, for the
capsule must be protected and supplied with the requisite energy (for
refrigeration or whatever) for over 400 years. You will not be able to count
on your children and grandchildren for this stewardship, of course, for they
will be long dead before the year 2401, and you cannot presume that your
more distant descendants, if any, will take a lively interest in your
well-being. So you must design a supersystem to protect your capsule, and to
provide the energy it needs for four hundred years.
Here there are two basic strategies you might follow. On one, you should
find the ideal location, as best you can foresee, for a fixed installation
that will be well supplied with water, sunlight, and whatever else your
capsule (and the supersystem itself) will need for the duration. The main
drawback to such an installation or "plant" is that it cannot be moved if
harm comes its way--if, say, someone decides to build a freeway right where
it is located. The second alternative is much more sophisticated, but avoids
this drawback: design a mobile facility to house your capsule, and the
requisite early-warning devices so that it can move out of harm's way, and
seek out new energy sources as it needs them. In short, build a giant robot
and install the capsule (with you inside) in it.
These two basic strategies are obviously copied from nature: they correspond
roughly to the division between plants and animals. Since the latter, more
sophisticated strategy better fits my purposes, we shall suppose that you
decide to build a robot to house your capsule. You should try to design it
so that above all else it "chooses" actions designed to further your best
interests, of course. "Bad" moves and "wrong" turns are those that will tend
to incapacitate it for the role of protecting you until 2401--which is its
sole raison d'être. This is clearly a profoundly difficult engineering
problem, calling for the highest level of expertise in designing a "vision"
system to guide its locomotion, and other "sensory" and locomotory systems.
And since you will be comatose throughout and thus cannot stay awake to
guide and plan its strategies, you will have to design it to generate its
own plans in response to changing circumstances. It must "know" how to "seek
out" and "recognize" and then exploit energy sources, how to move to safer
territory, how to "anticipate" and then avoid dangers. With so much to be
done, and done fast, you had best rely whenever you can on economies: give
your robot no more discriminatory prowess than it will probably need in
order to distinguish what needs distinguishing in its world.
Your task will be made much more difficult by the fact that you cannot count
on your robot being the only such robot around with such a mission. If your
whim catches on, your robot may find itself competing with others (and with
your human descendants) for limited supplies of energy, fresh water,
lubricants, and the like. It would no doubt be wise to design it with enough
sophistication in its control system to permit it to calculate the benefits
and risks of cooperating with other robots, or of forming alliances for
mutual benefit. (Any such calculation must be a "quick and dirty"
approximation, arbitrarily truncated. See Dennett forthcoming.)
The result of this design project would be a robot capable of exhibiting
self-control (since you must cede fine-grained real-time control to your
artifact once you put yourself to sleep). As such it will be
capable of deriving its own subsidiary goals from its assessment of its
current state and the import of that state for its ultimate goal (which is
to preserve you). These secondary goals may take it far afield on
century-long projects, some of which may be ill-advised, in spite of your best
efforts. Your robot may embark on actions antithetical to your purposes,
even suicidal, having been convinced by another robot, perhaps, to
subordinate its own life mission to some other.
But still, according to Fodor et al., this robot would have no original
intentionality at all, but only the intentionality it derives from its
artifactual role as your protector. Its simulacrum of mental states would be
just that-- not real deciding and seeing and wondering and planning, but
only as if deciding and seeing and wondering and planning.
All right, now away from Dennett's brilliance to my more mundane questions.
Is it possible that the robot cannot, and will not, have any original
intentionality beyond what we have created for it, and if so, is it possible
for the robot, and for us as its creators, to survive? Isn't extinction
inevitable if we follow the whole idea of the learning paradox? By the way,
according to Dennett, Fodor seems to hate evolutionary theory (or is at the
very least annoyed by it).
Hey, did anybody see Obama tonight? He rocked! And a great Pragmatist!
Michael
________________________________
From: Geoff Hayward [mailto:geoff.hayward@edstud.ox.ac.uk]
Sent: Tue 7/27/2004 6:03 PM
To: xmca@weber.ucsd.edu
Subject: RE: Learning Paradox
Physiological metaphors one and all, and a physiological system can but
react according to the set parameters (thank you, Claude) - and that is the
learning paradox. But if you move beyond the physiological individual
you find some bootstrapping devices, albeit limited by our collective
intelligence - which begs another question ... an additional unit of
analysis - the activity system. But how does this arise, and how well
does Activity Theory deal with issues of identity ... grey moments in an
English summer.
Geoff
Dr Geoff Hayward
Associate Director SKOPE
OUDES
15 Norham Gardens
Oxford
OX2 6PY UK
Phone: +44 (0)1865 274007
Fax: +44 (0)1865 274027
e-mail: geoff.hayward@edstud.ox.ac.uk
-----Original Message-----
From: Glick, Joseph [mailto:JGlick@gc.cuny.edu]
Sent: 27 July 2004 18:47
To: 'xmca@weber.ucsd.edu'
Subject: RE: Learning Paradox
Assimilation, accommodation, adaptation, organization anyone?