However, it seems quite possible, even likely, that something like a PDP
network coupled to a categorization-on-value engine such as Edelman's Darwin
III will allow the self-organized ontogeny of a machine intelligence. My own
suspicion is that the capacity to closely inspect the machine's inner states
will not provide powerful answers, if for no other reason than that by the
time the machine becomes "interesting" it will already be too complex for us
to completely understand, accurately predict, or control its behavior. It
seems to me these are things we already know about ourselves, but perhaps
that is the "ultimate" purpose of such research -- to demonstrate the point
more conclusively to those who remain in doubt.
That aside, I find much of the research in cognitive science fascinating and
potentially useful, so long as the objectivist stance is kept explicit; that
is, these are modeling efforts directed toward a viable theory of machine
cognition, independent of whether that theory corresponds to some "real"
state of affairs in human activity. To the degree such a theory might predict
human activities it could also come to have instrumental value, but even then
the assumption of isomorphism would probably be suspect (as, in my humble
opinion, it usually is in this and other matters).
Bruner, J. S. (1990). _Acts of Meaning_. Cambridge, MA: Harvard University
Press.
Edelman, G. M. (1992). _Bright Air, Brilliant Fire: On the Matter of the
Mind_. New York: BasicBooks.
Lakoff, G. (1987). _Women, Fire, and Dangerous Things: What Categories
Reveal About the Mind_. Chicago: University of Chicago Press.
* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *
Rolfe Windward (UCLA GSE&IS, Curriculum & Teaching)
ibalwin@mvs.oac.ucla.edu (text)
rwindwar@ucla.edu (text/BinHex/MIME/Uuencode)
CompuServe: 70014,00646 (text/binary/GIF/JPEG)
"I respect belief, but doubt is what gets you an education." W. Mizner