Re: a request / Connectionism

Jay Lemke (jllbc who-is-at cunyvm.cuny.edu)
Thu, 26 Mar 1998 23:40:05 -0500

I am about 30 messages away from being caught up with xmca right now, but
wanted to make a small remark about the rules-versus-connectionism and
innate language acquisition debate.

The argument that there is insufficient environmental info available to
allow rules to be inferred without innate help seems flawed to me in two
basic respects. First, it assumes, as others here have said, that language
production is itself rule-based rather than merely rule-describable (I am
not sure it is even the latter, though it comes closer to that). It seems
much more likely that, like all complex human behavior (and behavior in
complex systems much more primitive than people), it arises from the
interaction of more elementary tendencies, as an emergent phenomenon in a
self-organizing system where the "rules" or simpler procedures do not
simply add up, but couple to one another, feed back on one another, etc.
Surely even with a set of rules of the degree of complexity envisioned in
UG theory, if there were strong coupling among such procedures, the system
would be quite unpredictable; i.e. the theory would be useless except as
post hoc description. That does not mean simply that you can't predict what
people will say, which is obvious for other reasons (the larger and even
more complex system including their environments and histories), but that
you could not predict even the form of their sentences/utterances from the
grammatical theory. UG requires that the rules be strictly limited in their
interdependence, and this is contrary to general experience in modeling
related biological phenomena. Thus the odds are that (1) there are a lot
fewer basic procedures than UG rules, and (2) they are much more complexly
interlinked.
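
To make the coupling point concrete, here is a deliberately tiny toy (my
own sketch, nothing linguistic about it; the update rule, the parameters,
and the names are all arbitrary choices). Each unit follows a very simple
deterministic rule, but once the units are coupled to their neighbours, two
runs that start from almost identical states diverge until the system is,
in practice, unpredictable, useful only as post hoc description:

    import numpy as np

    def f(x, r=4.0):
        # one "elementary tendency": a simple deterministic update rule
        return r * x * (1.0 - x)

    def step(x, eps=0.3):
        # couple each unit to its two neighbours on a ring
        fx = f(x)
        return (1 - eps) * fx + (eps / 2) * (np.roll(fx, 1) + np.roll(fx, -1))

    rng = np.random.default_rng(0)
    a = rng.uniform(0.1, 0.9, size=10)
    b = a.copy()
    b[0] += 1e-10  # a nearly identical starting state

    for t in range(1, 61):
        a, b = step(a), step(b)
        if t % 10 == 0:
            print(t, float(np.max(np.abs(a - b))))

The tiny initial difference grows exponentially; within a few dozen steps
it is of order one, and knowing the rules exactly no longer lets you
predict the trajectory.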

Second, the arguments about low info in the linguistic environment are
mainly predicated on an assumption fundamental to UG-like theories: that
grammatical information is not available above the level of the clause or
sentence, i.e. that discourse-level information is ignored, as, by and
large, is situational-contextual information. This view made some sense in
the old days, when formal grammars tried to be independent of semantic
considerations, i.e. meanings, but it is no longer true even in UG, where
semantic features of the lexicon (i.e. the ways particular words have
idiosyncratic grammatical properties related to their meanings) are
increasingly seen as basic to practical description. Once discourse-level
and situational constraints, via semantics, can contribute to learning
grammar, the amount of available constraining information increases
drastically.
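
An almost trivially small toy (again my own, with a made-up lexicon and
candidate word orders, not any real learner) shows the flavor of the
argument: a bare word string eliminates none of the candidate orderings,
while a single semantically interpreted utterance can eliminate nearly all
of them:

    from itertools import permutations

    # six candidate constituent orders, the learner's hypothesis space
    orders = list(permutations(["agent", "action", "patient"]))

    # one observed utterance; the role labels stand in for the
    # discourse/situational semantics available to a real child
    utterance = ["dog", "bites", "man"]
    roles = {"dog": "agent", "bites": "action", "man": "patient"}

    # without semantics, any word could fill any role, so every
    # candidate order can account for the bare string
    survivors_without = orders

    # with semantics, the role sequence of the utterance is fixed by
    # its meaning, and only the matching orders survive
    observed = tuple(roles[w] for w in utterance)
    survivors_with = [o for o in orders if o == observed]

    print(len(survivors_without), "orders survive without semantics")
    print(len(survivors_with), "survive after one interpreted utterance")

Six candidates survive without the semantic information; one survives with
it. Scaled up, that is the difference the poverty-of-the-stimulus
calculations leave out.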

In short, I don't think we learn rules, nor are rules innate, nor do we
produce language by way of rules of the UG sort. We do learn some things,
call them meaning-procedures, which semiotically relate meanings to
wordings, and which neurologically are, so far as I know, entirely
undescribed as yet (though some Edelman-like re-entrant connectivity of
neuronal maps seems promising), and which are unlikely to map one-to-one
onto any semiotically meaningful procedures anyway, particularly if every
human brain learns to do these things in its own unique way (which seems
likely). The neurological processes are tightly interlinked, and the same
must hold for any equivalent set of semiotically meaningful procedures.
Hence relatively few of them can produce very complex, emergent, and
context-dependent behaviors. No formal grammars, not even the ones I like
to use, account for language in ways that I would consider even slightly
realistic from a neurological viewpoint. They can't, because people can't
make such models and see their consequences; hence such models have no
analytical-conceptual usefulness. They can, however, be made in
computational form, to simulate language production, and probably will be
some day. They will work, but no one will understand how they work.
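
For what it is worth, a toy of this kind can already be run at a
ridiculously small scale (this sketch is mine; the corpus, the network
size, and the training choices are arbitrary). A tiny network learns to
continue word sequences from a handful of examples, and it "works", but
nothing in its learned weights reads like a rule a human could state:

    import numpy as np

    # a toy corpus; whatever "grammar" there is lives in its statistics
    text = "the dog bites the man the man sees the dog".split()
    vocab = sorted(set(text))
    ix = {w: i for i, w in enumerate(vocab)}
    V, H = len(vocab), 8

    rng = np.random.default_rng(1)
    W1 = rng.normal(0, 0.5, (V, H))
    W2 = rng.normal(0, 0.5, (H, V))

    X = np.eye(V)[[ix[w] for w in text[:-1]]]  # current word, one-hot
    Y = np.array([ix[w] for w in text[1:]])    # next word to predict

    for epoch in range(2000):  # plain gradient descent on cross-entropy
        h = np.tanh(X @ W1)
        logits = h @ W2
        p = np.exp(logits - logits.max(axis=1, keepdims=True))
        p /= p.sum(axis=1, keepdims=True)
        g = p.copy()
        g[np.arange(len(Y)), Y] -= 1.0
        g /= len(Y)
        W2 -= 0.5 * (h.T @ g)
        W1 -= 0.5 * (X.T @ ((g @ W2.T) * (1 - h ** 2)))

    # the simulation "works": it produces plausible continuations
    for w in ["dog", "man", "the"]:
        h = np.tanh(np.eye(V)[ix[w]] @ W1)
        q = np.exp(h @ W2 - (h @ W2).max())
        q /= q.sum()
        print(w, "->", vocab[int(q.argmax())])
    # but the weight matrices themselves explain nothing to a human reader

The point is not the toy but the asymmetry: the trained weights predict,
yet no inspection of them yields a human-statable grammar.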

How, as organisms, we make meaningful behavior, and how it is useful to
talk meaningfully about that behavior are matters belonging to quite
different domains that we may not be able to usefully map onto one another.
Computational simulations will show that such mappings are possible, but
they will not be very enlightening for humans. They may be quite
practically useful, however. This case may become the paradigm instance of
a general intellectual shift in our future culture: that we will give up
the project of explaining complex systems in simple, human-comprehensible
terms, and be content with usefully simulating such systems for practical
purposes. Like the Copernican and Darwinian revolutions, this may be
Western culture's next big lesson in intellectual humility. No
master-narratives, no masters. Just intelligent participants.

JAY.

---------------------------
JAY L. LEMKE

CITY UNIVERSITY OF NEW YORK
JLLBC who-is-at CUNYVM.CUNY.EDU
---------------------------