I do agree that theory-builders and users can find it valuable to
examine the cross-connections among elements of their toolkits.
Most uses require several tools in conjunction, and we need to
know better what happens when we combine and connect. And, yes,
of course I also agree that when modifications are made in one
tool, we need, for the same reasons, to consider what happens to
the relations of that tool to others. I do not, however, think this
process is like that in an axiomatic system where a change in one
thing _forces_ changes in other parts, like dominoes falling. The
problem I think with the architectural metaphor, or the axiomatic
ideal, is an overly rigid model of interdependence.
I realize Bill is not pushing for axiomatics, but it seems to be
implicit in how most people justify the need for theoretical
'rigor'. Theories cannot be rigorous, and Bill is quite right
that many mathematicians prefer to ignore Goedel. If they took
him seriously they would have to redefine the entire enterprise
of mathematics, and that is not institutionally very likely. The
analogue in language studies is the sort of opposition Rob Spence
constructed between the 'pure' linguists and the 'applied' ones.
Pure linguistics is not a meaningful enterprise for me, or rather
what it claims to be and to desire is not believable. I don't
believe that you can frame an inquiry autonomously; there must
always be some external value-principles, some reasons to look at
things as you do, which are not derivable from your 'data' or
internally within a pseudo-axiomatic scheme. The MIT school has
taken this 'pure' approach to language studies to such an
extreme, at least rhetorically. This approach is in fact
_rhetorically_ very powerful. It is nearly impossible to argue
with it unless you enter its framing assumptions, and once inside
them, you have already lost all the important arguments.
Bill McGregor, I'd say, is as 'applied' a linguist as any of us,
and maybe more than most. His work derives from the enterprise of
anthropological linguistics, saying interesting and useful things
about what people do with language and how. The questions he
poses, the features of his data he foregrounds or defines into
being, have a lot to do with understanding the role of language
in culture, and vice versa. But he, and many of us (me less than
many of you), have another sort of problem: the rhetorical one.
"Why should any other linguist believe me otherwise?" he asks.
How can he justify the tools he uses to another linguist? I would
ask, why should any other linguist care about, or challenge, the
tools you use? Why are so many academic disciplines built on an
adversarial rhetoric? There are important cultural answers to
that question, having not a little to do with dominance
hierarchies among males, and the masculinist image of many
disciplines. I don't care about the truth of your claims, or your
choice of tools, except as I can use them or appropriate them. I
will look at what you say you are trying to describe or account
for (as close as you can let me get to your starting point, the
events, the tapes), at how you analyzed and reconstituted that
into something your tools could work on, what tools you used and
how, and what you have to say in the end and what use it might be
to me or someone else. I can learn, potentially, a lot (or
nothing) from this sort of reading of another person's work. As I
read I am, when moved, going to reimagine my own analysis, my own
interpretation, using other tools. I will not set one against the
other in terms of 'truth', but will add one on top of the other,
enlarging the repertory of views, so that something useful might
more likely come from the whole set.
When I argue against a position, I do it because some positions,
if taken as the final word on a matter, can foreclose inquiries
by other, especially less established, investigators.
Alternatives are salutary; preventing closure is necessary. I
argue against
narrow theories and interpretations that foreclose, and in favor
of looser ones that open up more possibilities. I argue against
truths and for tools.
My answer to the question, "Why should I use it (a tool, the
metafunction hypothesis, or, better, perspective)?", is: never
because you _have to_, never because theory or truth demands it
(i.e. because someone is very, very cleverly trying to coerce you
to think one way rather than another), certainly not because 'the
data' demands it, but because -- it's fun, it's interesting, it's
useful, it helps for some further purpose outside the data, the
theory, the discipline. Because it gets you to perspectives or
problems or further questions that you and/or other people find
worthwhile -- by many value-criteria.
The most interesting point in Bill's message, for me, was the one
about iconicity between theories and 'data' (there is no such
thing as 'data' in the classic sense: facts given apart from
positioned, selective construction), or, here, 'phenomena' such
as language-in-use. In physics, there is supposed to be a certain
homology, a possible 1-1 mapping between elements of the theory
and elements of the data. More precisely, there is a set of
these mappings, from most abstract theory, to specific model, to
instance of the model. Most of the interesting and hard work,
most of a theoretician's expertise, lies hidden in the mapping
strategies, while only the tip of the iceberg is visible in the
theory or model as such. Knowing physics is knowing _how to use_
the theory or model. Just learning what the theory or model says
is much easier, and quite useless.
There is also another sense in which theories or models and
'data' (which are really just another, lower-level model; it's
models all the way down!) are dissimilar. They have some sort of
'structural' similarity (the basis of the homology), but are made
of different 'stuff', their units, their constituent elements are
on a different plane, or in different registers, different domains
of semantics. The higher levels could not be more abstract
otherwise.
But this raises a very important problem. The strategy of
abstract modelling, so successful in some areas of natural
science, breaks down when we wish to model the relatively unique
features of "individuals" (i.e. systems or phenomena or instances
whose features of interest are not the ones, or not just the
ones, they share with other instances). Aristotle said there
could be no science of the particular, but today we are all
trying to build just such discourses. The systems and phenomena
of the human sciences are not like electrons, all of which are
identical and interchangeable; know one and you know them all --
their only features of interest are the ones they share with all
others of their kind. Texts are not like that, neither are
languages, or cultures, or communities, or ecosystems, or complex
organisms. The basic strategies and goals of science itself need
to be rethought in these circumstances. This rethinking is just
beginning at the close of this century. It may be that what is
needed now is not abstract modelling, but something more like
system simulation, in which the simulation still has homologies
to the system (phenomenal, not SFG 'system'), and is still made
of a different 'stuff', but is not more abstract or applicable to
other systems. Again, it will be the know-how of building such
simulations, the hidden 'mapping' skills, that will mark the
accomplished analyst, using a toolkit that is useful for
simulating a range of different systems. And simulating aspects
of them that someone is interested in for some further purpose
which will define the usefulness of the simulation. JAY.
------------
JAY LEMKE.
City University of New York.
BITNET: JLLBC@CUNYVM
INTERNET: JLLBC@CUNYVM.CUNY.EDU