signals, communication, and bunnies

Jay Lemke (jllbc who-is-at cunyvm.cuny.edu)
Sun, 07 Feb 1999 20:22:46 -0500

Luiz and Bill have taken our discussions on information and meaning
further. Some responses.

Luiz is quite right in adding some more precise engineering analyses of my
signals examples. I was trying to make things as simple as possible in
describing both the information and the meaning paradigms, so that I could
make the more important point about how they relate to each other. The
spike does contain an infinite range of frequencies, but I really don't
think it contains very much information, and I doubt you could really set
many switches with one ... all the information would have to be coming from
the analyzer. Real signals are "modulated"; they are not like either spikes
or pure tones (one frequency). There is also a sort of paradox that arises
from the duality of digital and analogue signalling. As a digital signal,
the spike is simplest; as an analogue signal, the pure tone is simplest. A
wonderful mathematical trick (and theorem, due to Fourier) assures the
possibility of converting information between these two forms ... in the
digital case you get a whole long series of spikes and no-spikes (or in
reality big spikes and little spikes), in the analogue case you get very
complicated wave-forms (big waves with littler waves riding on them, and
still littler waves on those, etc.) Of course the infinite spike (the delta
function) is just a mathematical idealization, not something in the
physical world; a close approximation of it could be like a very complex
waveform, with lots of information, but compressed enormously in time. In
this sense it could contain a lot of information. But that information does
us no good unless we have an analyzer that can respond on that super-fast
timescale. Bandwidth is not the only factor limiting how much information
gets communicated; the characteristics of the analyzer-receiver also
matter, and its response time, time-resolution, and frequency-resolution
(all matters of timescale) are critical.
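A minimal numerical sketch of this duality (my own illustration, not part
of the original discussion; the sampling rate and frequencies are assumed
values) can be written with NumPy's FFT:

import numpy as np

fs = 1000                      # sampling rate (Hz): the analyzer's time-resolution
t = np.arange(0, 1, 1 / fs)    # one second of samples

# "Spike": simplest as a digital signal, but its energy is spread
# across every frequency the analyzer can resolve.
spike = np.zeros_like(t)
spike[len(t) // 2] = 1.0

# "Pure tone": simplest as an analogue signal, a single frequency.
tone = np.sin(2 * np.pi * 50 * t)            # 50 Hz

freqs = np.fft.rfftfreq(len(t), 1 / fs)
spike_spectrum = np.abs(np.fft.rfft(spike))
tone_spectrum = np.abs(np.fft.rfft(tone))

print(spike_spectrum.std() / spike_spectrum.mean())   # ~0: flat across all bins
print(freqs[np.argmax(tone_spectrum)])                # 50.0: a single frequency bin
print(freqs.max())                                     # 500.0: Nyquist limit, fs/2

The last line is the point about timescale: the analyzer can only recover
frequencies up to half its sampling rate, so any structure in the spike
that lives above that limit is simply invisible to it, however much
bandwidth the signal itself nominally has.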

Luiz also emphasizes, and I very strongly agree, that you can't get very
far in actually applying the information or cybernetic engineering paradigm
without also understanding communication in terms of human cultural
systems, and so some form of the meaning paradigm. This is what is at stake
in the current debates surrounding Informatics programs, as Don Cunningham
noted (his original post got to me but not xmca, unfortunately) ... such
programs shouldn't limit themselves to bit-logic; they need social
informatics or one of the meaning-based approaches (CHAT, ANT, social
semiotics, oecologies, etc.) to provide essential contextualization. I was
only
separating these two paradigms in order to show how and why they need to be
brought back together again.

Bill wonders whether a notion of information is optional, in the sense that
we can describe system behavior quite well without it. In his most
interesting example, the bunny who anticipates veggie scraps, he offers an
alternative account that is based on uniting bunny and people and scraps
into a system with a history. This is an example of a complementary
explanatory strategy that is always available, and worth taking note of. I
myself tend to use this strategy a lot; it represents an interesting
variant on the semiotic paradigm.

We can make the semiotic paradigm look rather close to the
information-signal paradigm, if we want to. That's what I did in my earlier
long post in order to more easily contrast and combine their features. In
effect we make both paradigms look at information/meaning in terms of
_communication_. The sign is then like a signal, the information it carries
like the meaning of the sign, and the analyzer like the interpreter. It can
all be made to look like there is a source of info/meaning and a receiver,
as well as a medium. What Bill does, or I often do, is to reject the
factorization of the system, to reject the initial (and usually hidden)
step of seeing the source and receiver, or the sign-vehicle and
interpreter, as a priori separate entities. In the alternative view, we
consider them, much more justifiably usually, to be just different
components of the same larger, compound system.

Now our task is to describe what is happening in terms of internal
correlations or processes within that system, and we may account for these
in terms of the system's history (or design, or functionality, or
evolution, or development, or emergent properties). Many years ago (late
1970s), I started from some suggestions of Bateson's and re-did semiotics
in terms of the internal "meta-redundancies" within a complex multi-level
system. (If interested, see the Postscript to my _Textual Politics_, 1995.)
In this view there is really very little difference at all between
information theory and semiotics, except that information theory is
extended to take into account context and the 'codes' that link information
capacity to actual meaning-content. Semiotics also appears in the fact that
the minimal unit of analysis has three rather than two constituents. But
formally, all one does in this alternative model is just to describe the
internal properties of the joint system. It was my first indication that
semiotic systems have to be multi-leveled, logically and materially. It
also fits quite well with models of joint activity, and situated or
distributed cognition. Its weakness was that it was not historical; I did
not yet see how time-scales also had to play a role. It did not account for
change and was not very dynamical. That came later.
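For what it's worth, a toy sketch of the meta-redundancy idea (my own
illustration, not the formalism in _Textual Politics_; the data and labels
are made up) can be put in a few lines of Python:

from collections import Counter
import math

# Hypothetical observations of (context, sign, meaning). In context 'c0'
# the sign predicts the meaning one way; in 'c1' the mapping is reversed.
data = [('c0', 's0', 'm0'), ('c0', 's1', 'm1'),
        ('c1', 's0', 'm1'), ('c1', 's1', 'm0')] * 25

def mutual_information(pairs):
    """I(X;Y) in bits, estimated from a list of (x, y) observations."""
    n = len(pairs)
    pxy = Counter(pairs)
    px = Counter(x for x, _ in pairs)
    py = Counter(y for _, y in pairs)
    return sum((c / n) * math.log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

# Two-term unit of analysis: sign and meaning alone look uncorrelated.
print(mutual_information([(s, m) for _, s, m in data]))                    # 0.0 bits

# Three-term unit: within each context the sign fully determines the
# meaning; the redundancy exists only relative to the context.
for ctx in ('c0', 'c1'):
    print(ctx, mutual_information([(s, m) for c, s, m in data if c == ctx]))  # 1.0 bit each

The "meta" part is that the sign-meaning redundancy is itself predictable
from the context, which is roughly why the minimal unit of analysis has to
be three-termed rather than two-termed.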

JAY.

---------------------------
JAY L. LEMKE
PROFESSOR OF EDUCATION
CITY UNIVERSITY OF NEW YORK
JLLBC who-is-at CUNYVM.CUNY.EDU
<http://academic.brooklyn.cuny.edu/education/jlemke/index.htm>
---------------------------