meaning vs. information

Jay Lemke (jllbc who-is-at cunyvm.cuny.edu)
Sat, 06 Feb 1999 01:05:04 -0500

In our expansive forays into the semiosphere, we have not actually
addressed Mike's other question: the relationship between "information" and
"meaning".

I've had occasion to think about this question rather a lot. A few years
ago I was a consultant to a large research organization that was in the
process of merging its computing division with its information division.
Management was a bit worried that people from the two former units kept
talking at cross-purposes, using words like 'information' in basically
different ways and not understanding each other's discourses. The info unit
included the library, the archives, internal and external publications,
communication and editing. For these people information was about what made
any two texts of the same length different. The computing unit did research
on hardware and software, ran the networks, and provided computing services
to the other divisions. For them information was about what made any two texts
of the same length the same. What mattered to the computing unit was _how
much_ information a text (or anything) contained. What mattered to the
librarians was _what_ information a text or film contained.

I hope this begins to illustrate both the range of meanings of the term
"information" and the core of the relationship between information in its
most technical sense (the computer version) vs. information as a close
synonym for "meaning" (the librarians' and writers' version).

In fact, as you might suspect, it is actually rather hard to make a clean
conceptual separation between "information" and "meaning", despite the
polarization of usage that happens in extreme cases. But there are two
rather different, and interestingly related, conceptual schemes behind
these notions, regardless of what we call them, or of the fact that both
schemes merely tell different parts of the same story.

One scheme, which I will call the info scheme, traces its ancestry back
from computers (bits) to electrical engineering (switches and signals) to
railroading (switches and signals!). Interest in this scheme arises from a
profound and simple phenomenon: it is possible to build a machine in which
a small amount of energy (a signal) can influence (via a "switch") what
happens to a large amount of energy (e.g. a speeding train). Thus is born
the science of control, formally called cybernetics, but for the most part
just an integral aspect of all engineering. In most simple natural systems
it is rare for a small-energy event to significantly influence the course
of a large-energy system. The normal rule of physics and chemistry is that
it takes something big to affect something big. There are exceptions:
sometimes the future of the big thing is 'on the edge' between two possible
alternatives (a bifurcation) and a small push can determine which
path it takes. We design our technologies so that puny little amounts of
human muscular energy can control vast amounts of matter and energy in some
larger system. It's all done with switches. What controls the switch is a
"signal", a small burst of energy that sets the switch ON or OFF, LEFT or
RIGHT.

The simplest possible signal is a 'spike', a very short-lived burst of
energy. There has to be a "rule" or code that connects the signal to the
state of the switch: does a spike set the switch to ON or does it set it to
OFF? The simplest possible switch has to have two states, and so there are
really always at least two signals: spike and no-spike, or tone vs.
no-tone. The idea of a 'signal' is misleading ... it is the DIFFERENCE
between spike and no-spike, tone and no-tone, light and dark, red and
green, that is needed to set the switch properly. A difference that makes a
difference.
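A minimal sketch in Python may make this concrete (the names `decode` and `SPIKE` are mine, purely illustrative): the "rule" or code is just a mapping from the two possible signals to the two switch states, and it is the difference between the signals, not either signal by itself, that carries the one bit.

```python
SPIKE, NO_SPIKE = True, False  # the two possible signals (illustrative names)

def decode(signal: bool) -> str:
    """The 'rule' or code: a spike sets the switch ON, no-spike sets it OFF.
    Only the DIFFERENCE between the two signals matters -- the opposite
    convention (spike -> OFF) would carry exactly the same one bit."""
    return "ON" if signal else "OFF"

print(decode(SPIKE), decode(NO_SPIKE))   # ON OFF
```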

In electrical engineering (and in human hearing, for that matter) it is
possible for one signal to set more than one switch. A signal does not have
to be simple like a spike or a tone, it can be a complex pattern of
continuously varying energy over time. You can build 'analyzers' that are
sensitive to whether this pattern contains certain features (e.g. whether
certain frequencies of sound are present in a spoken word), and wire a
different switch to the presence or absence of each feature (or to whether
the amount of the feature is greater than some threshold value). Now, one
complex signal can set many switches. Many differences within the signal,
as interpreted by the analyzer, can now make a difference to many different
switches. Problem: what is the largest number of switches you can set with
a given signal?

The solution to this problem gives rise to the technical concept of
information. The amount of information that a signal can carry, relative to
a given analyzer, is more or less the number of switches you can set with
it. In netspeak, this is often called the "bandwidth" of the signal: how
wide the band of frequencies in the signal is, with the analyzer set to
pick out one frequency for each switch. Each switch is one "bit" of
information. Since the signal also varies in time, we can talk about the
number of bits per second. There is also an important conceptual difference
between the amount of information a signal CAN carry (its
information-carrying capacity) and the amount that it DOES carry. DOES is
less than CAN because there is always NOISE ... which means that some
switches accidentally get set wrong ... and adapted and designed systems
compensate for this with REDUNDANCY: the signal repeats the information on
how each switch is to be set, which wastes capacity but can correct for
errors due to noise. Sometimes there is more redundancy than is actually
needed, in which case we can COMPRESS the information so that it can be
carried by a signal with a lower capacity (bandwidth). Sometimes we
compress it too much, and then we lose "quality" (watch video on a slow
computer over a slow modem!) -- we lose information, there is not enough
left in the signal to set all our switches right (our ears, eyes, and
brains are the analyzer).
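The NOISE/REDUNDANCY trade-off above can be shown with the simplest error-correcting scheme, a 3x repetition code (my toy sketch, not anything in the post; real systems use far more efficient codes). COMPRESS is the reverse move: squeezing out repetition so fewer bits are needed.

```python
def add_redundancy(bits, n=3):
    """REDUNDANCY: repeat each bit n times -- wastes capacity (3x here)."""
    return [b for b in bits for _ in range(n)]

def flip(bits, positions):
    """NOISE: the switches at these positions accidentally get set wrong."""
    return [b ^ (i in positions) for i, b in enumerate(bits)]

def correct(received, n=3):
    """Majority vote within each group of n repeats fixes isolated errors."""
    return [sum(received[i:i + n]) * 2 > n
            for i in range(0, len(received), n)]

message = [True, False, True, True, False]   # 5 bits of information
coded = add_redundancy(message)              # 15 bits on the wire
noisy = flip(coded, {1, 7, 12})              # noise hits three switches...
assert correct(noisy) == message             # ...and majority vote recovers all
assert flip(message, {1}) != message         # without redundancy, errors stick
```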

None of this depends on which way the individual switches are actually set
in a particular case. That is, none of it depends on the content or meaning
of the information, only on "how much" information is needed or used. If I
wrote this next sentence in Navaho, or encrypted it into a string of
nonsense characters of equal length, it would require the same number of
bits of information to send it by email; the same amount of information
would be there, but you would probably get no meaning from it (or much less
than you're getting from this sentence now). At the level of information
bits, text, music, images, video ... all are quantitatively
inter-convertible, perfectly commensurable. At the level of meaning this is
almost certainly not so.
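A quick illustration of the commensurability point (using ROT13 as a length-preserving stand-in for real encryption, which the post does not specify): the scrambled sentence costs exactly as many bits to email, but yields essentially no meaning.

```python
import codecs

sentence = "The same amount of information, but much less meaning."
scrambled = codecs.encode(sentence, "rot13")  # toy stand-in for encryption

# Identical information cost: same number of bytes, hence bits, to transmit.
assert len(sentence.encode("utf-8")) == len(scrambled.encode("utf-8"))
print(scrambled)  # a string of nonsense characters of equal length
```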

The second conceptual scheme, which I will call the meaning scheme, traces
its ancestry back through semiotics to linguistics and philology, and back
probably to medieval hermeneutics. It is all about the interpretation OF
information. All the switches have already been set, but what will the
machine DO? Most machines are designed so that for any given setting of the
switches there is one and only one thing the machine will always do. People
are not that simple, and neither are many sorts of complex systems.
Meaningful human artifacts, like texts or pictures, when interpreted by
people, lead to a wide variety of human behavior (the evidence that people
have interpreted the items differently), often in context-dependent ways. The
basic scheme for analyzing this phenomenon goes beyond the information
content of signals to discuss the meaning content of signs. Thus is born
the science of semiotics (vs. cybernetics, above).

When a signal sets a switch, there is a direct material interaction. A and
B interact with one another, they exchange energy, action and reaction.
There is no interpretation, no sign. A sign, unlike a signal, requires a
third element, a C. When the analyzer analyzes the signal, it only cares
(i.e. it only makes a difference for the switch) if the frequency is
present in the signal or not. The analyzer is not looking past the signal
to see what it is a signal OF! But the interpreter is: when I see a
particular setting of a switch, I want to know WHY it was set that way, I
want to know WHAT set it that way, I want to know what WILL happen because
it is set that way. The setting of the switch is for me a SIGN of something
else. A sign has MEANING because it is a sign OF something FOR some
interpreter. A kick sets a railroad switch -- that's a signal: there are
two elements in the relationship, it's a purely physical interactive
relationship, "causal". I see the switch, and for me it is a sign of
somebody having kicked it -- there are now three elements in the
relationship, and the interpretation of the state of the switch is not a direct
causal effect of the switch on me in the same sense that the state of the
switch is a direct causal effect of the kick on the switch.

The state of the switch by itself provides one bit of information, less
than one vowel or one consonant of a spoken word, less than one letter of a
written word. The meaning, the significance FOR ME of that thrown switch,
might require an entire sentence to paraphrase. Where is the extra
information coming from? In one sense from me, but in a larger sense it is
coming from _contextualization_: I am reading the state of the switch not
just in and of itself, but also as an element of a more complex scenario.
The set of possible or plausible scenarios is built up out of cultural
resources; contextualizing the railroad switch with just one of these
represents the setting of a lot more interpretative switches. What is the
SI, the System of Interpretation, the system in which interpretation in the
semiotic sense gets done? It is a very large system indeed, some subset of a
community's networks of activity over an extended period of time, including
the activity of me interpreting this switch now.