Re: Usability and semiotics

Clay I Spinuzzi (spinuzzi who-is-at iastate.edu)
Thu, 06 Nov 1997 11:48:43 CST

>To me, the issue of teacher assessment in relation to how to use these
>multimedia tools for learning is pretty pivotal. Teacher assessment exerts
>social pressure on the kids that is at least as powerful as the actual
>programmed structures of the software. (ZPD) The larger point I'm getting at
>here is that, nothing in any design seems etched in stone (or silicon, for
>that matter). The users (their context and their activity) negotiate the
>ultimate workable design of any cultural artifact. The goal of usability
>research, as far as I can understand it, is to keep chasing this butterfly,
>trying to get at the problems that emerge in practice and come up with
>adequate responses...

The problem is that the currently dominant tradition of human-computer
interaction (HCI) design and evaluation--computational psychology--is
predicated on certain (frankly Cartesian) ideas about users, e.g.:

- the human mind and the computer are essentially similar data-processing
units, whose behavior is governed by their innate characteristics
("wiring") and their input ("conditions")

- human-computer interaction, then, is largely a matter of getting the
input and output of these two "machines" to synchronize

- semiotics, therefore, is usually explored using the conduit metaphor
or as a code system (e.g. what meaning matches which symbol)

- artifacts, then, are cultural only in the sense that the code
originated in a particular culture--otherwise, they're things-
in-themselves

- since users are all essentially the same as each other and the same
as their computers, the individual should be the molar unit of analysis

- AND since users have similar innate characteristics and similar
"conditions," usability is not negotiated but rather an ultimate,
universally achievable standard

Nardi's (1996) _Context and Consciousness_ has several critiques of
computational psychology. CompPsych's assumptions shape HCI research and
restrict what researchers "see"--even much ethnographic research in HCI
focuses on specific "situations" in which individuals or dyads essentially
react to conditions surrounding them. And certainly other modes of HCI
research (such as experimentation) tend to be conducted with the above
notions in mind and have, to some extent, embedded them.

Although a LOT of HCI researchers are trying to work outside the tradition
(such as our friends at Xerox PARC, Rank Xerox Research Center, and Nardi's
group at Apple), it's still dominant. To use your metaphor: researchers in
this tradition are not only chasing the butterfly, they believe that they
have it cornered and they're about to capture it--and once they do, they can
pin it up, enter it into their logs, and not have to chase it again. When
someone asks about the butterfly, they will be able to refer to their notes
and answer any questions one might have. Their methods of chasing the
butterfly are chosen with these goals in mind.

I'm intensely interested in whether our usability testers have had to
struggle with this tradition. Have you been asked to study individuals
rather than workgroups? Have you seen your situated recommendations turned
into universals? Have your thick ethnographic descriptions been generalized
as "the user experience"? Or have your managers been willing to see your
ethnographic work as localized and of heuristic value?

-----
Clay Spinuzzi
206 Ross Hall
Iowa State University
Ames, IA 50011
spinuzzi who-is-at iastate.edu
http://www.public.iastate.edu/~spinuzzi