RE: Of forks and computers

Eugene Matusov (ematusov who-is-at UDel.Edu)
Wed, 20 May 1998 21:22:14 -0400

Hello everybody--

Can computers think or will? I don't know for sure, but what I do know
is that computers can remove a family from welfare social support.

When my family was on welfare at the beginning of our immigrant life in the
US, we were removed several times from financial and food-stamp assistance
because of "a computer mistake" (the "computer mistake" label, or social
construction, was a result of discourse between us, our social workers, and
in some cases their supervisors). In our experience, "the computer" in a
social service department was a much more powerful agent and decision maker
than the social workers and even their supervisors. The real people were in
the service of "the computer," entering the data and lawfully interpreting
its commands (printouts).

God bless the people who invent "computers." Just a thought.

Eugene

> -----Original Message-----
> From: Naoki Ueno [mailto:nueno@nier.go.jp]
> Sent: Tuesday, May 19, 1998 10:04 PM
> To: xmca who-is-at weber.ucsd.edu
> Cc: Naoki Ueno
> Subject: Re: Of forks and computers
>
>
> At 10:01 AM 5/19/98 -0400, Bill Barowy wrote:
> >I think so. There is the need to capture the difference between people
> >and not-people, or perhaps in a broader sense,
> >*cognizers/actors/volitizers* and not *...*. Not to apologize for, nor
> >to defend dichotomies, here the dichotomy is just a viable construct,
> >with viability that is a function of the moment. Lasagna may take part
> >in a system with me, but it lacks cognition, volition, intent, and
> >memory in doing so. Lasagna does not think, does not make choices, does
> >not remember, does not need.
>
> Bill,
>
> How about ELIZA, the DOCTOR program? Are "they" *cognizers/actors/volitizers*?
>
> In a specifically organized context, people treat ELIZA as if it had
> intelligence, intention, or motivation. This treatment of ELIZA as a
> *cognizer/actor/volitizer* organizes a specific interaction between
> that person and ELIZA.
>
> In other words, in a specific context, we organize an interaction as if
> the partner were a *cognizer/actor/volitizer*. However, I am not sure
> that "humankind" really has intelligence, intention, or motivation
> inside of it.
>
> The term "volition" is a very convenient tool for accounting for
> people's (or machines') actions and for giving a consistent account of
> people's actions. Further, this account becomes part of the resources
> for organizing a specific interaction.
>
> However, it will be difficult to answer questions such as the
> following: "Where is volition in the brain?" "What is volition?" I
> think there is no answer to these questions.
>
> If so, terms such as *cognizers/actors/volitizers* are not the names of
> entities but resources for organizing a specific interaction. And this
> specifically organized interaction elaborates the meaning of
> *cognizers/actors/volitizers* as interactive resources.
>
> AI research attempted to design a machine actor, an "interactive
> machine," that has plans, intelligence, intention, or motivation inside
> of it. And we know that was an illusion. How was it an illusion?
>
> Naoki Ueno
> NIER, Tokyo
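
Naoki's point about ELIZA is easy to see in code: ELIZA's apparent
"intention" comes from nothing more than keyword matching and pronoun
reflection over the user's last utterance. Here is a minimal sketch in
Python in the spirit of Weizenbaum's DOCTOR script (the rules and
reflection table below are illustrative stand-ins, not the original
script):

    import re

    # Illustrative keyword -> response-template rules in the spirit of
    # the DOCTOR script (the real script is longer, with ranked keywords).
    RULES = [
        (re.compile(r"\bi need (.*)", re.I), "Why do you need {0}?"),
        (re.compile(r"\bi am (.*)", re.I), "How long have you been {0}?"),
        (re.compile(r"\bmy (\w+)", re.I), "Tell me more about your {0}."),
    ]
    DEFAULT = "Please go on."

    # Swap first and second person so echoed fragments read naturally.
    REFLECT = {"i": "you", "me": "you", "my": "your", "am": "are", "you": "I"}

    def reflect(fragment):
        return " ".join(REFLECT.get(w.lower(), w) for w in fragment.split())

    def respond(utterance):
        # No model of the speaker anywhere: scan for a keyword pattern
        # and echo a transformed fragment of the input back.
        for pattern, template in RULES:
            match = pattern.search(utterance)
            if match:
                return template.format(*(reflect(g) for g in match.groups()))
        return DEFAULT

    print(respond("I am unhappy with my job"))
    # -> How long have you been unhappy with your job?

There is no plan, belief, or motive to point to inside respond(); whatever
"volition" the program seems to have is supplied by the person reading its
output, which is exactly the sense in which the AI ambition Naoki describes
was an illusion.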