I remember when I was 6 years old I physically punished my skis for making
me fall by hitting them with my ski poles and yelling at them. I do not want
to admit what I and other adults sometimes do when things go wrong with our
everyday objects because it may sound insane :-) I think we can
perceptually project an agency (an "other") onto anything that resists us.
We can also easily perceptually incorporate into our body any tool that
serves us well (remember Bateson's example of a blind man tapping the road
with a stick).
It is interesting that my son recently admitted that he can't play chess with
himself because he constantly cheats the other side. I remember having a
similar problem myself. I wonder if this is a universal phenomenon and whether
people can't honestly compete with an imaginary other.
What do you think?
Eugene
-----Original Message-----
From: maria judith <costlins@ism.com.br>
To: xmca@weber.ucsd.edu <xmca@weber.ucsd.edu>
Date: Monday, March 23, 1998 6:26 PM
Subject: Re: vvd AND contradication
>Hi Eugene and everybody,
>what do you think about the imaginary that people have about computers? We
>can think about this idea when you speak of the chess game and also when
>we talk with teachers who want to start using computers in classrooms as
>a magic tool. Thanks, Maria Judith Lins
>
>Eugene Matusov wrote:
>>
>> Hi Mike and everybody--
>>
>> Mike wrote,
>>
>> >
>> >Hi Eugene. Yes, the issue you raise concerning Vasiliy Vasil'evich and
>> >connectionism is very relevant. It came up in a different form in
>> >Yrjo's AT class. Is it possible to model dialectical logic in a computer
>> >program?
>> >
>>
>> I'm not sure I fully understand your question, or better to say, the context
>> in which you asked it. In my view, everything models dialectics simply
>> because dialectics tries to reflect everything and, thus, is reflected in
>> everything. For example, the relationship between computer software and
>> computer hardware is dialectical -- they mutually constitute each other and
>> can't exist without each other (computer hardware without software is an "empty
>> abstraction," as Davydov would say).
>>
>> A computer is a tool, a "cognitive amplifier," and as a tool it can help us to
>> increase our cognitive power of dialectical thinking. When we model an
>> ecological system with a computer, the computer is a part of our dialectical
>> thinking (although we do not have any other).
>>
>> If you asked me whether a computer (in our current understanding of computers)
>> can "think" or, better to say, become a self-organizing system, I'd say no,
>> although I believe that we humans can create an artificial self-organizing
>> system out of non-organic material, but on principles other than those on
>> which we build computers. When we say that the Big Blue computer (or whatever
>> it is called) beat Kasparov at chess, in my view, what we say is a metaphor.
>> In fact, the people who designed Big Blue won the chess tournament. They
>> won equipped with Big Blue. Do not get me wrong -- I think it is a
>> great achievement for machine builders. It proves that we can amplify the
>> cognitive power of chess players/computer builders to such a degree that
>> they (not the machine) can beat the strongest "naked" opponent through their
>> machine. This is the issue of agency, and I believe that only a self-organizing
>> system can be an agency. Computers do not have agency and probably won't
>> have it as long as they are built as a human tool.
>>
>> Why aren't computers self-organizing systems? I'm not a specialist on
>> self-organizing systems (John, Jay and other more knowledgeable people,
>> please help). My insights are JPF insights (Just Plain Folks -- a term I
>> picked up from Jean Lave):
>>
>> 1) "Bad news 1". A self-organizing system is highly concerned about its
>> existence. It's biased (e.g., it likes water and avoids acid), and being biased
>> means being alive. Computers are indifferent to their existence and functions.
>> Switch them off or on -- it makes no difference to them.
>>
>> 2) "Bad news 2". Parts of a self-organizing system die without the system
>> (unless the system is simulated). CPUs, hard disks, memory chips are
nicely
>> stored in computer stores without being damaged being outside the
computer.
>>
>> 3) "Good news 1". Both self-organizing systems and computers consist of
"the
>> same" indifferent matter.
>>
>> My conclusion: computers can't become a self-organizing agency because
>> currently they are built by humans to serve human agency rather than to be
>> an agency. The first principle of serving an agency is being non-resistant
>> (i.e., obedient -- "do what I want you to do") and, thus, indifferent,
>> which my computer nicely is (at least for now :-).
>>
>> What do you think?
>>
>> Eugene