Eugene Matusov wrote:
>
> Hi Mike and everybody--
>
> Mike wrote,
>
> >
> >Hi Eugene. Yes, the issue you raise concerning Vasiliy Vasil'evich and
> >connectionism is very relevant. It came up in a different form in
> >Yrjo's AT class. Is it possible to model dialectical logic in a computer
> >program?
> >
>
> I'm not sure I fully understand your question or, better to say, the context
> in which you asked it. In my view, everything models dialectics, simply
> because dialectics tries to reflect everything and, thus, is reflected in
> everything. For example, the relationship between computer software and
> computer hardware is dialectical -- they mutually constitute each other and
> can't exist without each other (computer hardware without software is an
> "empty abstraction," as Davydov would say).
>
> A computer is a tool, a "cognitive amplifier," and as a tool it can help us
> increase the cognitive power of our dialectical thinking. When we model an
> ecological system with a computer, the computer is a part of our dialectical
> thinking (although we do not have any other).
>
> If you asked me whether a computer (in our current understanding of
> computers) can "think" or, better to say, become a self-organizing system,
> I'd say no, although I believe that we, humans, can create an artificial
> self-organizing system out of non-organic material -- but on principles
> other than those on which we build computers. When we say that the Deep
> Blue computer beat Kasparov at chess, what we say, in my view, is a
> metaphor. In fact, the people who designed Deep Blue won the chess match.
> They won equipped with Deep Blue. Do not get me wrong -- I think it is a
> great achievement for machine builders. It proves that we can amplify the
> cognitive power of chess players/computer builders to such a degree that
> they (not the machine) can beat the strongest "naked" opponent through
> their machine. This is the issue of agency, and I believe that only a
> self-organizing system can be an agency. Computers do not have agency and
> probably won't have it as long as they are built as human tools.
>
> Why aren't computers self-organizing systems? I'm not a specialist on
> self-organizing systems (John, Jay, and other more knowledgeable people,
> please help). My insights are JPF insights (Just Plain Folks -- a term I
> picked up from Jean Lave):
>
> 1) "Bad news 1". A self-organizing system is highly concerned about its own
> existence. It is biased (e.g., it likes water and avoids acid), and being
> biased means being alive. Computers are indifferent to their existence and
> functions. Switch them off or on -- it makes no difference to them.
>
> 2) "Bad news 2". Parts of a self-organizing system die outside the system
> (unless the system is simulated). CPUs, hard disks, and memory chips sit
> nicely in computer stores, undamaged, outside any computer.
>
> 3) "Good news 1". Both self-organizing systems and computers consist of "the
> same" indifferent matter.
>
> My conclusion: computers can't become a self-organizing agency, because
> currently they are built by humans to serve human agency rather than to be
> an agency. The first principle of serving an agency is being non-resistant
> (i.e., obedient -- "do what I want you to do") and, thus, indifferent,
> which my computer nicely is (at least for now :-).
>
> What do you think?
>
> Eugene