
Re: [xmca] Message in a Bottle



Let's see. This computer is now producing a web "page", which is of course entirely an optical illusion produced by light-emitting diodes. It is located somewhere on a "site" which is really located nowhere at all. 
 
Now, if I were a hardware engineer, it would be useful for me to treat the page and the site as being somehow "in" the computer, while if I were a software engineer, it would probably be more useful for me to think of them as being a place out there which my browser has to "visit". 
 
But it doesn't EVER seem useful for me to imagine that the reality of the page and the site is simply in the way I pretend that they are there. I am not saying it isn't true (I suspect that whether or not it is true depends very much on what we mean by "reality" and by "true"). I'm just saying that it's a bit of cleverness that I can't really do anything with, like being able to read Chaucer with authentic Middle English pronunciation. 
 
And I think exactly the same thing is true of the realization that Jay is urging upon me, the realization that other people only have minds to the extent that we are all willing to pretend that we do. I think it is a very FUNCTIONALIST way of looking at consciousness, and so I am within my rights to demand to know what FUNCTION this kind of view might serve.
 
I'm afraid I don't believe for a nanosecond that it has some kind of moral function; that somehow I will be a more modest and unassuming sentient being if I recognize the potential sentience of chatbots and artificially intelligent artefacts. I might linger over the argument for a nanosecond more if there were any sustained attempt to teach the chatbot to be modest, unassuming and empathetic instead of narcissistic, manipulative and time-wasting. As far as I can see (in Friesen's data, just for example), there isn't.
 
It seems to me that the real issue at stake here is not morality but political economy. The middle class is, as Jay has so eloquently pointed out, in the process of being completely hollowed out. The chatbot and the phone tree are the twenty-first century equivalent of the Jacquard loom, and the "cruel" people that Bruce Robinson complains about are simply Luddites trying to protect their way of life. Me too. 
 
There's a methodological issue too. Throughout "Thinking and Speech", we find LSV doing a peculiar kind of triangulation; he gives a functional, a structural, and a genetic account all at the same time. For example, he argues that a word which FUNCTIONS like a concept can be a concept for others, but not a concept for me (because I am thinking of concrete groups of apples and not apples as opposed to hawthorns or crabapples). 
 
He argues that Piaget's attempt to look ONLY at logical relations and not the way in which concepts actually work or where they actually come from reduces his framework to empty STRUCTURALISM. And he tells us that two processes (e.g. thinking and speech) may have entirely different genetic roots (practical intelligence and animal communication) but nevertheless fuse and transform each other. So no one explanation is ever sufficient.
 
Certainly, the functionalist explanation of consciousness that Jay is proposing can never be enough. If it were, I don't think "Thinking and Speech" ever would have been written. The Turing Test, the Chinese Room, and Saussure's "speech circuit" (see p. 136 of the Friesen article for an almost pure example) would be all we know and all we need to know.
 
David Kellogg
Seoul National University of Education
 


--- On Sun, 6/7/09, Jay Lemke <jaylemke@umich.edu> wrote:


From: Jay Lemke <jaylemke@umich.edu>
Subject: Re: [xmca] Message in a Bottle
To: "eXtended Mind, Culture, Activity" <xmca@weber.ucsd.edu>
Date: Sunday, June 7, 2009, 3:08 PM


In between other things, I've tried to catch up a bit with some recent threads. I was interested particularly in some of the comments around chatbots, AIs, etc. in relation to humans, and esp. to the popular-with-philosophers "theory of mind" -- i.e. that people act on the belief that other people have something we're taught to call a/our mind.

That we learn to behave according to such a theory, a genuine folk-theory I'd say, I don't much doubt. That we actually have what the theory calls for, I don't much believe. Or, perhaps in more elaborate terms, the reality of minds is the reality we make by acting as if we and others had them ... which is after all a kind of reality like any other kind of reality, just not perhaps the kind most people think.

So if we treat robots or AIs or chatbots "as if" they had minds, then for many purposes they would have them. That people design them doesn't seem to me a big issue: we design one another in these respects (parents, elders, peers design us). And before long AIs will be designing other AIs no doubt (or mutating and editing them).

But, as in so much else, we consider OUR way of pretending to have a mind the REAL deal, and we look down on any other way. Out of arrogance, and in general, rather than looking to see how specifically in each case a way of behaving as-if-mindedly works, matters, can be engaged with for various purposes/functions.

What is our identity if we have to defend it by denying what we value in ourselves to every Other?

JAY.

Jay Lemke
Professor
Educational Studies
University of Michigan
Ann Arbor, MI 48109
www.umich.edu/~jaylemke




On Jun 5, 2009, at 1:46 PM, Steve Gabosch wrote:

> Hi Bruce,
> 
> In another thread David Kg makes an interesting point about how one has to stop driving an automobile to repair a mechanical problem, but one has to continue having a conversation to fix a misunderstanding.  He suggests that this is the essential difference between talking to a chatbot and two people engaging in voluntary conversation.
> 
> David then relates this point to discourse analysis (and discursive analysis, discussed in the Friesen article):
> 
> "This is actually how conversation analysis really works. We start from the premise that the tools for understanding and misunderstanding and fixing the misunderstanding are all right there in the conversation ... "
> 
> These insights may shed a little light on your experience, Bruce, that programming chatbots revealed that "users can be remarkably cruel, always looking for something that would cause the program to crash or give a silly answer..."
> 
> If the user can cause the conversation to break down in some way (logically or mechanically) and can therefore prove that the chatbot program is helpless to fix the problem, then hail to real people, who can actually carry on a real conversation and fix problems as they go!  So chalk one up for the human being versus the machine!  And although perhaps not consciously, score a point of protest against the social relationship this contention represents, the owners of these machines versus those who are exploited by them ...
> 
> Which is to say, commiserating with your experience, that trying to create such programs under these conditions and tensions is probably not always fun and games.  Sharp frustrations can emerge.
> 
> Turning to your discussion of ever-repeated arguments "in which claims are made for ... [AI devices such as chat-bots] being human-like ... which are then repudiated by others, both referring to abstract models of what it is to be human," I find myself thinking about Marx's analysis, often referred to in CHAT theory, about social relations in modern class-divided society appearing as relations between things.
> 
> The more perplexed they get about what it is to be human, and why society has the kinds of social relations it does, the more at least some people seem to turn to their **relations with objects** to try to make sense of human affairs.
> 
> This is done first of all by trying to construct, and theorize about, AI devices, and other kinds of "intelligent" machines.  And many also seem to appreciate building such things in **fiction**.  It has been the norm for a while now for science fiction to explore extreme possibilities of computer programs.  The Terminator and Matrix series are two examples.
> 
> Interestingly, there are at least a half a dozen books in print with essays analyzing the philosophical implications of the Matrix movies, some finding their way into philosophy courses, I understand.  This is part of a whole emerging genre of intellectual essays aimed at the general reader analyzing works in popular culture.  Fans are looking for and finding serious discussions of their favorite works.  This can be seen all over the internet, on Amazon.com, etc.
> 
> One of these essays I read a couple years ago made an interesting parallel between the Matrix program, which orchestrated a collective illusion of a common human "reality" while humans unknowingly slept in isolated compartments (in a world dominated by machines that were disgusted by humans) ... and Kant's view of phenomenon (that which is perceived through the senses) versus noumenon (that which exists objectively), which Kant denied was directly accessible to humans.  The Matrix movies do indeed provide an interesting twist on how to look at reality that makes one think ...
> 
> So it has become a kind of cultural norm in the US, UK, and other countries to take serious looks at fanciful computer-like objects, and through them, try to understand human life.  Long before supercomputers began beating the best human chess players (Deep Blue versus Kasparov in 1997, Deep Fritz versus Kramnik in 2006), people had been primed with the idea of computers being "smarter" than people.  You might have a chess program on your computer right now that can beat you and anyone you know!  So what? most would today say as they yawned ... it was just a matter of time ...
> 
> But that sentiment, while on one hand reflecting an unbounded optimism about human inventiveness, may on the other hand harbor a sense of defeatism.  In this defeatist view, this social system, which turns relations between humans into relations with things, is going to be around for a very, very long time.  Maybe not just human relations but human beings themselves, chess masters and all, will eventually be replaced by computers.  In fact, there are probably no few people in the world today who are out looking for work - or are worried they will soon have to - who may feel this is already well under way! ...
> 
> In a sense, David's definition of a human conversation as being by nature mutually voluntary has a parallel to one of Marx's greatest concepts, that the task of the emancipation of the working class is a task for the workers themselves.  Just as only a human can fix a conversation while having it, only the workers can fix their own future - and still have one.  Although they keep telling us we can't, humanity actually can have its cake, and eat it, too.  Or at least, it can bake new cakes - real ones that are truly noumenal and really phenomenal :-)) - while consuming the old ones.  And in doing so, I believe we shall master our computers, and finally be done with the silly notion that computers - and other social classes - can master us.
> 
> Cheers,
> - Steve
> 
> On Jun 3, 2009, at 6:18 AM, Bruce Robinson wrote:
> 
>> Steve,
>> 
>> I think that if the detractors, supporters and users of AI all began to think of things like chatbots simply as what you call 'sophisticated objects', or perhaps what we might call mediating artefacts, we would avoid the sort of ever-repeated argument in which claims are made for them being human-like, which are then repudiated by others, both sides referring to abstract models of what it is to be human. Users would then have a better understanding of what they can or cannot do, and those who see them as teaching assistants would have fewer legs to stand on. (Philosophers might lose an area of debate, though.)
>> 
>> In this respect, though I could agree with a lot of his argument, I don't think Friesen's article gets us much further towards understanding chatbots as artefacts - though to be fair that probably wasn't what he was trying to do. The example of dialogue he gave just makes the rather obvious point that they cannot converse like humans because they only have a restricted domain of operation. There are a lot more interesting questions about what might be needed to make chatbots more *useful* and what their potential and limitations are in this respect.
>> 
>> I also thought of ELIZA after reading the paper.  The gullibility of its users led Weizenbaum, its creator, to give up work in AI and instead become its critic. Maybe it only worked because Rogerian therapy is itself restricted and stereotyped in its responses, so that he had picked a domain in which it was relatively easy to create something that could appear intelligent.
>> 
>> As someone who did write programs in this area in the 80s, I don't know if I had a wicked sense of humour, but I did learn that users can be remarkably cruel, always looking for something that would cause the program to crash or give a silly answer...
>> 
>> Bruce
>> 
>> 
>> ----- Original Message ----- From: "Steve Gabosch" <stevegabosch@me.com>
>> To: "eXtended Mind, Culture, Activity" <xmca@weber.ucsd.edu>
>> Sent: Wednesday, June 03, 2009 11:54 AM
>> Subject: Re: [xmca] Message in a Bottle
>> 
>> 
>>> Your thought on chatbots copied here has had me thinking a little,
>>> David:
>>> 
>>> On May 26, 2009, at 6:49 PM, David Kellogg wrote:
>>> 
>>>> [For voluntary communication to take place] ... there has to be
>>>> exactly what is missing when a human pretends to communicate with a
>>>> chatbot ... : there has to be a theory of reciprocal willingness to
>>>> communicate based on the assumption that the other is a subject like
>>>> oneself. That is the key distinction between subject-subject
>>>> relations and subject-object relations that I think Leontiev ignored.
>>> 
>>> The chatbot example is a very good one to make your point.  As long as
>>> you play along and act as though (or perhaps even believe) that the
>>> computer program behind a chatbot represents a reciprocal willingness
>>> to communicate as a real person, you can keep up a real dialogue.
>>> 
>>> I remember a few years ago playing with the Eliza program, a chatbot
>>> developed in the 1960's that is alive and well on the internet.  This
>>> automated Rogerian-style therapist asks things like "how do you feel
>>> about that?"  It repeats things you say back in question formats that
>>> are designed to elicit you to talk more about yourself.  As long as
>>> you play along, it works surprisingly well, especially if you don't
>>> try to give it trick questions.  Doing this is an application of that
>>> subjective thing we so often do in the movies, the "suspension of
>>> disbelief."  At first, one may feel inclined to give the chatbot the
>>> benefit of the doubt, and actually try to seriously talk to it.  This
>>> kind of dialogue could even be a little therapeutic!  Maybe you could
>>> use a few moments to describe how you feel about something ...
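The pattern-and-reflect technique Steve describes - match a keyword pattern, swap the pronouns, and echo the fragment back as a question, falling back to a stock Rogerian prompt - can be sketched in a few lines. This is a hypothetical illustration, not Weizenbaum's actual ELIZA source; the patterns and responses are invented for the example.

```python
import re

# Pronoun swaps applied to the captured fragment (first person -> second person).
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

# (pattern, response template) pairs; the first match wins.
RULES = [
    (r"i feel (.*)", "Why do you feel {0}?"),
    (r"i am (.*)", "How long have you been {0}?"),
    (r"my (.*)", "Tell me more about your {0}."),
]

def reflect(fragment):
    # Swap first-person words for second-person ones, word by word.
    return " ".join(REFLECTIONS.get(w, w) for w in fragment.lower().split())

def respond(utterance):
    text = utterance.lower().strip(".!?")
    for pattern, template in RULES:
        m = re.match(pattern, text)
        if m:
            return template.format(reflect(m.group(1)))
    # Default Rogerian prompt when nothing matches.
    return "How do you feel about that?"

print(respond("I feel trapped by my computer"))
# -> Why do you feel trapped by your computer?
```

The default branch is what gives the illusion its resilience: whatever the user says, the program always has a plausible-sounding therapist's question ready - until, as Steve notes below, the user decides to stop playing along.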
>>> 
>>> But as soon as you become exasperated with your interlocutor being
>>> just a computer program, the communication breaks down.  And what
>>> happens next is just what you suggest: you no longer communicate as
>>> though there is reciprocal willingness from a fellow subject.  You now
>>> talk only as though you are speaking with a sophisticated object.  You
>>> may even get the impulse to devise ways to trick it into acting like
>>> the dumb machine you know it really is!  That is when you may discover
>>> that programmers can have a wicked sense of humor about these things ...
>>> 
>>> Your generalization about Leontiev makes me want to read where he
>>> spoke about subject-subject relations.  Given the general, mediational
>>> character of human activity, I am wondering, from a CHAT framework,
>>> what a "subject-subject" relation actually is.  Isn't culture
>>> (objects, artifacts, words, bodies, etc. etc.) always in the middle?
>>> 
>>> Cheers,
>>> - Steve
>>> 
>>> 
>>> _______________________________________________
>>> xmca mailing list
>>> xmca@weber.ucsd.edu
>>> http://dss.ucsd.edu/mailman/listinfo/xmca
>> 
>> 
>> 
> 
> 
> 




      