Re: [xmca] Message in a Bottle
- To: "eXtended Mind, Culture, Activity" <xmca@weber.ucsd.edu>
- Subject: Re: [xmca] Message in a Bottle
- From: Mike Cole <lchcmike@gmail.com>
- Date: Sat, 6 Jun 2009 16:40:30 -0700
That is a wonderfully thought-provoking set of reflections, Steve.
As it turns out, there is a lot of interest at UCSD in "intentional
objects" which, when embodied in complex computer programs, of which
Eliza was an early version, can provide the illusion of a real
conversation, of a "voluntarily acting, humanoid 'partner'".
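For the curious, the mechanism behind Eliza-style programs is remarkably
small: a table of patterns, a pronoun swap, and some canned question
templates. Here is a minimal sketch in Python of that kind of
pattern-reflection (the rules below are invented for illustration;
Weizenbaum's original script was richer):

import random
import re

# Swap first- and second-person words so a reply points back at the speaker.
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are",
               "you": "I", "your": "my"}

# A few hypothetical pattern rules plus a catch-all, tried in order.
RULES = [
    (re.compile(r"i feel (.*)", re.I), ["Why do you feel {0}?",
                                        "How long have you felt {0}?"]),
    (re.compile(r"i am (.*)", re.I), ["Why do you say you are {0}?"]),
    (re.compile(r"(.*)"), ["Please tell me more.",
                           "How do you feel about that?"]),
]

def reflect(fragment):
    # "nobody listens to me" -> "nobody listens to you"
    return " ".join(REFLECTIONS.get(word, word)
                    for word in fragment.lower().split())

def respond(utterance):
    # Return the first matching rule's reply, with the user's own words
    # reflected back as a question.
    for pattern, replies in RULES:
        match = pattern.match(utterance)
        if match:
            return random.choice(replies).format(reflect(match.group(1)))

print(respond("I feel nobody listens to me"))
# -> e.g. "Why do you feel nobody listens to you?"

There is no model of the other speaker anywhere in this, just string
substitution, which is exactly why the illusion collapses the moment the
user stops cooperating.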
David Kg's reminder that one has to stop a car to fix it but has to
continue a conversation to repair it also gets me thinking. Manny
Schegloff, of conversation analysis renown (and a former member of LCHC,
of all things!), gave a paper a few years ago on repair mechanisms in
conversation in which he proposed something like a "three strikes and I
am out of this conversation" rule. That is, if about three attempts at
repair within the conversation fail, the people involved give up and
change the topic. Or maybe they simply stop interacting; I forget the
details. If they simply change the topic, that would extend David's
thought in an interesting new direction.
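As I remember it, the rule is almost algorithmic. A toy sketch in Python
of the "three strikes" idea (my own reconstruction for illustration, not
Schegloff's formalism, and the threshold of three is the assumed one):

MAX_REPAIRS = 3  # assumed threshold, per the "three strikes" rule

def track_conversation(turns):
    # Each turn is either "ok" or "trouble" (a failed attempt at repair).
    failed_repairs = 0
    for turn in turns:
        if turn == "trouble":
            failed_repairs += 1
            if failed_repairs >= MAX_REPAIRS:
                return "abandon the topic (or stop interacting)"
        else:
            failed_repairs = 0  # a successful turn resets the count
    return "conversation continues"

print(track_conversation(["ok", "trouble", "trouble", "trouble"]))
# -> abandon the topic (or stop interacting)

A chatbot, of course, has no such exit rule: it will keep failing to
repair indefinitely, which may be part of what provokes users into the
cruelty Bruce describes below.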
The same line of thinking takes us back to Luria's combined motor method
where he claims that we know other minds through the selective
discoordination of voluntary, culturally mediated, joint behavior.
thanks
mike
On Fri, Jun 5, 2009 at 4:46 AM, Steve Gabosch <stevegabosch@me.com> wrote:
> Hi Bruce,
>
> In another thread David Kg makes an interesting point about how one has to
> stop driving an automobile to repair a mechanical problem, but one has to
> continue having a conversation to fix a misunderstanding. He suggests that
> this is the essential difference between talking to a chatbot, and two
> people engaging in voluntary conversation.
>
> David then relates this point to discourse analysis (and discursive
> analysis, discussed in the Friesen article):
>
> "This is actually how conversation analysis really works. We start from the
> premise that the tools for understanding and misunderstanding and fixing the
> misunderstanding are all right there in the conversation ... "
>
> These insights may shed a little light on your experience, Bruce, that
> programming chatbots revealed that "users can be remarkably cruel, always
> looking for something that would cause the program to crash or give a silly
> answer..."
>
> If the user can cause the conversation to break down in some way (logically
> or mechanically) and can therefore prove that the chatbot program is
> helpless to fix the problem, then hail to real people, who can actually carry
> on a real conversation and repair it as they go! So chalk one up for the
> human being versus the machine! And, although perhaps not consciously, score
> a point of protest against the social relationship this contention
> represents: the owners of these machines versus those who are exploited by
> them ...
>
> Which is to say, commiserating with your experience, that trying to create
> such programs under these conditions and tensions is probably not always fun
> and games. Sharp frustrations can emerge.
>
> Turning to your discussion of ever-repeated arguments "in which claims are
> made for ... [AI devices such as chat-bots] being human-like ... which are
> then repudiated by others, both referring to abstract models of what it is
> to be human," I find myself thinking about Marx's analysis, often referred
> to in CHAT theory, of how social relations in modern class-divided society
> appear as relations between things.
>
> The more perplexed people get about what it is to be human, and why society
> has the kinds of social relations it does, the more at least some of them
> seem to turn to their **relations with objects** to try to make sense of
> human affairs.
>
> This is done first of all by trying to construct, and theorize about, AI
> devices, and other kinds of "intelligent" machines. And many also seem to
> appreciate building such things in **fiction**. It has been the norm for a
> while now for science fiction to explore extreme possibilities of computer
> programs. The Terminator and Matrix series are two examples.
>
> Interestingly, there are at least half a dozen books in print with essays
> analyzing the philosophical implications of the Matrix movies, some finding
> their way into philosophy courses, I understand. This is part of a whole
> emerging genre of intellectual essays aimed at the general reader analyzing
> works in popular culture. Fans are looking for and finding serious
> discussions of their favorite works. This can be seen all over the
> internet, on Amazon.com, etc.
>
> One of these essays I read a couple years ago made an interesting parallel
> between the Matrix program, which orchestrated a collective illusion of a
> common human "reality" while humans unknowingly slept in isolated
> compartments (in a world dominated by machines that were disgusted by
> humans) ... and Kant's view of the phenomenon (that which is perceived
> through the senses) versus the noumenon (that which objectively exists),
> the latter of which Kant held to be directly inaccessible to humans. The
> Matrix movies
> do indeed provide an interesting twist on how to look at reality that makes
> one think ...
>
> So it has become a kind of cultural norm in the US, UK, and other countries
> to take serious looks at fanciful computer-like objects, and through them,
> try to understand human life. Long before supercomputers began beating the
> best human chess players (Deep Blue versus Kasparov in 1997, Deep Fritz
> versus Kramnik in 2006), people had been primed with the idea of computers
> being "smarter" than people. You might have a chess program on your
> computer right now that can beat you and anyone you know! "So what?" most
> would say today with a yawn ... it was just a matter of time ...
>
> But that sentiment, which on one hand may reflect an unbounded optimism
> about human inventiveness, may on the other hand harbor a sense of
> defeatism. In this defeatist view, the social system that turns relations
> between humans into relations with things is going to be around for a
> very, very long time. Maybe not just human relations but human beings
> themselves, chess masters and all, will eventually be replaced by
> computers. In fact, there are probably not a few people in the world
> today, out looking for work or worried they soon will be, who may feel
> this is already well under way! ...
>
> In a sense, David's definition of a human conversation as being by nature
> mutually voluntary has a parallel to one of Marx's greatest concepts, that
> the task of the emancipation of the working class is a task for the workers
> themselves. Just as only a human can fix a conversation while having it,
> only the workers can fix their own future - and still have one. Although
> they keep telling us we can't, humanity actually can have its cake, and eat
> it, too. Or at least, it can bake new cakes - real ones that are truly
> noumenal and really phenomenal :-)) - while consuming the old ones. And in
> doing so, I believe we shall master our computers, and finally be done with
> the silly notion that computers - and other social classes - can master us.
>
> Cheers,
> - Steve
>
> On Jun 3, 2009, at 6:18 AM, Bruce Robinson wrote:
>
> Steve,
>>
>> I think that if the detractors, supporters and users of AI all began to
>> just think of things like chatbots as what you call 'sophisticated objects'
>> or perhaps what we might call mediating artefacts, we would avoid the sort
>> of ever-repeated argument in which claims are made for them being human-like,
>> which are then repudiated by others, both referring to abstract models of
>> what it is to be human. Users would then have a better understanding of what
>> they can or cannot do and those who see them as teaching assistants would
>> have fewer legs to stand on. (Philosophers might lose an area of debate,
>> though.)
>>
>> In this respect, though I could agree with a lot of his argument, I don't
>> think Friesen's article gets us much further towards understanding chatbots
>> as artefacts - though to be fair that probably wasn't what he was trying to
>> do. The example of dialogue he gave just makes the rather obvious point that
>> they cannot converse like humans because they only have a restricted domain
>> of operation. There are a lot more interesting questions about what might be
>> needed to make chatbots more *useful* and what their potential and
>> limitations are in this respect.
>>
>> I also thought of ELIZA after reading the paper. The gullibility of its
>> users led Weizenbaum, its creator, to give up work in AI and instead become
>> its critic. Maybe it only worked because Rogerian therapy is restricted and
>> stereotyped too in its responses, so that he'd picked a domain in which it was
>> relatively easy to create something that could appear intelligent.
>>
>> As someone who did write programs in this area in the 80s, I don't know if
>> I had a wicked sense of humour, but I did learn that users can be remarkably
>> cruel, always looking for something that would cause the program to crash or
>> give a silly answer...
>>
>> Bruce
>>
>>
>> ----- Original Message ----- From: "Steve Gabosch" <stevegabosch@me.com>
>> To: "eXtended Mind, Culture, Activity" <xmca@weber.ucsd.edu>
>> Sent: Wednesday, June 03, 2009 11:54 AM
>> Subject: Re: [xmca] Message in a Bottle
>>
>>
>> Your thought on chatbots copied here has had me thinking a little,
>>> David:
>>>
>>> On May 26, 2009, at 6:49 PM, David Kellogg wrote:
>>>
>>> [For voluntary communication to take place] ... there has to be
>>>> exactly what is missing when a human pretends to communicate with a
>>>> chatbot ... : there has to be a theory of reciprocal willingness to
>>>> communicate based on the assumption that the other is a subject like
>>>> oneself. That is the key distinction between subject-subject
>>>> relations and subject-object relations that I think Leontiev ignored.
>>>>
>>>
>>> The chatbot example is a very good one to make your point. As long as
>>> you play along and act as though (or perhaps even believe that) the
>>> computer program behind a chatbot represents a reciprocal willingness
>>> to communicate as a real person, you can keep up a real dialogue.
>>>
>>> I remember a few years ago playing with the Eliza program, a chatbot
>>> developed in the 1960's that is alive and well on the internet. This
>>> automated Rogerian-style therapist asks things like "how do you feel
>>> about that?" It repeats things you say back in question formats that
>>> are designed to elicit you to talk more about yourself. As long as
>>> you play along, it works surprisingly well, especially if you don't
>>> try to give it trick questions. Doing this is an application of that
>>> subjective thing we so often do in the movies, the "suspension of
>>> disbelief." At first, one may feel inclined give the chatbot the
>>> benefit of the doubt, and actually try to seriously talk to it. This
>>> kind of dialogue could even be a little therapeutic! Maybe you could
>>> use a few moments to describe how you feel about something ...
>>>
>>> But as soon as you become exasperated with your interlocutor being
>>> just a computer program, the communication breaks down. And what
>>> happens next is just what you suggest: you no longer communicate as
>>> though there is reciprocal willingness from a fellow subject. You now
>>> talk only as though you are speaking with a sophisticated object. You
>>> may even get the impulse to devise ways to trick it into acting like
>>> the dumb machine you know it really is! That is when you may discover
>>> that programmers can have a wicked sense of humor about these things ...
>>>
>>> Your generalization about Leontiev makes me want to read where he
>>> spoke about subject-subject relations. Given the general, mediational
>>> character of human activity, I am wondering, from a CHAT framework,
>>> what a "subject-subject" relation actually is. Isn't culture
>>> (objects, artifacts, words, bodies, etc. etc.) always in the middle?
>>>
>>> Cheers,
>>> - Steve
>>>
>>>
>>
>
_______________________________________________
xmca mailing list
xmca@weber.ucsd.edu
http://dss.ucsd.edu/mailman/listinfo/xmca