
Re: [xmca] Message in a Bottle



Meantime, a local not-chatbot has created a bot slayer to protect xmca.
Otherwise, before we know it, Borders will be advertising their books
where YOUR books should be featured. As always, volunteer help in
maintaining xmca would be appreciated.
mike

On Wed, Jun 3, 2009 at 6:18 AM, Bruce Robinson <bruce@brucerob.eu> wrote:

> Steve,
>
> I think that if the detractors, supporters and users of AI all began to
> just think of things like chatbots as what you call 'sophisticated
> objects', or perhaps what we might call mediating artefacts, we would
> avoid the sort of ever-repeated argument in which claims are made for
> them being human-like, which are then repudiated by others, with both
> sides referring to abstract models of what it is to be human. Users
> would then have a better understanding of what they can or cannot do,
> and those who see them as teaching assistants would have fewer legs to
> stand on. (Philosophers might lose an area of debate, though.)
>
> In this respect, though I could agree with a lot of his argument, I don't
> think Friesen's article gets us much further towards understanding chatbots
> as artefacts - though to be fair that probably wasn't what he was trying to
> do. The example of dialogue he gave just makes the rather obvious point that
> they cannot converse like humans because they only have a restricted domain
> of operation. There are a lot more interesting questions about what might be
> needed to make chatbots more *useful* and what their potential and
> limitations are in this respect.
>
> I also thought of ELIZA after reading the paper.  The gullibility of its
> users led Weizenbaum, its creator, to give up work in AI and instead become
> its critic. Maybe it only worked because Rogerian therapy is restricted and
> stereotyped too in its responses, so he had picked a domain in which it was
> relatively easy to create something that could appear intelligent.
>
> As someone who did write programs in this area in the 80s, I don't know if
> I had a wicked sense of humour, but I did learn that users can be remarkably
> cruel, always looking for something that would cause the program to crash or
> give a silly answer...
>
> Bruce
>
>
> ----- Original Message ----- From: "Steve Gabosch" <stevegabosch@me.com>
> To: "eXtended Mind, Culture, Activity" <xmca@weber.ucsd.edu>
> Sent: Wednesday, June 03, 2009 11:54 AM
> Subject: Re: [xmca] Message in a Bottle
>
>
>
>> Your thought on chatbots copied here has had me thinking a little,
>> David:
>>
>> On May 26, 2009, at 6:49 PM, David Kellogg wrote:
>>
>>> [For voluntary communication to take place] ... there has to be
>>> exactly what is missing when a human pretends to communicate with a
>>> chatbot ... : there has to be a theory of reciprocal willingness to
>>> communicate based on the assumption that the other is a subject like
>>> oneself. That is the key distinction between subject-subject
>>> relations and subject-object relations that I think Leontiev ignored.
>>>
>>
>> The chatbot example is a very good one for making your point. As long
>> as you play along and act as though (or perhaps even believe that) the
>> computer program behind a chatbot represents a reciprocal willingness
>> to communicate as a real person, you can keep up a real dialogue.
>>
>> I remember a few years ago playing with the ELIZA program, a chatbot
>> developed in the 1960s that is alive and well on the internet. This
>> automated Rogerian-style therapist asks things like "how do you feel
>> about that?" It repeats things you say back in question formats
>> designed to prompt you to talk more about yourself. As long as you
>> play along, it works surprisingly well, especially if you don't ask
>> it trick questions. Doing this is an application of that subjective
>> thing we so often do in the movies, the "suspension of disbelief."
>> At first, one may feel inclined to give the chatbot the benefit of
>> the doubt, and actually try to talk to it seriously. This kind of
>> dialogue could even be a little therapeutic! Maybe you could use a
>> few moments to describe how you feel about something ...
>>
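>> (A minimal sketch of that reflection trick, in Python - purely
>> illustrative and not ELIZA's actual script; the patterns and the
>> pronoun table below are toy examples of my own:)
>>
>> import re
>> import random
>>
>> # Toy ELIZA-style responder: match a pattern, swap pronouns in the
>> # captured fragment, and hand the user's words back as a question.
>> REFLECTIONS = {
>>     "i": "you", "me": "you", "my": "your", "am": "are",
>>     "you": "I", "your": "my", "yours": "mine",
>> }
>>
>> PATTERNS = [
>>     (r"i feel (.*)", ["Why do you feel {0}?",
>>                       "How long have you felt {0}?"]),
>>     (r"i am (.*)", ["How do you feel about being {0}?"]),
>>     (r".*", ["Please tell me more.", "How do you feel about that?"]),
>> ]
>>
>> def reflect(fragment):
>>     # "trapped by my work" -> "trapped by your work"
>>     return " ".join(REFLECTIONS.get(word, word)
>>                     for word in fragment.lower().split())
>>
>> def respond(statement):
>>     # First matching pattern wins; the final ".*" is the catch-all.
>>     for pattern, replies in PATTERNS:
>>         match = re.match(pattern, statement.lower().strip(".!?"))
>>         if match:
>>             reply = random.choice(replies)
>>             return reply.format(*(reflect(g) for g in match.groups()))
>>
>> print(respond("I feel trapped by my work"))
>> # e.g. "Why do you feel trapped by your work?"
>>
>> The whole illusion lives in reflect(); the moment a sentence falls
>> outside the pattern list, the catch-all reply gives the game away.
>>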
>> But as soon as you become exasperated with your interlocutor being
>> just a computer program, the communication breaks down.  And what
>> happens next is just what you suggest: you no longer communicate as
>> though there is reciprocal willingness from a fellow subject.  You now
>> talk only as though you are speaking with a sophisticated object.  You
>> may even get the impulse to devise ways to trick it into acting like
>> the dumb machine you know it really is!  That is when you may discover
>> that programmers can have a wicked sense of humor about these things ...
>>
>> Your generalization about Leontiev makes me want to read where he
>> spoke about subject-subject relations.  Given the general, mediational
>> character of human activity, I am wondering, from a CHAT framework,
>> what a "subject-subject" relation actually is.  Isn't culture
>> (objects, artifacts, words, bodies, etc. etc.) always in the middle?
>>
>> Cheers,
>> - Steve
>>
>>
>
>
>
_______________________________________________
xmca mailing list
xmca@weber.ucsd.edu
http://dss.ucsd.edu/mailman/listinfo/xmca