[Xmca-l] Re: Interesting article on robots and social learning

Greg Thompson greg.a.thompson@gmail.com
Sat Jul 14 08:15:27 PDT 2018


Andy, thanks for sending this, since it alerted me to Doug's message (which
seems not to have been included in this thread for me, so this is the
first time I'm seeing it - not sure if the XMCA list is "playing with us"
or something...)

Doug,
I agree with what you have pointed to here as far as the important role of
embodiment and social and cultural embeddedness. Would you mind sharing
the whimsical paper that you mentioned?

Also, one related line of thought: I wonder how good AI has been at
thinking about what John Searle calls "social ontology". This refers to the
social worlds that all humans inhabit. For far too long these were
considered phantasmic worlds, worlds that were "socially constructed"
and therefore unreal. But recent thinking in the social sciences (Bruno
Latour, among others) has pushed people to take these social constructions
much more seriously.

As I understand (rather dimly) human development, one of the critical
aspects of development (typically accomplished between the ages of 7 and 9) is the
coming into awareness of these culturally particular social reals
(ontologies). The result of this learning is that the adolescent encounters
the world not simply as it is but as others recognize it to be. From this
developmental perspective, the child in the story of the Emperor's new
clothes has failed to reach this basic developmental stage - he doesn't see
the world as others see it, he sees it as it is - the Emperor is naked! As
a matter of modeling AI, what is needed is for the machine to be able to
see what is not there (in a simplistic scientistic sense), namely the world
as others see it. This will require AI modelers to let go of their
scientistic sensibilities (which I assume they have) and build machines
that can see the world not as the pre-cultural child sees it, but which can
grasp the complex culturally particular social worlds that we inhabit (yes,
full of feeling, but also full of role relations and all kinds of "being"
that aren't there). I suspect that most AI developers would prefer to model
an understanding of the world "as it is" (i.e., scientistic) rather than as
others consider it to be. To my mind that means neglecting all the
"ratcheting power" of human culture (as Tomasello described it). The
result, I suspect, is that AI would never begin to approach human
consciousness (perhaps there will be some other form of AI-consciousness,
but for it to be human, it must be cultural, with all the
non-scientific-ness that entails). But perhaps that's a good thing (i.e.,
I'm not going to be the one to tell them this!)

Anyway, I really appreciate your contribution Doug (and I'm not sure why I
didn't see it before Andy responded to it).

Very best,
greg







On Sat, Jul 14, 2018 at 9:58 PM, Andy Blunden <andyb@marxists.org> wrote:

> I understand that the Turing Test is one which AI people can use to
> measure the success of their AI - if you can't tell the difference between
> a computer and a human interaction then the computer has passed the Turing
> test. I tend to rely on a kind of anti-Turing Test, that is, that if you
> can tell the difference between the computer and the human interaction,
> then you have passed the anti-Turing test, that is, you know something
> about humans.
>
> Andy
> ------------------------------
> Andy Blunden
> http://www.ethicalpolitics.org/ablunden/index.htm
> On 14/07/2018 1:12 PM, Douglas Williams wrote:
>
> Hi--
>
> I think I'll come out of lurking for this one. Actually, what you're
> talking about with this pain algorithm system sounds like a modeling system
> that someone might need to develop what Alan Turing described as a P-type
> computing device. A P-type computer would receive its programming from
> inputs of pleasure and pain. It was probably derived from reading some of
> the behaviorist models of mind at the time. Turing thought that he was
> probably pretty close to being able to develop such a computing device,
> which, because its input was similar, could model human thought. The ELIZA
> Rogerian-analysis computer program was another early idea, in which the
> goal was to model the patterns of human interaction, and gradually approach
> closer to human thought and interaction that way. And by the 2000's, the
> idea of the "singularity" was afloat, in which one could model human minds
> so well as to enable a human to be uploaded into a computer, and live
> forever as software (Kurzweil, 2005). But given that we barely had a
> sufficient model of mind to say Boo with at the time (What is
> consciousness? Where does intention come from? What is the balance of
> nature/nurture in motivation? In speech utterances? And so on)--and, you're
> right, AI doesn't have much of a theory of emotion, either--the goal of
> computer software modeling human thought seemed very far away to me.
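>
> (Purely as an illustration of the idea--this is my own toy sketch in
> Python, not Turing's own specification--a "P-type"-style learner might
> look like this, where the only "programming" is a pleasure/pain signal
> supplied from outside:)
>
>     import random
>
>     # Toy pleasure/pain-trained agent: behavior is shaped only by external
>     # reward (+1, "pleasure") and punishment (-1, "pain") signals.
>     ACTIONS = ["A", "B", "C"]
>     value = {a: 0.0 for a in ACTIONS}      # learned worth of each action
>
>     def choose():
>         # mostly exploit what has been rewarded, occasionally explore
>         if random.random() < 0.1:
>             return random.choice(ACTIONS)
>         return max(ACTIONS, key=value.get)
>
>     def reinforce(action, signal, rate=0.2):
>         # pleasure raises an action's value, pain lowers it
>         value[action] += rate * (signal - value[action])
>
>     # the "trainer" happens to approve only of action B
>     for _ in range(200):
>         act = choose()
>         reinforce(act, +1.0 if act == "B" else -1.0)
>
>     print(value)   # B ends up with by far the highest value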
>
> At someone's request, I wrote a rather whimsical paper called "What is
> Artificial Intelligence?" back in 2006 about such things. My argument was
> that statistically modeling human interaction and capturing thought was
> not so easy after all, precisely because of the parts of mind we don't
> think of, and the social interactions that, at the time, were not a primary
> focus. I mused about that in the context of my trying to write a computer
> program by applying Chomsky's syntactic structures to interpret the intention
> of a few simple questions--without, alas, in my case, a corpus-supported
> Markov chain logic to do it. Generative grammar would take care of it,
> right? Wrong.
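>
> (For flavour, and only as a latter-day toy reconstruction in Python with
> NLTK--this is not my original program--here is the sort of thing a
> generative grammar happily does: it parses the form of a simple question,
> while saying nothing at all about the asker's intention.)
>
>     import nltk
>
>     # A tiny phrase-structure grammar for a couple of simple questions.
>     grammar = nltk.CFG.fromstring("""
>       Q   -> WH AUX NP VP
>       WH  -> 'where' | 'when'
>       AUX -> 'is' | 'was'
>       NP  -> DET N | 'it'
>       VP  -> V | V NP
>       DET -> 'the'
>       N   -> 'library' | 'book'
>       V   -> 'open' | 'written'
>     """)
>
>     parser = nltk.ChartParser(grammar)
>     for tree in parser.parse("where is the library open".split()):
>         print(tree)
>     # The parse tree captures the syntax, but the *intention* behind the
>     # question is nowhere in it.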
>
> So as someone who had made a little primitive, incompetent attempt at
> speech modeling myself, and in the light of my later-acquired knowledge of
> CHAT, Burke, Bakhtin, Mead, and various other people in different fields,
> and of the tendency of people to interact with the world through
> cognitive biases, complexes, and embodied perceptions that were not readily
> available to artificial systems, I didn't think the singularity was so near.
>
> The terrible thing about computer programs is that they do just what you
> tell them to do, and no more. They have no drive to improve, except as
> programmed. When they do improve, their creativity is limited. And the
> approach now still substantially is pattern-recognition based. The current
> paradigm is something called Convolutional Neural Network Long Short-Term
> Memory Networks (CNN/LSTM) for speech recognition, in which the
> convolutional neural networks reduce the variants of speech input into
> manageable patterns, and the LSTMs handle the temporal processing (the
> temporal patterns of the real-world phenomena to which the AI system is
> responding). But while such
> systems combined with natural language processing can increasingly mimic
> human response, and "learn" on their own, and while they are approaching
> the "weak" form of artificial general intelligence (AGI), the intelligence
> needed for a machine to perform any intellectual task that a human being
> can, they are an awfully long way from "strong" AGI--that is, something
> approaching human consciousness. I think that's because they are a long way
> from capturing the kind of social embeddedness of almost all animal
> behavior, and the sense in which human cognition is embedded in the messy
> things, like emotion. A computer algorithm can recognize the patterns of
> emotion, but that's it. An AGI system that can experience emotions, or have
> motivation, is quite another thing entirely.
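>
> (To make the shape of that paradigm concrete, here is a bare-bones sketch
> in Python/PyTorch of a convolutional front end feeding an LSTM--purely an
> illustration of the architecture, not any particular production speech
> system: the convolution compresses local variation in the input frames,
> the LSTM tracks their temporal patterning, and a final layer emits a
> label. Nothing in it experiences anything.)
>
>     import torch
>     import torch.nn as nn
>
>     class CnnLstm(nn.Module):
>         def __init__(self, n_features=40, n_classes=10):
>             super().__init__()
>             # convolution smooths/compresses local variation in the frames
>             self.conv = nn.Conv1d(n_features, 64, kernel_size=5, padding=2)
>             # LSTM models the temporal patterning of the sequence
>             self.lstm = nn.LSTM(64, 128, batch_first=True)
>             self.out = nn.Linear(128, n_classes)
>
>         def forward(self, x):                  # x: (batch, time, features)
>             h = torch.relu(self.conv(x.transpose(1, 2)))  # (batch, 64, time)
>             h, _ = self.lstm(h.transpose(1, 2))           # (batch, time, 128)
>             return self.out(h[:, -1])                     # last step -> label scores
>
>     model = CnnLstm()
>     scores = model(torch.randn(8, 100, 40))    # 8 utterances, 100 frames, 40 features
>     print(scores.shape)                        # torch.Size([8, 10])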
>
> I can tell you that AI confidence is still there. When I recently raised
> questions about cultural and physical embodiment in artificial intelligence
> interactions with someone in the field, he dismissed the idea as not
> being that relevant. His thought was that "what I find essential is that we
> acknowledge that there's no obvious evidence supporting that the current
> paradigm of CNN/LSTM under various reinforcement algorithms isn't enough
> for A AGI and in particular for broad animal-like intelligence like that of
> ravens and dogs."
>
> But ravens and dogs are embedded in social interaction, in intentionality,
> in consciousness--qualitatively different than ours, maybe, but there. Dogs
> don't do what you ask them to, always. When they do things, they do them
> for their own intentionality, which may be to please you, or may be to do
> something you never asked the dog to do, which is either inherent in its
> nature, or an expression of social interactions with you or others, many of
> which you and they may not be consciously aware of. The deep structure of
> metaphor, the spatiotemporal relations of language that Langacker describes
> as being necessary for construal, the worlds of narrativized experience,
> are mostly outside of the reckoning, so far as I know (though I'm not an
> expert--I could be at least partly wrong) of the current CNN/LSTM paradigm.
>
> My old interlocutor in thinking about my language program, Noam Chomsky,
> has been a pretty sharp critic of the pattern recognition approach to
> artificial intelligence.
>
> Here's Chomsky's take on the idea:
>
> http://languagelog.ldc.upenn.edu/myl/PinkerChomskyMIT.html
>
> And here's Peter Norvig's response; he's a director of research at Google,
> where Kurzweil is, and where, I assume, they are as close to the strong
> version of artificial general intelligence as anyone out there...
>
> http://norvig.com/chomsky.html
>
> Frankly, I would be quite interested in what you think of these things.
> I'm merely an Isaiah Berlin fox, chasing to and fro at all the pretty ideas
> out there. But you, many of you, are, I suspect, the untapped hedgehogs
> whose ideas on these things would see more readily what I dimly grasp must
> be required, not just for achieving a strong AGI, but for achieving
> something that we would see as an ethical, reasonable artificial mind that
> expands human experience, rather than becomes a prison that reduces human
> interactions to its own level.
>
> My own thinking is that lately, Cognitive Metaphor Theory (CMT), which I
> knew more of in its earlier (now "standard model") days, is getting even
> more interesting than it was. I'd done a transfer term to UC Berkeley to
> study with George Lakoff, but we didn't hit it off well, perhaps because I
> kept asking him questions about social embeddedness and similarities to
> Vygotsky's theory of complex thought, and was more interested in linking out
> from his approach than in folding in. It seems that
> the idea I was rather woolily suggesting to Lakoff back then has caught on:
> namely, that utterances could be explored for cultural variation and
> historical embeddedness, a form of social context to the narratives and
> metaphors and blended spaces that underlay speech utterances and thought;
> that there was a degree of social embodiment as well as physiological
> embodiment through which language operated. I thought then, and it looks
> like some other people are now thinking, that someone seeking really to
> understand utterances (as a strong AGI system would need to do) would need
> to engage in internalizing and ventriloquizing a form of Geertz's thick
> description of interactions. In such forms, words do not mean what they
> say, and can have different affect that is a bit more complex than I think
> temporal processing currently addresses.
>
> I think these are the kind of things that artificial intelligence would
> need truly to advance, and that Bakhtin and Vygotsky and Leont'ev and, in
> the visual world, Eisenstein were addressing all along...
>
> And, of course, you guys.
>
> Regards,
> Douglas Williams
>
>
>
> On Tuesday, July 3, 2018, 10:35:45 AM PDT, David H Kirshner
> <dkirsh@lsu.edu> wrote:
>
>
> The other side of the coin is that ineffable human experience is becoming
> more effable.
>
> Computers can now look at a human brain scan and determine the degree of
> subjectively experienced pain:
>
>
>
> In 2013, Tor Wager, a neuroscientist at the University of Colorado,
> Boulder, took the logical next step by creating an algorithm that could
> recognize pain’s distinctive patterns; today, it can pick out brains in
> pain with more than ninety-five-per-cent accuracy. When the algorithm is
> asked to sort activation maps by apparent intensity, its ranking matches
> participants’ subjective pain ratings. By analyzing neural activity, it can
> tell not just whether someone is in pain but also how intense the
> experience is.
>
>
>
> So, perhaps the computer can’t “feel our pain,” but it can sure “sense our
> pain!”
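>
> (The classification step, in spirit at least--this is a generic
> illustration in Python with scikit-learn on synthetic data, not Wager's
> actual pipeline--amounts to fitting a linear model to voxel activation
> patterns and checking it on held-out scans, along these lines:)
>
>     import numpy as np
>     from sklearn.linear_model import LogisticRegression
>     from sklearn.model_selection import cross_val_score
>
>     # Synthetic stand-in data: flattened activation maps, 1 = painful stimulus
>     rng = np.random.default_rng(0)
>     n_scans, n_voxels = 200, 5000
>     X = rng.normal(size=(n_scans, n_voxels))
>     y = rng.integers(0, 2, size=n_scans)
>     X[y == 1, :50] += 0.5          # give the "pain" scans a weak signature
>
>     clf = LogisticRegression(max_iter=1000)
>     print(cross_val_score(clf, X, y, cv=5).mean())   # held-out accuracy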
>
>
>
> Here’s the full article:
>
> https://www.newyorker.com/magazine/2018/07/02/the-neuroscience-of-pain
>
>
>
> David
>
>
>
> *From:* xmca-l-bounces@mailman.ucsd.edu <xmca-l-bounces@mailman.ucsd.edu>
> *On Behalf Of *Glassman, Michael
> *Sent:* Tuesday, July 3, 2018 8:16 AM
> *To:* eXtended Mind, Culture, Activity <xmca-l@mailman.ucsd.edu>
> *Subject:* [Xmca-l] Re: Interesting article on robots and social learning
>
>
>
>
>
>
>
> It seems like we are still having the same argument as when robots first
> came on the scene.  In response to John McCarthy, who was claiming that
> eventually robots could have belief systems and motivations similar to humans
> through AI, John Searle wrote the Chinese room argument.  There have been a
> lot of responses to the Chinese room over the years, and a number of digital
> philosophers claim it is no longer salient, but I don’t think anybody has
> ever effectively answered his central question.
>
>
>
> Just a quick recap.  You come to a closed door and know there is a person
> on the other side. To communicate, you decide to teach the person on the
> other side Chinese. You do this by continuously exchanging rule systems
> under the door.  After a while you are able to have a conversation with the
> individual in perfect Chinese. But does that person actually know Chinese
> just from the rule systems?  I think Searle’s major point is: are you really
> learning if you don’t know why you’re learning, or are you just repeating?
> Learning is embedded in the human condition, and the reason it works so well
> and is adaptable is because we understand it when we use what we learn in
> the world in response to others.  To put it in response to the post, does a
> bomb-defusing robot really learn how to defuse a bomb if it does not know
> why it is doing it?  It might cut the right wires at the right time, but it
> doesn’t understand why, and therefore is not doing the task, just a series of
> steps it has been able to absorb.  Is that the opposite of human learning?
>
>
>
> What the researcher did really isn’t that special at this point.  Well, I
> definitely couldn’t do it, and it is amazing, but it is in essence a
> miniature version of Libratus (which beat experts at Texas Hold ’em) and
> AlphaGo (which beat the second-best Go player in the world).  My guess is it
> is the same use of deep learning, in which the program integrates new
> information into what it is already capable of.  If machines can learn from
> interacting with other humans, then they can learn from interacting with
> other machines.  It is the same principle (though much, much simpler in
> this case).  The question is what it means.  Are we defining learning
> down because of the zeitgeist?  Greg started his post saying a
> socio-cultural theorist might be interested in this research.  I wonder if
> they might be more likely to be the ones putting on the brakes, asking questions
> about it.
>
>
>
> Michael
>
>
>
> *From:* xmca-l-bounces@mailman.ucsd.edu <xmca-l-bounces@mailman.ucsd.edu> *On
> Behalf Of *Andy Blunden
> *Sent:* Tuesday, July 03, 2018 7:04 AM
> *To:* xmca-l@mailman.ucsd.edu
> *Subject:* [Xmca-l] Re: Interesting article on robots and social learning
>
>
>
> Does a robot have "motivation"?
>
> andy
> ------------------------------
>
> Andy Blunden
> http://www.ethicalpolitics.org/ablunden/index.htm
>
> On 3/07/2018 5:28 PM, Rod Parker-Rees wrote:
>
> Hi Greg,
>
>
>
> What is most interesting to me about the understanding of learning which
> informs most AI projects is that it seems to assume that affect is
> irrelevant. The role of caring, liking, worrying etc. in social learning
> seems to be almost universally overlooked because information is seen as
> something that can be ‘got’ and ‘given’ more than something that is
> distributed in relationships.
>
>
>
> Does anyone know about any AI projects which consider how machines might
> feel about what they learn?
>
>
>
> All the best,
>
>
> Rod
>
>
>
> *From:* xmca-l-bounces@mailman.ucsd.edu <xmca-l-bounces@mailman.ucsd.edu>
> *On Behalf Of *Greg Thompson
> *Sent:* 03 July 2018 02:50
> *To:* eXtended Mind, Culture, Activity <xmca-l@mailman.ucsd.edu>
> *Subject:* [Xmca-l] Interesting article on robots and social learning
>
>
>
> I’m ambivalent about this project but I suspect that some young CHAT
> scholar out there could have a lot to contribute to a project like this one:
>
>
> https://www.sapiens.org/column/machinations/artificial-intelligence-culture/
>
>
>
> -Greg
>
> --
>
> Gregory A. Thompson, Ph.D.
>
> Assistant Professor
>
> Department of Anthropology
>
> 880 Spencer W. Kimball Tower
>
> Brigham Young University
>
> Provo, UT 84602
>
> WEBSITE: greg.a.thompson.byu.edu
> http://byu.academia.edu/GregoryThompson
>
>
>
>
>


-- 
Gregory A. Thompson, Ph.D.
Assistant Professor
Department of Anthropology
880 Spencer W. Kimball Tower
Brigham Young University
Provo, UT 84602
WEBSITE: greg.a.thompson.byu.edu
http://byu.academia.edu/GregoryThompson