[Xmca-l] Re: Interesting article on robots and social learning
Andy Blunden
andyb@marxists.org
Sat Jul 14 18:55:09 PDT 2018
I think we go back to Martin's earlier ironic comment here,
Michael.
Andy
------------------------------------------------------------
Andy Blunden
http://www.ethicalpolitics.org/ablunden/index.htm
On 15/07/2018 9:44 AM, Glassman, Michael wrote:
>
> The Turing test, at least the test he wrote about in his
> article, is actually a bit more complicated than this, and
> especially poignant today. Turing’s test of whether
> computers are acting as humans was based on an old English
> game show called The Lying Game (I suppose one of the
> reasons for the title of the movie on Turing, though of
> course it had multiple meanings; but for some reason they
> never mentioned the origin of the phrase in the movie).
> Anyway, in the lying game the contestant had to listen to
> two individuals, one of whom was telling the truth about
> the situation and one of whom was lying. The way Turing
> describes it, it sounds quite brutal. The contestant had
> to figure out who the liar was (there was a similar, much
> milder version years later in the US). Anyway, Turing’s
> proposal, if I remember correctly, was that a computer
> could be considered to be thinking like a human if the
> computer the contestant was listening to was lying and he
> or she couldn’t tell. In essence the computer would
> successfully lie. Everybody thinks Turing believed that
> computers would eventually think like humans, but my
> reading of the article is that he had no idea; as computers
> stood at the time, there was no chance.
>
>
>
> The reason this is so poignant is the Mueller indictments
> that came down yesterday. For those outside the U.S. or
> not following the news, the indictments were against
> Russian military officers leading a scheme to convince
> individuals of lies about various actors in the 2016
> election (along with timed releases of information and
> breaking into voting systems). But it is the propagation
> of lies by robots and people believing them that interests
> me. I feel like we aren’t putting enough thought into
> that. Many of the people receiving the information could
> not tell it was not from humans and believed it, even
> though in many cases it was generated by robots, passing,
> it seems to me, Turing’s test. How and why did this
> happen? Of course Turing died before the Internet, so he
> couldn’t have known about it. But I wonder if part of the
> reason the robots were successful is that they have the
> ability to mine, collect and aggregate people’s biases and
> then reflect them back to us. We tend to engage with and
> believe things in the context of our own biases. They say
> in salesmanship that the trick is figuring out what people
> want to hear and then couching whatever you want to say in
> that. Trump is a master of reading what a group of people
> want to hear at the moment, their biases, and then
> mirroring it back to them.
>
>
>
> If we went back to the Chinese room and the person inside
> was able to read our biases from our messages, would they
> then be human?
>
>
>
> We live in a strange age.
>
>
>
> *From:* xmca-l-bounces@mailman.ucsd.edu
> <xmca-l-bounces@mailman.ucsd.edu> *On Behalf Of* Andy Blunden
> *Sent:* Saturday, July 14, 2018 8:58 AM
> *To:* xmca-l@mailman.ucsd.edu
> *Subject:* [Xmca-l] Re: Interesting article on robots and
> social learning
>
>
>
> I understand that the Turing Test is one which AI people
> can use to measure the success of their AI - if you can't
> tell the difference between a computer and a human
> interaction then the computer has passed the Turing test.
> I tend to rely on a kind of anti-Turing Test, that is,
> that if you can tell the difference between the computer
> and the human interaction, then you have passed the
> anti-Turing test, that is, you know something about humans.
>
> Andy
>
> ------------------------------------------------------------
>
> Andy Blunden
> http://www.ethicalpolitics.org/ablunden/index.htm
>
> On 14/07/2018 1:12 PM, Douglas Williams wrote:
>
> Hi--
>
> I think I'll come out of lurking for this one.
> Actually, what you're talking about with this pain
> algorithm system sounds like a modeling system that
> someone might need to develop what Alan Turing
> described as a P-type computing device. A P-type
> computer would receive its programming from inputs of
> pleasure and pain. It was probably derived from
> reading some of the behaviorist models of mind at
> the time. Turing thought that he was probably pretty
> close to being able to develop such a computing
> device, which, because its input was similar, could
> model human thought. The Eliza Rogerian analysis
> computer program was another early idea in which the
> goal was to model the patterns of human interaction,
> and gradually approach closer to human thought and
> interaction that way. And by the 2000's, the idea of
> the "singularity" was afloat, in which one could model
> human minds so well as to enable a human to be
> uploaded into a computer, and live forever as software
> (Kurzweil, 2005). But given that we barely had a
> sufficient model of mind to say Boo with at the time
> (what is consciousness? where does intention come
> from? What is the balance of nature/nurture in
> motivation? Speech utterances? and so on), and you're
> right, AI doesn't have much of a theory of emotion,
> either--the goal of computer software modeling human
> thought seemed very far away to me.
>
>
>
> At someone's request, I wrote a rather whimsical paper
> called "What is Artificial Intelligence?" back in 2006
> about such things. My argument was that statistical
> modeling of human interaction and capturing thought
> was not so easy after all, precisely because of the
> parts of mind we don't think of, and the social
> interactions that, at the time, were not a primary
> focus. I mused about that in the context of my trying
> to write a computer program by applying Chomsky's
> syntactic structures to interpret the intention of a few
> simple questions--without, alas, in my case, a
> corpus-supported Markov chain logic to do it.
> Generative grammar would take care of it, right? Wrong.
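>
> (By "corpus-supported Markov chain logic" I mean, roughly,
> something like the toy bigram model sketched below in
> Python. It is only an illustration of the idea; the
> two-sentence "corpus" is obviously invented.)
>
> # Toy illustration of corpus-supported Markov chain logic:
> # a bigram model that guesses the next word from counts
> # observed in a (made-up) corpus.
> from collections import Counter, defaultdict
>
> corpus = "what is your name . what is your question .".split()
> bigrams = defaultdict(Counter)
> for prev, nxt in zip(corpus, corpus[1:]):
>     bigrams[prev][nxt] += 1
>
> def most_likely_next(word):
>     """Most frequent word following `word` in the corpus."""
>     return bigrams[word].most_common(1)[0][0] if bigrams[word] else None
>
> print(most_likely_next("what"))   # -> 'is'
> print(most_likely_next("your"))   # -> 'name' (tied with 'question')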
>
>
> So as someone who had made a little primitive,
> incompetent attempt at speech modeling myself, and in
> the light of my later-acquired knowledge of CHAT,
> Burke, Bakhtin, Mead, and various other people in
> different fields, and of the tendency of people to
> interact with the world through cognitive biases,
> complexes, and embodied perceptions that were not
> readily available to artificial systems, I didn't
> think the singularity was so near.
>
> The terrible thing about computer programs is that
> they do just what you tell them to do, and no more.
> They have no drive to improve, except as programmed.
> When they do improve, their creativity is limited. And
> the approach now is still substantially
> pattern-recognition based. The current paradigm is
> something called Convolutional Neural Network Long
> Short-Term Memory Networks (CNN/LSTM) for speech
> recognition, in which the convolutional neural
> networks reduce the variants of speech input into
> manageable patterns and the LSTM networks handle
> temporal processing (the temporal patterns of the
> real-world phenomena to which the AI system is
> responding). But while such systems combined
> with natural language processing can increasingly
> mimic human response, and "learn" on their own, and
> while they are approaching the "weak" form of
> artificial general intelligence (AGI), the
> intelligence needed for a machine to perform any
> intellectual task that a human being can, they are an
> awfully long way from "strong" AGI--that is, something
> approaching human consciousness. I think that's
> because they are a long way from capturing the kind of
> social embeddedness of almost all animal behavior, and
> the sense in which human cognition is embedded in the
> messy things, like emotion. A computer algorithm can
> recognize the patterns of emotion, but that's it. An
> AGI system that can experience emotions, or have
> motivation, is quite another thing entirely.
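>
> To make the pattern-recognition point concrete, here is a
> minimal sketch of the CNN-plus-LSTM shape of such systems,
> in Python with PyTorch. It is only an illustration: the
> layer sizes, the 29-label output, and the random
> "spectrogram" input are my own assumptions, not any
> particular production system.
>
> # Hypothetical toy CNN + LSTM acoustic model: the convolution
> # compresses local spectral variation, the LSTM models the
> # temporal patterns, and a linear layer scores per-frame labels.
> import torch
> import torch.nn as nn
>
> class TinyCNNLSTM(nn.Module):
>     def __init__(self, n_mels=80, hidden=128, n_labels=29):
>         super().__init__()
>         self.conv = nn.Sequential(
>             nn.Conv2d(1, 16, kernel_size=3, padding=1),
>             nn.ReLU(),
>             nn.MaxPool2d((2, 1)),  # pool over frequency, keep time steps
>         )
>         self.lstm = nn.LSTM(input_size=16 * (n_mels // 2),
>                             hidden_size=hidden, batch_first=True)
>         self.out = nn.Linear(hidden, n_labels)
>
>     def forward(self, spectrogram):            # (batch, 1, n_mels, time)
>         x = self.conv(spectrogram)             # (batch, 16, n_mels/2, time)
>         b, c, f, t = x.shape
>         x = x.permute(0, 3, 1, 2).reshape(b, t, c * f)
>         x, _ = self.lstm(x)                    # (batch, time, hidden)
>         return self.out(x)                     # per-frame label scores
>
> model = TinyCNNLSTM()
> fake_spectrogram = torch.randn(1, 1, 80, 100)  # made-up spectrogram frames
> print(model(fake_spectrogram).shape)           # torch.Size([1, 100, 29])
>
> Everything in that sketch is pattern-matching over inputs;
> nothing in it has, or needs, motivation or social
> embeddedness, which is exactly the gap I mean.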
>
> I can tell you that AI confidence is still there. In
> raising questions about cultural and physical
> embodiment in artificial intelligence interactions with
> someone in the field recently, he dismissed the idea
> as not being all that relevant. His thought was that "what I
> find essential is that we acknowledge that there's no
> obvious evidence supporting that the current paradigm
> of CNN/LSTM under various reinforcement algorithms
> isn't enough for AGI and in particular for broad
> animal-like intelligence like that of ravens and dogs."
>
> But ravens and dogs are embedded in social
> interaction, in intentionality, in
> consciousness--qualitatively different than ours,
> maybe, but there. Dogs don't do what you ask them to,
> always. When they do things, they do them for their
> own intentionality, which may be to please you, or may
> be to do something you never asked the dog to do,
> which is either inherent in its nature, or an
> expression of social interactions with you or others,
> many of which you and they may not be consciously
> aware of. The deep structure of metaphor, the
> spatiotemporal relations of language that Langacker
> describes as being necessary for construal, the worlds
> of narrativized experience, are mostly outside of the
> reckoning, so far as I know (though I'm not an
> expert--I could be at least partly wrong) of the
> current CNN/LSTM paradigm.
>
> My old interlocutor in thinking about my language
> program, Noam Chomsky, has been a pretty sharp critic
> of the pattern recognition approach to artificial
> intelligence.
>
> Here's Chomsky's take on the idea:
>
> http://languagelog.ldc.upenn.edu/myl/PinkerChomskyMIT.html
>
> And here's Peter Norvig's response; he's a director of
> research at Google, where Kurzweil is, and where, I
> assume, they are as close to the strong version of
> artificial general intelligence as anyone out there...
>
> http://norvig.com/chomsky.html
>
> Frankly, I would be quite interested in what you think
> of these things. I'm merely an Isaiah Berlin fox,
> chasing to and fro after all the pretty ideas out there.
> But you, many of you, are, I suspect, the untapped
> hedgehogs whose ideas on these things would see more
> readily what I dimly grasp must be required, not just
> for achieving a strong AGI, but for achieving
> something that we would see as an ethical, reasonable
> artificial mind that expands human experience, rather
> than becomes a prison that reduces human interactions
> to its own level.
>
> My own thinking is that lately, Cognitive Metaphor
> Theory (CMT), which I knew more of in its earlier (now
> "standard model") days, is getting even more
> interesting than it was. I'd done a transfer term to
> UC Berkeley to study with George Lakoff, but we didn't
> hit it off well, perhaps because I kept asking him
> questions about social embeddedness, and similarities to
> Vygotsky's theory of complex thought, and was too
> expressive about my interest in linking out from his
> approach rather than folding in. It seems that the idea I was
> rather woolily suggesting to Lakoff back then has
> caught on: namely, that utterances could be explored
> for cultural variation and historical embeddedness, a
> form of social context to the narratives and metaphors
> and blended spaces that underlay speech utterances and
> thought; that there was a degree of social embodiment
> as well as physiological embodiment through which
> language operated. I thought then, and it looks like
> some other people are now thinking, that someone
> seeking to understand utterances (as a strong AGI
> system would really need to do) would need to engage
> in internalizing and ventriloquizing a form of Geertz's
> thick description of interactions. In such forms,
> words do not mean what they say, and can have
> different affect that is a bit more complex than I
> think temporal processing currently addresses.
>
> I think these are the kinds of things that artificial
> intelligence would need in order truly to advance, and that
> Bakhtin and Vygotsky and Leont'ev and in the visual
> world, Eisenstein were addressing all along...
>
> And, of course, you guys.
>
>
>
> Regards,
>
> Douglas Williams
>
>
>
>
>
>
>
> On Tuesday, July 3, 2018, 10:35:45 AM PDT, David H
> Kirshner <dkirsh@lsu.edu> wrote:
>
>
>
>
>
> The other side of the coin is that ineffable human
> experience is becoming more effable.
>
> Computers can now look at a human brain scan and
> determine the degree of subjectively experienced pain:
>
>
>
> In 2013, Tor Wager, a neuroscientist at the University
> of Colorado, Boulder, took the logical next step by
> creating an algorithm that could recognize pain’s
> distinctive patterns; today, it can pick out brains in
> pain with more than ninety-five-per-cent accuracy.
> When the algorithm is asked to sort activation maps by
> apparent intensity, its ranking matches participants’
> subjective pain ratings. By analyzing neural activity,
> it can tell not just whether someone is in pain but
> also how intense the experience is.
>
>
>
> So, perhaps the computer can’t “feel our pain,” but it
> can sure “sense our pain!”
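>
> The kind of pattern recognition involved can be sketched
> very crudely in Python with scikit-learn. This is not
> Wager’s actual method; the random “activation maps,” the
> injected signal, and the feature counts below are invented
> purely to illustrate how one classifier can both label
> scans and rank them by apparent intensity.
>
> # Hypothetical illustration: classify "pain" vs. "no pain" from
> # flattened activation maps, then rank scans by the model's score.
> # All data here are random stand-ins, not real neuroimaging data.
> import numpy as np
> from sklearn.linear_model import LogisticRegression
> from sklearn.model_selection import cross_val_score
>
> rng = np.random.default_rng(0)
> n_scans, n_voxels = 200, 5000
> maps = rng.normal(size=(n_scans, n_voxels))   # stand-in "brain scans"
> in_pain = rng.integers(0, 2, size=n_scans)    # 1 = painful stimulus
> maps[:, :50] += in_pain[:, None] * 0.5        # weak fake "pain signature"
>
> clf = LogisticRegression(max_iter=1000)
> print("cross-validated accuracy:",
>       cross_val_score(clf, maps, in_pain, cv=5).mean())
>
> # Sorting scans by the decision score is the analogue of ranking
> # activation maps by apparent intensity.
> clf.fit(maps, in_pain)
> intensity_order = np.argsort(clf.decision_function(maps))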
>
>
>
> Here’s the full article:
>
> https://www.newyorker.com/magazine/2018/07/02/the-neuroscience-of-pain
>
>
>
> David
>
>
>
> *From:* xmca-l-bounces@mailman.ucsd.edu
> <xmca-l-bounces@mailman.ucsd.edu> *On Behalf Of*
> Glassman, Michael
> *Sent:* Tuesday, July 3, 2018 8:16 AM
> *To:* eXtended Mind, Culture, Activity
> <xmca-l@mailman.ucsd.edu>
> *Subject:* [Xmca-l] Re: Interesting article on robots
> and social learning
>
>
>
>
>
>
> It seems like we are still having the same argument as
> when robots first came on the scene. In response to
> John McCarthy, who claimed that through AI robots
> could eventually have belief systems and motivations
> similar to humans, John Searle wrote the Chinese room
> argument. There have been a lot of responses to the Chinese
> room over the years, and a number of digital philosophers
> claim it is no longer salient, but I don’t think
> anybody has ever effectively answered his central
> question.
>
>
>
> Just a quick recap. You come to a closed door and
> know there is a person on the other side. To
> communicate, you decide to teach the person on the
> other side Chinese. You do this by continuously
> exchanging rule systems under the door. After a
> while you are able to have a conversation with the
> individual in perfect Chinese. But does that person
> actually know Chinese just from the rule systems? I
> think Searle’s major point is: are you really learning
> if you don’t know why you’re learning, or are you just
> repeating? Learning is embedded in the human condition,
> and the reason it works so well and is adaptable is
> that we understand it when we use what we learn in
> the world in response to others. To put it in terms of
> the post, does a bomb-defusing robot
> really learn how to defuse a bomb if it does not know
> why it is doing it? It might cut the right wires at
> the right time, but it doesn’t understand why and
> therefore is not doing the task, just a series of steps
> it has been able to absorb. Is that the opposite of
> human learning?
>
>
>
> What the researcher did really isn’t that special at
> this point. Well, I definitely couldn’t do it and it
> is amazing, but it is in essence a miniature version
> of Libratus (which beat experts at Texas Hold ’em) and
> AlphaGo (which beat the second-best Go player in the
> world). My guess is that it is the same use of deep
> learning, in which the program integrates new information
> into what it is already capable of. If machines can learn
> from interacting with humans, then they can learn
> from interacting with other machines. It is the same
> principle (though much, much simpler in this case).
> The question is what it means. Are we defining
> learning down because of the zeitgeist? Greg started
> his post saying a socio-cultural theorist would be
> interested in this research. I wonder if they might
> be more likely to be the ones putting on the brakes,
> asking questions about it.
>
>
>
> Michael
>
>
>
> *From:* xmca-l-bounces@mailman.ucsd.edu
> <xmca-l-bounces@mailman.ucsd.edu> *On Behalf
> Of* Andy Blunden
> *Sent:* Tuesday, July 03, 2018 7:04 AM
> *To:* xmca-l@mailman.ucsd.edu
> *Subject:* [Xmca-l] Re: Interesting article on robots
> and social learning
>
>
>
> Does a robot have "motivation"?
>
> andy
>
> ------------------------------------------------------------
>
> Andy Blunden
> http://www.ethicalpolitics.org/ablunden/index.htm
>
> On 3/07/2018 5:28 PM, Rod Parker-Rees wrote:
>
> Hi Greg,
>
>
>
> What is most interesting to me about the
> understanding of learning which informs most AI
> projects is that it seems to assume that affect is
> irrelevant. The role of caring, liking, worrying
> etc. in social learning seems to be almost
> universally overlooked because information is seen
> as something that can be ‘got’ and ‘given’ more
> than something that is distributed in relationships.
>
>
>
> Does anyone know about any AI projects which
> consider how machines might feel about what they
> learn?
>
>
>
> All the best,
>
>
> Rod
>
>
>
> *From:* xmca-l-bounces@mailman.ucsd.edu
> <xmca-l-bounces@mailman.ucsd.edu> *On
> Behalf Of* Greg Thompson
> *Sent:* 03 July 2018 02:50
> *To:* eXtended Mind, Culture, Activity
> <xmca-l@mailman.ucsd.edu>
> *Subject:* [Xmca-l] Interesting article on robots
> and social learning
>
>
>
> I’m ambivalent about this project but I suspect
> that some young CHAT scholar out there could have
> a lot to contribute to a project like this one:
>
> https://www.sapiens.org/column/machinations/artificial-intelligence-culture/
>
>
>
> -Greg
>
> --
>
> Gregory A. Thompson, Ph.D.
>
> Assistant Professor
>
> Department of Anthropology
>
> 880 Spencer W. Kimball Tower
>
> Brigham Young University
>
> Provo, UT 84602
>
> WEBSITE: greg.a.thompson.byu.edu
> <http://greg.a.thompson.byu.edu>
> http://byu.academia.edu/GregoryThompson
>
> ------------------------------------------------------------
>
>
>
>
>
>
>