[Xmca-l] Re: Interesting article on robots and social learning
Douglas Williams
djwdoc@yahoo.com
Fri Jul 20 14:57:19 PDT 2018
Hi, Michael--
I think your response is correct (or, at least it's the same one I had, and I like to think I'm correct). That's partly why I wanted to bring this to people's attention here.
What I also think, and have thought for some time, is that HCI/AI is a field that could use a lot more theory and practice development applying CHAT to these problems. So far as I can see--and keep in mind that research is my hobby these days, rather than my vocation--there has been a small but steady stream of such work, most notably by Bonnie Nardi, Victor Kaptelinin, Daisy Mwanza, and some other "3rd generation" activity theory people, but it seems a more specialized niche than it should be, and less influential than it ought to be. It always surprises me that when I talk to some people in my world about theories of learning and action and agency and construal of intent, I don't hear more about CHAT, Action Research, and Cognitive Linguistics.
I see this as an example of different activity systems approaching a shared object with different rules, communities of practice and education, artifacts, modes of thought, economics, politics--very different, and yet, I think, addressing some of the same objects. I think they have need of each other. I've been part of your activity system (.edu), and now I'm part of another activity system (.com), and from time to time I throw stones into both of your parts of the pool to see if the ripples will meet. I wish they would, more often. But the little pebbles I throw are so small, and the pool is so large, and I know there is a long history of staying in one's corner of the pool (which is always full of interesting activities of its own, and, dare I say it, a little xenophobic about other parts of the pool), so it is hard.
In both cases, I'm more in the position of an implementer, a bricoleur with the things I'm authorized to use, or that no one stops me from using: I describe and implement technology more than I design it; I read and apply ideas more than I research and develop them; that's the role I have. In my current activity world, I position myself as a potential stakeholder in the product, and as a product consumer I have certain use-case priorities that I ask to be considered in design. In your activity world, I'd suggest that there is a substantial research deficit in an area that is probably worth many, many dissertations, and a real possibility of attracting grantmakers, internships, and placements for one's students--but only if there is more interest in bridging the distance between theory and practice, and in appealing across disciplines. Maybe more interdisciplinary institutes could evolve out of that practice, drawing from several different academic areas; out of such things, after all, cognitive science programs have formed. If the research you do seems relevant to the potential grantmakers for what is a substantial and growing area of practice, then those grantmakers will come, particularly if some of you are able to cross over and present papers at venues like the Neural Information Processing Systems 2018 conference. If you identify problems that, until you formulated the theory and remediation, practitioners intuited they had but could not fully articulate, I think the interest would be there. Note that the Kate Crawford presentation on bias was a keynote speech, and bias, along with other externalities of process-oriented development, is a huge and growing area of interest.
I'm too old and too ill-placed to participate much on either side of bringing these activity systems more closely into alignment with each other. But I do see that I could be a beneficiary in this way: we all have an interest in helping to ensure that the artificial intelligence systems of the future, the initial implementations of which are in production now, develop in ways that are more human-centered than transaction-process-centered, more focused on inclusion and affordance, and that they ultimately advance human freedom and human agency, rather than restrict humans to living within a world of technology whose bars, because never fully articulated, may not be fully visible. But they will be there nonetheless.
It's a little too late for 2018, but I'll put this in for reference, just as another pebble...
NIPS 2018 Call for Papers (NIPS Foundation, NIPS website)
Regards,
Doug
On Tuesday, July 17, 2018, 12:45:45 PM PDT, Glassman, Michael <glassman.13@osu.edu> wrote:
Hi David and Julie and Greg and whoever else is interested,
Finally got a chance to take a look at the Kate Crawford talk, and I sort of feel it represents both what is hopeful and what is not about machine learning, and maybe answers Greg's question a bit about the role of CHAT (and other more participatory, process-oriented social science theories) in machine learning. First, what is hopeful: I think it is great that this is a topic that people seem really worried about. What I am a bit concerned about, though, is two things. One is the general lack of awareness of how these issues have played out in other areas. The second is what I see as the continued commitment to centralization in machine learning: the assumption that really smart people from a few research shops are going to figure out how to fix this.
So, my first concern with Dr. Crawford's talk: it was as if the 20th century didn't exist at all in the development of thinking about the roles that bias and classification play in our lives and how they are being replicated by machine learning in possibly damaging ways. When Dr. Crawford started talking about classification, for instance, I was hoping (against hope) that she would talk about Mead and the work on classification that emerged out of the social psychology program at the University of Chicago, and/or the beginnings of action research, or one of the other theories that see classification as a purposeful and destructive process. I was also hoping she might talk about a more modern theory like intersectionality. Instead she simply talked about pretty ancient ideas that more or less danced around the issue. I wonder if the reason for this is that a lot of people in machine learning tend to think the problem can be solved through coding (a bit more on that in a bit) rather than taking programming into the community and making a real effort to create a symbiotic relationship between machine and human activity (perhaps this is where CHAT comes in).
Near the beginning Dr. Crawford talks about "socio-technical," a term that has been used so broadly that it seems to have lost most of its meaning. But the term socio-technical actually did, or does, have a meaning: it was coined by the action theorist Eric Trist, who suggested that communities themselves understand best how to use technologies to serve their functions. You bring in the technology with an understanding of how it works, but then you rely on the community to implement and change it to meet its needs. In some ways I feel that is what was at least partially done in the Fifth Dimension project and other CHAT projects. Maybe to keep machine learning from being destructive to communities we need to find similar uses for it in the community (I spent part of the summer talking to a bunch of Chinese students immersed in AI for education, one of the reasons this is at the forefront of my mind, and we discussed this quite a bit). They aren't as committed to the whole community-of-computing-geniuses thing as we are in this culture, at least not those students.
The important issue here is the decentralization of problem solving, something that Tim Berners-Lee has been talking about a lot:
https://www.vanityfair.com/news/2018/07/the-man-who-created-the-world-wide-web-has-some-regrets
It may be just as important for AI as it is for the Internet. It would mean that a lot of the work of deciding what machine learning would look like would happen on the fly, in the community. And again, I think some of the work done in CHAT may work for this.
Okay, just meanderings of my mind I guess. I hope some of it made sense.
Michael
-----Original Message-----
From: xmca-l-bounces@mailman.ucsd.edu <xmca-l-bounces@mailman.ucsd.edu> On Behalf Of JULIE WADDINGTON
Sent: Tuesday, July 17, 2018 10:34 AM
To: eXtended Mind, Culture, Activity <xmca-l@mailman.ucsd.edu>
Subject: [Xmca-l] Re: Interesting article on robots and social learning
Doug,
Thank you for sharing the video with Kate Crawford's keynote speech.
Only managed to watch half so far, but from what I've gleaned up to now, she makes a strong argument for the need for SOCIO-TECHNICAL analysis, which fits in with the concerns/questions being raised by everyone.
Talking of bias in AI and its huge ramifications (racism, sexism, homophobia, etc.), Crawford warns that: "When we consider bias just as a technical issue, then we're already missing the (bigger?) picture. The default of all data gathered reflects the deepest structural biases of society".
Sounds obvious to state that social bias always precedes biases in AI, but the examples given and discussion of them provide much food for thought.
Thanks again for sharing,
Julie
> Hi, Michael--I think it could be, as there is certainly an interest
> in dealing with bias, especially once you move away from the
> relatively easily detectable biases in chatbots. Frankly, I was
> thinking in part to check in with you guys to see what you thought, as
> the questions Kate Crawford poses here in the Neural Information
> Processing Systems conference keynote last year are precisely the ones
> of perspective and mind that I associate with CHAT. Perhaps the most
> useful thing I can do is to put this in front of you all for
> consideration:
> The Trouble with Bias - NIPS 2017 Keynote - Kate Crawford #NIPS2017
> Kate Crawford is a leading researcher, academic and author who has
> spent the last decade studying the social imp...
> Regards,
> Doug
> On Sunday, July 15, 2018, 05:26:23 PM PDT, Glassman, Michael
> <glassman.13@osu.edu> wrote:
>
>
> I wonder if where CHAT might be most interesting in addressing AI is
> on topics of bias and oppression. I believe that there is a real
> danger that AI can be used as a tool for oppression, especially
> judging from some of its early uses. One of the things people
> discussing the possibilities of AI don't discuss nearly enough is
> that it picks up and integrates biases from the information it
> receives. Sometimes this can be interesting, as with the program
> Libratus, which beat world-class poker players at Texas Hold 'em.
> One of the less discussed aspects is that one of the reasons it was
> capable of doing this is that it picks up on the playing biases of the
> players it is competing with and integrates them into its
> decision-making process. This, I think, is one of the reasons it has
> to play only one player at a time to be successful.
>
>
> The danger is when it integrates these biases into a larger
> decision-making process. There is an AI program from Northpointe,
> used in the justice system, that uses a combination of big data and
> deep learning to make predictions about whether people convicted of
> crimes will wind up back in jail, which has implications for
> sentencing. The program, surprise, tends to be much harsher with
> Black individuals than white individuals. Even if you keep ethnicity
> out of the equation, it has enough other information to create a
> natural bias. There are also some of the more advanced translation
> programs, which tend to incorporate the biases of the languages (e.g.
> misogynistic ones) into the translations without those receiving the
> translations realizing it. AI, especially machine learning, is in
> many ways a prisoner to the information it receives. Who decides
> what information it receives? Much like the intelligence tests of an
> earlier age, people will treat AI decision making as neutral or
> objective when it actually mirrors back (almost
> perfectly) those who are feeding it information.
>
>
> Like I said, I don't see this point raised nearly enough. Perhaps
> CHAT is one of the fields in a position to constantly point this out,
> to explore the ways that AI is culturally biased, and how those who
> dominate information flow can easily use it as a tool for oppression.
>
>
> Michael
>
>
> From: xmca-l-bounces@mailman.ucsd.edu
> <xmca-l-bounces@mailman.ucsd.edu> On Behalf Of Greg Thompson
> Sent: Sunday, July 15, 2018 12:12 PM
> To: eXtended Mind, Culture, Activity <xmca-l@mailman.ucsd.edu>
> Subject: [Xmca-l] Re: Interesting article on robots and social
> learning
>
>
> And I'm still curious if any others out there might have anything to
> contribute to Doug's query regarding what CHAT theory (particularly
> developmental theories) might have to offer thinking about AI?
>
>
> It seems an interesting question to think through even if you aren't
> on board with the larger AI project...
>
>
> -greg
>
>
> On Sun, Jul 15, 2018 at 10:55 AM, Andy Blunden <andyb@marxists.org> wrote:
>
>
> I think we go back to Martin's earlier ironic comment here, Michael.
>
> Andy
>
> Andy Blunden
> http://www.ethicalpolitics.org/ablunden/index.htm
>
> On 15/07/2018 9:44 AM, Glassman, Michael wrote:
>
>
> The Turing test, at least the test he wrote about in his article, is
> actually a bit more complicated than this, and especially poignant
> today. Turing's test of whether computers are acting as human was
> based on an old English game show called The Lying Game (I suppose one
> of the reasons for the title of the movie on Turing, though of course
> it had multiple meanings. But for some reason they never mentioned
> the origin of the phrase in the movie). Anyway, in the lying game the
> contestant had to listen to two individuals, one of whom was telling
> the truth about the situation and one of whom was lying. The way
> Turing describes it, it sounds quite brutal. The contestant had to
> figure out who the liar was (there was a similar, much milder version
> years later in the US). Anyway, Turing's proposal, if I remember
> correctly, was that a computer could be considered to be thinking like
> a human if the computer the contestant was listening to was lying and
> he or she couldn't tell. In essence, the computer would successfully
> lie. Everybody thinks Turing believed that computers would eventually
> think like humans, but my reading of the article is that he had no
> idea; as computers stood at the time, though, there was no chance.
>
>
> The reason this is so poignant is the Mueller indictments that came
> down yesterday. For those outside the U.S. or not following the news,
> the indictments were against Russian military officers leading a
> scheme to convince individuals of lies about various actors in the
> 2016 election (along with timed releases of information and breaking
> in to voting systems). But it is the propagation of lies by robots,
> and people believing them, that interests me. I feel like we aren't
> putting enough thought into that. Many of the people receiving the
> information could not tell it was not from humans and believed it,
> even though in many cases it was generated by robots--passing, it
> seems to me, Turing's test. How and why did this happen? Of course
> Turing died before the Internet, so he couldn't have known about it.
> But I wonder if part of the reason the robots were successful is that
> they have the ability to mine, collect and aggregate people's biases
> and then reflect them back to us. We tend to engage with, and believe,
> things in the context of our own biases. They say in salesmanship that
> the trick is figuring out what people want to hear and then couching
> whatever you want to say in that. Trump is a master of reading what
> a group of people want to hear at the moment, their biases, and then
> mirroring it back to them.
>
>
> If we went back to the Chinese room, and the person inside was able
> to read our biases from our messages, would they then be human?
>
>
> We live in a strange age.
>
>
> From: xmca-l-bounces@mailman.ucsd.edu <xmca-l-bounces@mailman.ucsd.edu>
> On Behalf Of Andy Blunden
> Sent: Saturday, July 14, 2018 8:58 AM
> To: xmca-l@mailman.ucsd.edu
> Subject: [Xmca-l] Re: Interesting article on robots and social
> learning
>
>
> I understand that the Turing Test is one which AI people can use to
> measure the success of their AI - if you can't tell the difference
> between a computer and a human interaction then the computer has
> passed the Turing test. I tend to rely on a kind of anti-Turing Test,
> that is, that if you can tell the difference between the computer and
> the human interaction, then you have passed the anti-Turing test, that
> is, you know something about humans.
>
> Andy
>
> Andy Blunden
> http://www.ethicalpolitics.org/ablunden/index.htm
>
> On 14/07/2018 1:12 PM, Douglas Williams wrote:
>
>
> Hi--
>
> I think I'll come out of lurking for this one. Actually, what you're
> talking about with this pain algorithm system sounds like a modeling
> system that someone might need in order to develop what Alan Turing
> described as a P-type computing device. A P-type computer would
> receive its programming from inputs of pleasure and pain. The idea was
> probably derived from reading some of the behaviorist models of mind
> at the time. Turing thought that he was probably pretty close to being
> able to develop such a computing device, which, because its input was
> similar, could model human thought.
> The Eliza Rogersian analysis computer program was another early idea,
> in which the goal was to model the patterns of human interaction and
> gradually come closer to human thought and interaction that way.
> And by the 2000s, the idea of the "singularity" was afloat, in which
> one could model human minds so well as to enable a human to be
> uploaded into a computer and live forever as software (Kurzweil,
> 2005). But given that we barely had a sufficient model of mind to say
> boo with at the time (what is consciousness? where does intention come
> from? what is the balance of nature/nurture in motivation? in speech
> utterances? and so on)--and you're right, AI doesn't have much of a
> theory of emotion, either--the goal of computer software modeling
> human thought seemed very far away to me.
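>
> Something in the spirit of Eliza is easy to sketch, by the way; here is
> a toy example in Python--nothing like Weizenbaum's actual script, and
> the rules and names are all made up for illustration--just to show how
> far canned pattern-matching and pronoun reflection can get without any
> understanding behind it:
>
> import re
>
> RULES = [
>     (r"\bI need (.*)", "Why do you need {0}?"),
>     (r"\bI am (.*)", "How long have you been {0}?"),
>     (r"\bmy (\w+)", "Tell me more about your {0}."),
> ]
> REFLECT = {"i": "you", "my": "your", "am": "are", "me": "you"}
>
> def reflect(text):
>     # Swap pronouns so the reply points back at the speaker.
>     return " ".join(REFLECT.get(w.lower(), w) for w in text.split())
>
> def respond(utterance):
>     for pattern, template in RULES:
>         m = re.search(pattern, utterance, re.IGNORECASE)
>         if m:
>             return template.format(*(reflect(g) for g in m.groups()))
>     return "Please go on."  # default when no pattern matches
>
> print(respond("I am worried about my research"))
> # -> "How long have you been worried about your research?"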
>
>
> At someone's request, I wrote a rather whimsical paper called "What is
> Artificial Intelligence?" back in 2006 about such things. My argument
> was that statistically modeling human interaction and capturing
> thought was not so easy after all, precisely because of the parts of
> mind we don't think of, and the social interactions that, at the time,
> were not a primary focus. I mused about that in the context of my
> trying to write a computer program by applying Chomsky's syntactic
> structures to interpret the intention of a few simple
> questions--without, alas, in my case, a corpus-supported Markov chain
> logic to do it. Generative grammar would take care of it, right? Wrong.
>
>
> So, as someone who had made a little primitive, incompetent attempt at
> speech modeling myself, and in the light of my later-acquired
> knowledge of CHAT, Burke, Bakhtin, Mead, and various other people in
> different fields, and of the tendency of people to interact with the
> world through cognitive biases, complexes, and embodied perceptions
> that are not readily available to artificial systems, I didn't think
> the singularity was so near.
>
> The terrible thing about computer programs is that they do just what
> you tell them to do, and no more. They have no drive to improve,
> except as programmed. When they do improve, their creativity is
> limited. And the approach now is still substantially
> pattern-recognition based. The current paradigm is something called
> Convolutional Neural Network Long Short-Term Memory Networks
> (CNN/LSTM) for speech recognition, in which the convolutional neural
> networks reduce the variants of speech input into manageable patterns,
> and the long short-term memory networks handle temporal processing
> (the temporal patterns of the real-world phenomena to which the AI
> system is responding). But while such systems combined with natural
> language processing can increasingly mimic human response, and "learn"
> on their own, and while they are approaching the "weak" form of
> artificial general intelligence (AGI)--the intelligence needed for a
> machine to perform any intellectual task that a human being can--they
> are an awfully long way from "strong" AGI, that is, something
> approaching human consciousness. I think that's because they are a
> long way from capturing the kind of social embeddedness of almost all
> animal behavior, and the sense in which human cognition is embedded in
> messy things, like emotion. A computer algorithm can recognize the
> patterns of emotion, but that's it. An AGI system that can experience
> emotions, or have motivation, is quite another thing entirely.
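>
> Just to make that shape concrete, here is a toy sketch in Python--all
> sizes, names, and numbers made up for illustration, nobody's
> production system--of convolutional feature extraction feeding a
> recurrent memory cell:
>
> import numpy as np
>
> rng = np.random.default_rng(0)
>
> def conv1d(x, kernels):
>     # Slide each kernel over the time axis of x (time, features).
>     t, f = x.shape
>     k = kernels.shape[1] // f                     # kernel width in frames
>     out = []
>     for start in range(t - k + 1):
>         window = x[start:start + k].reshape(-1)   # flatten k frames
>         out.append(kernels @ window)              # one value per kernel
>     return np.maximum(np.array(out), 0)           # ReLU
>
> def lstm_step(x_t, h, c, W):
>     # One LSTM step: gates decide what the cell memory keeps or forgets.
>     z = W @ np.concatenate([x_t, h])
>     i, f, o, g = np.split(z, 4)
>     i, f, o = (1 / (1 + np.exp(-v)) for v in (i, f, o))
>     c = f * c + i * np.tanh(g)                    # update cell memory
>     return o * np.tanh(c), c                      # new hidden state, cell
>
> # Fake "spectrogram": 50 time frames x 13 features per frame.
> frames = rng.normal(size=(50, 13))
> kernels = rng.normal(size=(8, 3 * 13))            # 8 kernels, 3 frames wide
> features = conv1d(frames, kernels)                # local patterns over time
>
> hidden = 16
> W = rng.normal(size=(4 * hidden, 8 + hidden)) * 0.1
> h, c = np.zeros(hidden), np.zeros(hidden)
> for x_t in features:                              # temporal processing
>     h, c = lstm_step(x_t, h, c, W)
> print(h.shape)                                    # (16,): a summary of the utterance
>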
>
> I can tell you that AI confidence is still there. In raising questions
> about cultural and physical embodiment in artficial intelligence
> interations with someone in the field recently, he dismissed the idea
> as being that relevant. His thought was that "what I find essential is
> that we acknowledge that there's no obvious evidence supporting that
> the current paradigm of CNN/LSTM under various reinforcement
> algorithms isn't enough for A AGI and in particular for broad
> animal-like intelligence like that of ravens and dogs."
>
> But ravens and dogs are embedded in social interaction, in
> intentionality, in consciousness--qualitatively different than ours, maybe, but there.
> Dogs don't do what you ask them to, always. When they do things, they
> do them for their own intentionality, which may be to please you, or
> may be to do something you never asked the dog to do, which is either
> inherent in its nature, or an expression of social interactions with
> you or others, many of which you and they may not be consciously aware
> of. The deep structure of metaphor, the spatiotemporal relations of
> language that Langacker describes as being necessary for construal,
> the worlds of narrativized experience, are mostly outside of the
> reckoning, so far as I know (though I'm not an expert--I could be at
> least partly wrong) of the current CNN/LSTM paradigm.
>
> My old interlocutor in thinking about my language program, Noam
> Chomsky, has been a pretty sharp critic of the pattern recognition
> approach to artificial intelligence.
>
> Here's Chomsky's take on the idea:
>
> http://languagelog.ldc.upenn.edu/myl/PinkerChomskyMIT.html
>
> And here's Peter Norvig's response; he's a director of research at
> Google, where Kurzweil is, and where, I assume, they are as close to
> the strong version of artificial general intelligence as anyone out there...
>
> http://norvig.com/chomsky.html
>
> Frankly, I would be quite interested in what you think of these things.
> I'm merely an Isaiah Berlin fox, chasing to and fro after all the
> pretty ideas out there. But you, many of you, are, I suspect, the
> untapped hedgehogs who would see more readily what I only dimly grasp
> must be required, not just for achieving a strong AGI, but for
> achieving something that we would see as an ethical, reasonable
> artificial mind that expands human experience, rather than becoming a
> prison that reduces human interactions to its own level.
>
> My own thinking is that lately, Cognitive Metaphor Theory (CMT), which
> I knew more of in its earlier (now "standard model") days, is getting
> even more interesting than it was. I'd done a transfer term to UC
> Berkeley to study with George Lakoff, but we didn't hit it off well;
> perhaps I kept asking him too many questions about social embeddedness
> and similarities to Vygotsky's theory of complex thought, and was too
> expressive about my interest in linking out from his approach rather
> than folding in. It seems that the idea I was rather woolily
> suggesting to Lakoff back then has caught on: namely, that utterances
> could be explored for cultural variation and historical embeddedness,
> a form of social context to the narratives and metaphors and blended
> spaces that underlie speech utterances and thought; that there was a
> degree of social embodiment as well as physiological embodiment
> through which language operated. I thought then, and it looks like
> some other people are now thinking, that someone seeking to understand
> utterances (as a strong AGI system would need to do) would really need
> to engage in internalizing and ventriloquizing a form of Geertz's
> thick description of interactions. In such forms, words do not mean
> what they say, and can carry affect that is a bit more complex than I
> think temporal processing currently addresses.
>
> I think these are the kinds of things that artificial intelligence
> would need in order truly to advance, and that Bakhtin and Vygotsky
> and Leont'ev, and in the visual world Eisenstein, were addressing all
> along...
>
> And, of course, you guys.
>
> Regards,
>
> Douglas Williams
>
> On Tuesday, July 3, 2018, 10:35:45 AM PDT, David H.
> Kirshner <dkirsh@lsu.edu> wrote:
>
> The other side of the coin is that ineffable human experience is
> becoming more effable.
>
> Computers can now look at a human brain scan and determine the degree
> of subjectively experienced pain:
>
>
> In 2013, Tor Wager, a neuroscientist at the University of Colorado,
> Boulder, took the logical next step by creating an algorithm that
> could recognize pain's distinctive patterns; today, it can pick out
> brains in pain with more than ninety-five-per-cent accuracy. When the
> algorithm is asked to sort activation maps by apparent intensity, its
> ranking matches participants' subjective pain ratings. By analyzing
> neural activity, it can tell not just whether someone is in pain but
> also how intense the experience is.
>
>
> So, perhaps the computer can't "feel our pain," but it can sure
> "sense our pain!"
>
>
> Here's the full article:
>
> https://www.newyorker.com/magazine/2018/07/02/the-neuroscience-of-pain
>
>
> David
>
>
> From: xmca-l-bounces@mailman.ucsd.edu <xmca-l-bounces@mailman.ucsd.edu>
> On Behalf Of Glassman, Michael
> Sent: Tuesday, July 3, 2018 8:16 AM
> To: eXtended Mind, Culture, Activity <xmca-l@mailman.ucsd.edu>
> Subject: [Xmca-l] Re: Interesting article on robots and social
> learning
>
>
> It seems like we are still having the same argument as when robots
> first came on the scene. In response to John McCarthy, who was
> claiming that eventually robots could have belief systems and
> motivations similar to humans through AI, John Searle wrote the
> Chinese room argument. There have been a lot of responses to the
> Chinese room over the years, and a number of digital philosophers
> claim it is no longer salient, but I don't think anybody has ever
> effectively answered his central question.
>
>
> Just a quick recap. You come to a closed door and know there is a
> person on the other side. To communicate, you decide to teach the
> person on the other side Chinese. You do this by continuously
> exchanging rule systems under the door. After a while you are able
> to have a conversation with the individual in perfect Chinese. But
> does that person actually know Chinese just from the rule systems? I
> think Searle's major point is: are you really learning if you don't
> know why you're learning, or are you just repeating? Learning is
> embedded in the human condition, and the reason it works so well and
> is adaptable is because we understand it when we use what we learn in
> the world in response to others. To put it in terms of the post, does
> a bomb-defusal robot really learn how to defuse a bomb if it does not
> know why it is doing it? It might cut the right wires at the right
> time, but it doesn't understand why, and therefore is not doing the
> task, just a series of steps it has been able to absorb. Is that the
> opposite of human learning?
>
>
> What the researcher did really isn't that special at this point.
> Well, I definitely couldn't do it, and it is amazing, but it is in
> essence a miniature version of Libratus (which beat experts at Texas
> Hold 'em) and AlphaGo (which beat the second-best Go player in the
> world). My guess is that it is the same use of deep learning, in
> which the program integrates new information into what it is already
> capable of. If machines can learn from interacting with humans, then
> they can learn from interacting with other machines. It is the same
> principle (though much, much simpler in this case). The question is
> what it means. Are we defining learning down because of the
> zeitgeist? Greg started his post saying a socio-cultural theorist
> might be interested in this research. I wonder if they might be more
> likely to be the ones putting on the brakes, asking questions about it.
>
>
> Michael
>
>
> From: xmca-l-bounces@mailman.ucsd.edu
> <xmca-l-bounces@mailman.ucsd.edu> On Behalf Of Andy Blunden
> Sent: Tuesday, July 03, 2018 7:04 AM
> To: xmca-l@mailman.ucsd.edu
> Subject: [Xmca-l] Re: Interesting article on robots and social
> learning
>
>
> Does a robot have "motivation"?
>
> andy
>
> Andy Blunden
> http://www.ethicalpolitics.org/ablunden/index.htm
>
> On 3/07/2018 5:28 PM, Rod Parker-Rees wrote:
>
>
> Hi Greg,
>
>
> What is most interesting to me about the understanding of learning
> which informs most AI projects is that it seems to assume that affect
> is irrelevant. The role of caring, liking, worrying etc. in social
> learning seems to be almost universally overlooked because information
> is seen as something that can be 'got' and 'given' more than
> something that is distributed in relationships.
>
>
> Does anyone know about any AI projects which consider how machines
> might feel about what they learn?
>
>
> All the best,
>
>
> Rod
>
>
> From: xmca-l-bounces@mailman.ucsd.edu <xmca-l-bounces@mailman.ucsd.edu>
> On Behalf Of Greg Thompson
> Sent: 03 July 2018 02:50
> To: eXtended Mind, Culture, Activity <xmca-l@mailman.ucsd.edu>
> Subject: [Xmca-l] Interesting article on robots and social learning
>
>
> I'm ambivalent about this project but I suspect that some young CHAT
> scholar out there could have a lot to contribute to a project like
> this
> one:
>
> https://www.sapiens.org/column/machinations/artificial-intelligence-culture/
>
>
> -Greg
>
> --
>
> Gregory A. Thompson, Ph.D.
>
> Assistant Professor
>
> Department of Anthropology
>
> 880 Spencer W. Kimball Tower
>
> Brigham Young University
>
> Provo, UT 84602
>
> WEBSITE: greg.a.thompson.byu.edu
> http://byu.academia.edu/GregoryThompson
>
>
>
>
>
> --
>
> Gregory A. Thompson, Ph.D.
>
> Assistant Professor
>
> Department of Anthropology
>
> 880 Spencer W. Kimball Tower
>
> Brigham Young University
>
> Provo, UT 84602
>
> WEBSITE: greg.a.thompson.byu.edu
> http://byu.academia.edu/GregoryThompson
>
Dra. Julie Waddington
Departament de Didàctiques Específiques
Facultat d'Educació i Psicologia
Universitat de Girona