[Xmca-l] Re: Interesting article on robots and social learning

Alfredo Jornet Gil a.j.gil@iped.uio.no
Tue Jul 3 02:15:01 PDT 2018


Another question would be: how do sensible organisms develop some form of empathy, so that caring rituals become a motive? Then the question of knowing how others feel seems central, but not from the third-person but from the first-person perspective. Not what do robots feel but, perhaps, how do robots feel what other beings feel... if that is in fact at the origin of caring (which, of course, I don't know).

Alfredo


________________________________
From: xmca-l-bounces@mailman.ucsd.edu <xmca-l-bounces@mailman.ucsd.edu> on behalf of Alfredo Jornet Gil <a.j.gil@iped.uio.no>
Sent: 03 July 2018 11:07
To: eXtended Mind, Culture, Activity
Subject: [Xmca-l] Re: Interesting article on robots and social learning


Thanks for sharing, Greg, really interesting.


Rod, I see your point about affect. But the question is a bit tricky, isn't it? For, is affect really primary in achieving sensible action (and learning), or is it secondary, a sort of "manifestation" that sensible action is being carried out? Take, for example, "caring", which you mention. As is the case for the action "safely defuse this bomb," caring for something/someone requires sensible or sensuous activity, that is, being sensitive to a changing environment. It requires not that you get pre-given information about the world as input that could only be understood in terms of a Kantian a priori, that is, by means of pre-given schemata in the robot's "mind". That is the approach that W. Clancey, in his 1990s book Situated Cognition, showed had proven wrong for robot builders. Instead, sensuous activity requires self-affection, that is, that the agent (robot? person? organism?), in carrying out an action, *notices* changes in its own states, and that these changes correspond to its ongoing action in the world.


So, what I am trying to get at is this: as long as you have a machine or organism capable of adjusting its own action to the texture (call it affect) of its own action with respect to some object (motive or goal), what difference does it make whether or not we attribute emotions to that machine or organism? I am raising the question: is not the affective dimension hard-wired into the premise that a being is capable of sensible movement without having a pre-established description of the shape of the world it has to move across? I am not saying that any program that does not build on internal formal representations will generate affect. But I am saying that, perhaps, a program that succeeded in implementing the adaptive capabilities of sensible (active, object-oriented, valence-laden) action would exhibit affective qualities without us needing to ask what it is like to be a robot. Those qualities would be of its nature.
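To make the idea concrete, here is a toy sketch (entirely my own, not from any of the projects mentioned) of an agent in this spirit: it carries no map or schema of its environment, and it steers only by noticing whether its own sensed state improved or worsened after its last move. The function names and the "comfort point" scenario are illustrative assumptions, not an implementation of anything Clancey or the Sapiens article describe.

```python
def reactive_agent(sense, steps=200, step_size=0.5):
    """Toy agent with no internal model of the world.

    It tracks only one thing about itself: whether its own
    sensed signal got better or worse after its last move,
    and it keeps or reverses direction accordingly.
    """
    position = 0.0
    direction = 1.0
    previous = sense(position)
    for _ in range(steps):
        position += direction * step_size
        current = sense(position)
        if current > previous:  # its own state worsened: reverse course
            direction = -direction
        previous = current
    return position

# A hidden environment the agent never represents internally:
# the signal is simply distance to an unknown comfort point.
target = 7.3
final = reactive_agent(lambda x: abs(x - target))
assert abs(final - target) < 1.0  # it settles near the target it never "knew"
```

The point of the sketch is only that the adjustment loop operates on changes in the agent's own states, not on a pre-given description of the world; whether calling that signal "affect" adds anything is exactly the question above.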


But these are just my thoughts; I am afraid there is a lot I don't know compared to all that has been written about these long-standing issues. Any thoughts?


Alfredo

________________________________
From: xmca-l-bounces@mailman.ucsd.edu <xmca-l-bounces@mailman.ucsd.edu> on behalf of Rod Parker-Rees <R.Parker-Rees@plymouth.ac.uk>
Sent: 03 July 2018 09:28
To: eXtended Mind, Culture, Activity
Subject: [Xmca-l] Re: Interesting article on robots and social learning

Hi Greg,

What is most interesting to me about the understanding of learning which informs most AI projects is that it seems to assume that affect is irrelevant. The role of caring, liking, worrying etc. in social learning seems to be almost universally overlooked because information is seen as something that can be ‘got’ and ‘given’ more than something that is distributed in relationships.

Does anyone know about any AI projects which consider how machines might feel about what they learn?

All the best,

Rod

From: xmca-l-bounces@mailman.ucsd.edu <xmca-l-bounces@mailman.ucsd.edu> On Behalf Of Greg Thompson
Sent: 03 July 2018 02:50
To: eXtended Mind, Culture, Activity <xmca-l@mailman.ucsd.edu>
Subject: [Xmca-l] Interesting article on robots and social learning

I’m ambivalent about this project but I suspect that some young CHAT scholar out there could have a lot to contribute to a project like this one:
https://www.sapiens.org/column/machinations/artificial-intelligence-culture/

-Greg
--
Gregory A. Thompson, Ph.D.
Assistant Professor
Department of Anthropology
880 Spencer W. Kimball Tower
Brigham Young University
Provo, UT 84602
WEBSITE: http://greg.a.thompson.byu.edu
http://byu.academia.edu/GregoryThompson
________________________________


