From mcole@ucsd.edu Mon Jul 2 10:54:36 2018
From: mcole@ucsd.edu (mike cole)
Date: Mon, 2 Jul 2018 10:54:36 -0700
Subject: [Xmca-l] Fwd: [COGDEVSOC] Postdoctoral Research Fellow, Department of Psychology, The University of Texas at Austin
In-Reply-To: <060DAB2D-286A-4835-96D8-360281AF5729@austin.utexas.edu>
References: <060DAB2D-286A-4835-96D8-360281AF5729@austin.utexas.edu>

This should be a stimulating post doc for the right person.

mike

---------- Forwarded message ----------
From: Legare, Cristine H
Date: Fri, Jun 29, 2018 at 12:30 PM
Subject: [COGDEVSOC] Postdoctoral Research Fellow, Department of Psychology, The University of Texas at Austin
To: COGDEVLIST

Postdoctoral Research Fellow, Department of Psychology, The University of Texas at Austin

Cristine Legare, Department of Psychology, The University of Texas at Austin, is seeking candidates for a postdoctoral position to begin Summer or Fall of 2018. The position is part of an NSF-funded project titled "Explaining, Exploring, and Scientific Reasoning in Museum Settings". This collaborative project investigates how parents and children from diverse backgrounds engage in explanation and exploration of scientific concepts in three children's museums across the U.S., including the Thinkery in Austin, Texas. The successful candidate will work closely on projects with Cristine Legare in collaboration with Maureen Callanan (UCSC) and David Sobel (Brown).

Ideal candidates will have a Ph.D. in psychology, education, cognitive science, or a closely related field, with experience conducting research on the development of scientific reasoning, causal learning, and early science education. Candidates with expertise studying parent-child interaction will receive special consideration. Experience coding observational data sets, experience with data management, and strong statistical skills are required.

More information about the PI and the Evolution, Ontogeny, and Variation in Learning Lab can be found at: www.cristinelegare.com
More information about the Thinkery (The New Austin Children's Museum) can be found at: https://thinkeryaustin.org

Interested applicants should submit a CV, a cover letter with a statement of research interests (2 pages), contact information for three potential references, and up to two publications. Reference letters will be requested from short-listed applicants. The Fall 2018 start date is flexible. Review of applications will continue until the position is filled. Salary will be based on the NSF postdoctoral rates and the start date is negotiable.

Please send your application materials, as a single PDF document, to Program Manager Oskar Burger at oskar@austin.utexas.edu. Other inquiries can be directed to Cristine Legare at legare@austin.utexas.edu.

Cristine H. Legare
Associate Professor
Director of the Evolution, Variation, and Ontogeny of Learning Laboratory (EVO Learn Lab)
Department of Psychology
The University of Texas at Austin
email: legare@austin.utexas.edu
webpage: www.cristinelegare.com
phone: (512) 468-8238

_______________________________________________
To post to the CDS listserv, send your message to: cogdevsoc@lists.cogdevsoc.org
(If you belong to the listserv and have not included any large attachments, your message will be posted without moderation--so be careful!)
To subscribe or unsubscribe from the listserv, visit: http://lists.cogdevsoc.org/listinfo.cgi/cogdevsoc-cogdevsoc.org
From smago@uga.edu Mon Jul 2 11:14:28 2018
From: smago@uga.edu (Peter Smagorinsky)
Date: Mon, 2 Jul 2018 18:14:28 +0000
Subject: [Xmca-l] FW: Just Published: Journal of Higher Education Outreach and Engagement 22(2) Summer 2018
In-Reply-To: <4AF8CBDD-AF43-479C-8E01-BAAB669491B4@uga.edu>
References: <4AF8CBDD-AF43-479C-8E01-BAAB669491B4@uga.edu>

From: Service Learning Discussions On Behalf Of Shannon O Wilder

The Journal of Higher Education Outreach and Engagement has published its latest issue, Volume 22, No. 2 (Summer 2018). JHEOE is an open access, peer-reviewed journal focused on advancing theory and practice related to all forms of outreach and engagement between higher education institutions and communities. The latest issue is available at http://openjournals.libs.uga.edu/index.php/jheoe/issue/view/84

We invite you to review the enclosed Table of Contents and visit our web site to access articles of interest. Thank you and enjoy!

Journal of Higher Education Outreach and Engagement - http://openjournals.libs.uga.edu/index.php/jheoe/issue/view/84
Vol 22, No 2 (Summer 2018):

REFLECTIVE ESSAYS
Lessons Learned From 30 Years of a University-Community Engagement Center (7-30)
Christina J. Groark, Robert B. McCall
Approaching Critical Service-Learning: A Model for Reflection on Positionality and Possibility (31-56)
Mark Latta, Tina M. Kruger, Lindsey Payne, Laura Weaver, Jennifer L. VanSickle

RESEARCH ARTICLES
Public Purpose Under Pressure: Examining the Effects of Neoliberal Public Policy on the Missions of Regional Comprehensive Universities (59-102)
Cecilia Orphan
Identity Status, Service-Learning, and Future Plans (103-126)
Lynn E. Pelco, Christopher T. Ball
"We Don't Leave Engineering on the Page": Civic Engagement Experiences of Engineering Graduate Students (127-156)
Richard J. Reddick, Laura E. Struve, Jeffrey R. Mayo, Ryan A. Miller, Jennifer L. Wang
Engaging with Host Schools to Establish the Reciprocity of an International Teacher Education Partnership (157-188)
Laura Boynton Hauerwas, Meaghan Creamer
Fostering eABCD: Asset-Based Community Development in Digital Service-Learning (189-222)
Rachael W. Shah, Jennifer M. Selting Troester, Robert Brooke, Lauren Gatti, Sarah L. Thomas, Jessica Masterson

BOOK REVIEWS
Community-Based Research: Teaching for Community Impact (225-232)
Miles A. McNall, Jessica V. Barnes-Najor
Deliberative Pedagogy: Teaching and Learning for Democratic Engagement (233-236)
Fay Fletcher
Regional Perspectives on Learning by Doing: Stories from Engaged Universities Around the World (237-242)
Elizabeth A. Tryon

--
Shannon O'Brien Wilder, Ph.D.
Director, Office of Service-Learning
Editor, Journal of Higher Education Outreach and Engagement
University of Georgia
1242 1/2 S. Lumpkin St.
Athens, GA 30602
ph: 706-542-0535
cell: 706-202-2013
swilder@uga.edu
The Office of Service-Learning is jointly supported by the Offices of the Vice President for Instruction and the Vice President for Public Service & Outreach
From a.j.gil@iped.uio.no Mon Jul 2 16:22:11 2018
From: a.j.gil@iped.uio.no (Alfredo Jornet Gil)
Date: Mon, 2 Jul 2018 23:22:11 +0000
Subject: [Xmca-l] Re: Fwd: [COGDEVSOC] Postdoctoral Research Fellow, Department of Psychology, The University of Texas at Austin
References: <060DAB2D-286A-4835-96D8-360281AF5729@austin.utexas.edu>
Message-ID: <1530573731830.70526@iped.uio.no>

Thanks for sharing, Mike. I am forwarding this one, very relevant to the Oslo group.

Alfredo
From greg.a.thompson@gmail.com Mon Jul 2 18:49:45 2018
From: greg.a.thompson@gmail.com (Greg Thompson)
Date: Tue, 3 Jul 2018 10:49:45 +0900
Subject: [Xmca-l] Interesting article on robots and social learning

I'm ambivalent about this project but I suspect that some young CHAT scholar out there could have a lot to contribute to a project like this one:
https://www.sapiens.org/column/machinations/artificial-intelligence-culture/

-Greg

--
Gregory A. Thompson, Ph.D.
Assistant Professor
Department of Anthropology
880 Spencer W. Kimball Tower
Brigham Young University
Provo, UT 84602
WEBSITE: greg.a.thompson.byu.edu
http://byu.academia.edu/GregoryThompson

From R.Parker-Rees@plymouth.ac.uk Tue Jul 3 00:28:04 2018
From: R.Parker-Rees@plymouth.ac.uk (Rod Parker-Rees)
Date: Tue, 3 Jul 2018 07:28:04 +0000
Subject: [Xmca-l] Re: Interesting article on robots and social learning

Hi Greg,

What is most interesting to me about the understanding of learning which informs most AI projects is that it seems to assume that affect is irrelevant. The role of caring, liking, worrying etc. in social learning seems to be almost universally overlooked because information is seen as something that can be "got" and "given" more than something that is distributed in relationships.

Does anyone know about any AI projects which consider how machines might feel about what they learn?

All the best,

Rod
From a.j.gil@iped.uio.no Tue Jul 3 02:07:10 2018
From: a.j.gil@iped.uio.no (Alfredo Jornet Gil)
Date: Tue, 3 Jul 2018 09:07:10 +0000
Subject: [Xmca-l] Re: Interesting article on robots and social learning
Message-ID: <1530608830804.92205@iped.uio.no>

Thanks for sharing, Greg, really interesting.

Rod, I see your point about affect. But the question is a bit tricky, isn't it? For, is affect really primary in achieving sensible action (and learning), or is it secondary, a sort of "manifestation" that sensible action is being carried out?

Take for example "caring", which you mention. As is the case for the action "safely defuse this bomb," caring for something or someone requires sensible or sensuous activity, that is, being sensible to a changing environment. It requires not that you get pre-given information about the world as input that would only be understood in terms of a Kantian a priori, that is, by means of pre-given schemata in the robot's "mind". That is the approach that W. Clancey, in his 1990s book Situated Cognition, showed had proven wrong for robot builders. Instead, sensuous activity requires self-affection, that is, that the agent (robot? person? organism?), in carrying out action, *notices* changes in its own states, and that these changes correspond to its ongoing action in the world.

So, what I am trying to get at is: as long as you have a machine or organism capable of adjusting its own action to the texture (call it affect) of its own action with respect to some object (motive or goal), what difference does it make whether or not we attribute emotions to that machine or organism? I am raising the question: is not the affective dimension hard-wired into the premise that a being is capable of sensible movement without having a pre-established description of the shape of the world it has to move across? I am not saying that any program that does not build on internal formal representations will generate affect. But I am saying that, perhaps, a program that succeeded in implementing the adaptive capabilities of sensible (active, object-oriented, valence-laden) action would exhibit affective qualities without us needing to ask what it is like to be a robot; those qualities would be of its nature.

But these are just my thoughts; I am afraid there is lots I don't know compared to all that has been written about these long-standing issues. Any thoughts?
Alfredo

From jamesma320@gmail.com Tue Jul 3 02:13:17 2018
From: jamesma320@gmail.com (James Ma)
Date: Tue, 3 Jul 2018 10:13:17 +0100
Subject: [Xmca-l] Re: Interesting article on robots and social learning

I like to see any AI system as a self-contained, self-organised system of signification which can be analysed using a biosemiotic or cybersemiotic approach informed by Peircean semiotics.

James

________________________________________________
James Ma, Independent Scholar
https://oxford.academia.edu/JamesMa
From a.j.gil@iped.uio.no Tue Jul 3 02:15:01 2018
From: a.j.gil@iped.uio.no (Alfredo Jornet Gil)
Date: Tue, 3 Jul 2018 09:15:01 +0000
Subject: [Xmca-l] Re: Interesting article on robots and social learning
Message-ID: <1530609300935.93506@iped.uio.no>

Another question would be: how do sensible organisms develop some form of empathy, so that caring rituals become a motive? Then the question of knowing how others feel seems central, but from the first-person rather than the third-person perspective. Not "what do robots feel?" but perhaps "how do robots feel what other beings feel?"... if that in fact is at the origin of caring (which, of course, I don't know).

Alfredo
From andyb@marxists.org Tue Jul 3 04:04:28 2018
From: andyb@marxists.org (Andy Blunden)
Date: Tue, 3 Jul 2018 21:04:28 +1000
Subject: [Xmca-l] Re: Interesting article on robots and social learning

Does a robot have "motivation"?

andy

------------------------------------------------------------
Andy Blunden
http://www.ethicalpolitics.org/ablunden/index.htm
From a.j.gil@iped.uio.no Tue Jul 3 04:16:35 2018
From: a.j.gil@iped.uio.no (Alfredo Jornet Gil)
Date: Tue, 3 Jul 2018 11:16:35 +0000
Subject: [Xmca-l] Re: Interesting article on robots and social learning
Message-ID: <1530616596014.52213@iped.uio.no>

Andy, are not motives aspects of activities (not of individuals)? If so, the question you just posed does not seem to make sense, does it?

Another question would be: is there a robot that can be the subject of sensuous, objective activity? Obviously, the Turing test would not be able to answer this question. But would a test capable of addressing that question consist in finding out whether the robot "has" emotion, or "motivation"? Or, what would such a test look like?

Alfredo
From andyb@marxists.org Tue Jul 3 04:20:47 2018
From: andyb@marxists.org (Andy Blunden)
Date: Tue, 3 Jul 2018 21:20:47 +1000
Subject: [Xmca-l] Re: Interesting article on robots and social learning
In-Reply-To: <1530616596014.52213@iped.uio.no>
References: <1530616596014.52213@iped.uio.no>
Message-ID: <0e34f61b-a5f4-92b6-45c3-08bd6c77f8f9@marxists.org>

It seems to be obvious that the robot does not have motivation. Its motivation lies, as you say, in the activities of human beings in which it is included. The human participants in such activities have that motivation within themselves, though, and consequently they have motivations. The robot has no more motivation than a hammer has motivation.

Andy

------------------------------------------------------------
Andy Blunden
http://www.ethicalpolitics.org/ablunden/index.htm
From andyb@marxists.org Tue Jul 3 04:23:36 2018
From: andyb@marxists.org (Andy Blunden)
Date: Tue, 3 Jul 2018 21:23:36 +1000
Subject: [Xmca-l] Re: Interesting article on robots and social learning
In-Reply-To: <0e34f61b-a5f4-92b6-45c3-08bd6c77f8f9@marxists.org>
References: <1530616596014.52213@iped.uio.no> <0e34f61b-a5f4-92b6-45c3-08bd6c77f8f9@marxists.org>
Message-ID: <4e6042ba-e426-186d-5759-3ab445e58ad8@marxists.org>

... Sorry, I didn't spell that out. Since a robot cannot have motivations, it cannot have emotions.

Andy

------------------------------------------------------------
Andy Blunden
http://www.ethicalpolitics.org/ablunden/index.htm
From glassman.13@osu.edu Tue Jul 3 06:16:03 2018
From: glassman.13@osu.edu (Glassman, Michael)
Date: Tue, 3 Jul 2018 13:16:03 +0000
Subject: [Xmca-l] Re: Interesting article on robots and social learning
Message-ID: <3B91542B0D4F274D871B38AA48E991F953B2B847@CIO-KRC-D1MBX04.osuad.osu.edu>

It seems like we are still having the same argument as when robots first came on the scene. In response to John McCarthy, who was claiming that eventually robots could have belief systems and motivations similar to humans through AI, John Searle wrote the Chinese room argument. There have been a lot of responses to the Chinese room over the years, and a number of digital philosophers claim it is no longer salient, but I don't think anybody has ever effectively answered his central question.
Just a quick recap. You come to a closed door and know there is a person on the other side. To communicate, you decide to teach the person on the other side Chinese. You do this by continuously exchanging rule systems under the door. After a while you are able to have a conversation with the individual in perfect Chinese. But does that person actually know Chinese just from the rule systems?

I think Searle's major point is: are you really learning if you don't know why you're learning, or are you just repeating? Learning is embedded in the human condition, and the reason it works so well and is adaptable is that we understand it when we use what we learn in the world in response to others. To put it in terms of the post: does a bomb-defusing robot really learn how to defuse a bomb if it does not know why it is doing it? It might cut the right wires at the right time, but it doesn't understand why, and therefore is not doing the task, just a series of steps it has been able to absorb. Is that the opposite of human learning?

What the researcher did really isn't that special at this point. Well, I definitely couldn't do it and it is amazing, but it is in essence a miniature version of Libratus (which beat experts at Texas Hold 'em) and AlphaGo (which beat the second best Go player in the world). My guess is it is the same use of deep learning, in which the program integrates new information into what it is already capable of. If machines can learn from interacting with other humans, then they can learn from interacting with other machines. It is the same principle (though much, much simpler in this case).

The question is what it means. Are we defining learning down because of the zeitgeist? Greg started his post saying a socio-cultural theorist might be interested in this research. I wonder if they might be more likely to be the ones putting on the brakes, asking questions about it.

Michael
From dkirsh@lsu.edu Tue Jul 3 10:32:12 2018
From: dkirsh@lsu.edu (David H Kirshner)
Date: Tue, 3 Jul 2018 17:32:12 +0000
Subject: [Xmca-l] Re: Interesting article on robots and social learning
In-Reply-To: <3B91542B0D4F274D871B38AA48E991F953B2B847@CIO-KRC-D1MBX04.osuad.osu.edu>
References: <3B91542B0D4F274D871B38AA48E991F953B2B847@CIO-KRC-D1MBX04.osuad.osu.edu>

The other side of the coin is that ineffable human experience is becoming more effable. Computers can now look at a human brain scan and determine the degree of subjectively experienced pain:

"In 2013, Tor Wager, a neuroscientist at the University of Colorado, Boulder, took the logical next step by creating an algorithm that could recognize pain's distinctive patterns; today, it can pick out brains in pain with more than ninety-five-per-cent accuracy. When the algorithm is asked to sort activation maps by apparent intensity, its ranking matches participants' subjective pain ratings. By analyzing neural activity, it can tell not just whether someone is in pain but also how intense the experience is."

So, perhaps the computer can't "feel our pain," but it can sure "sense our pain!"

Here's the full article: https://www.newyorker.com/magazine/2018/07/02/the-neuroscience-of-pain

David
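In outline, what Wager's algorithm does is supervised pattern recognition over activation maps. The sketch below is a toy illustration of that general idea only, using synthetic stand-in data and a plain logistic regression; it is not Wager's actual pipeline, and every number and name in it is invented.

```python
# Toy sketch: classify "in pain" vs. "not in pain" from flattened
# activation maps, then rank maps by apparent intensity.
# Synthetic data only; not Wager's method, just the general idea.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_scans, n_voxels = 200, 500

# Hypothetical "pain signature": a fixed voxel pattern whose expression
# scales with stimulus intensity, buried in noise.
signature = rng.normal(size=n_voxels)
intensity = rng.uniform(0.0, 1.0, size=n_scans)
maps = np.outer(intensity, signature) + rng.normal(scale=2.0, size=(n_scans, n_voxels))
in_pain = (intensity > 0.5).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(maps, in_pain, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))

# The same fitted weights yield a continuous score that ranks unseen
# maps by apparent intensity, echoing the sorting result in the article.
ranking = np.argsort(clf.decision_function(X_te))
```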
From smago@uga.edu Mon Jul 2 08:24:26 2018
From: smago@uga.edu (Peter Smagorinsky)
Date: Mon, 2 Jul 2018 15:24:26 +0000
Subject: [Xmca-l] JoLLE 2019 Winter Conference: First Call

A non-text attachment was scrubbed: 2019 JoLLE Save the Date.jpg

From hhdave15@gmail.com Fri Jul 6 20:51:27 2018
From: hhdave15@gmail.com (Harshad Dave)
Date: Sat, 7 Jul 2018 09:21:27 +0530
Subject: [Xmca-l] Re: JoLLE 2019 Winter Conference: First Call

Hi,

An article by the undersigned has been published in the research journal "Financial Markets, Institutions and Risks", by ARMG Publishing, Sumy State University, Ukraine. The link is given below; it is a free download. I hope you might find it interesting, with some innovative thoughts therein. It will be my pleasure to hear from you with tips and criticism, which will help me refine it a step further.

Regards,
Harshad Dave
Email: hhdave15@gmail.com

Research Journal: Financial Markets, Institutions and Risks (FMIR)

Title: "Preliminary contemplation on Exchange Value"

Abstract: Exchange value is a vital term of economics. Exchange value is born of the exchange process, and the exchange process is the lifeline of human society. Exchange value is influenced by various parameters, which are discussed here. The article also investigates the linkages between human characteristics and economics through the process of exchange. The influence of these parameters turns to unethical ways and means as and when time and circumstances permit. Today, a dense flow of exchange processes runs incessantly through our society and has become a lifeline for its existence. Unfortunately, this flow is polluted by unethical influence on the process of exchange, and this subject matter is discussed in the article. Successful application of abilities in unethical ways and means to secure a favourable exchange ratio can be realized only with the help of government brass, public servants, and ruling politicians.
The application of abilities in unethical ways during the process of exchange returns an advantageous exchange ratio to whichever party is more unethical. The wealth/resources accumulated through the unethical part of the exchange process become a special influencing parameter (a capitalistic parameter) used to undermine the opposite party in future exchanges. Ultimately, it turns into a race to hold maximum unethical resources so as to dictate the most advantageous exchange ratio in all future exchange processes. This is one of the prime causes that drags our society down into ugly peril.

*DOI:* 10.21272/fmir.2(2).69-92.2018

*Journal Link [Financial Markets, Institutions and Risks (FMIR)]:*
http://armgpublishing.sumdu.edu.ua/journals/fmir/current-issue-of-fmir/

*Article Link:*
http://armgpublishing.sumdu.edu.ua/journals/fmir/volume-2-issue-2/article-6/

xxxxx

On Sat, Jul 7, 2018 at 4:07 AM Peter Smagorinsky wrote:

From djwdoc@yahoo.com Fri Jul 13 20:12:09 2018
From: djwdoc@yahoo.com (Douglas Williams)
Date: Sat, 14 Jul 2018 03:12:09 +0000 (UTC)
Subject: [Xmca-l] Re: Interesting article on robots and social learning
In-Reply-To:
References: <3B91542B0D4F274D871B38AA48E991F953B2B847@CIO-KRC-D1MBX04.osuad.osu.edu>
Message-ID: <1860198877.3850789.1531537929986@mail.yahoo.com>

Hi--

I think I'll come out of lurking for this one. Actually, what you're talking about with this pain-algorithm system sounds like the kind of modeling system someone might need in order to develop what Alan Turing described as a P-type computing device. A P-type computer would receive its programming from inputs of pleasure and pain. It was probably derived from reading some of the behavioralist models of mind of the time. Turing thought that he was probably pretty close to being able to develop such a computing device, which, because its input was similar, could model human thought. The Eliza Rogersian analysis computer program was another early idea, in which the goal was to model the patterns of human interaction and gradually approach human thought and interaction that way. And by the 2000s, the idea of the "singularity" was afloat, in which one could model human minds so well as to enable a human to be uploaded into a computer and live forever as software (Kurzweil, 2005). But given that we barely had a sufficient model of mind to say Boo with at the time (what is consciousness? where does intention come from? what is the balance of nature/nurture in motivation? in speech utterances? and so on), and--you're right--AI doesn't have much of a theory of emotion either, the goal of computer software modeling human thought seemed very far away to me.

At someone's request, I wrote a rather whimsical paper called "What is Artificial Intelligence?" back in 2006 about such things. My argument was that statistical modeling of human interaction and capturing thought was not so easy after all, precisely because of the parts of mind we don't think of, and the social interactions that, at the time, were not a primary focus. I mused about that in the context of my trying to write a computer program that applied Chomsky's syntactic structures to interpret the intention of a few simple questions--without, alas, in my case, a corpus-supported Markov chain logic to do it. Generative grammar would take care of it, right? Wrong.
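For contrast, the corpus-supported Markov chain logic mentioned above needs no grammar at all. A minimal Python sketch (the toy corpus is invented for illustration):

    import random
    from collections import defaultdict

    # Word-level Markov chain: the next word is drawn from the words that
    # followed the current word in the corpus. No grammar, no meaning --
    # just surface statistics.
    corpus = "the dog chased the cat and the cat chased the dog".split()
    chain = defaultdict(list)
    for a, b in zip(corpus, corpus[1:]):
        chain[a].append(b)

    def babble(word="the", n=8):
        out = [word]
        for _ in range(n):
            word = random.choice(chain[word])  # every word in this corpus has a successor
            out.append(word)
        return " ".join(out)

    print(babble())  # e.g. "the cat chased the dog chased the cat and"

On a real corpus this produces locally plausible, globally meaningless text: pattern without intention.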
So, as someone who had made a primitive, incompetent attempt at speech modeling myself, and in the light of my later-acquired knowledge of CHAT, Burke, Bakhtin, Mead, and various other people in different fields--and of the tendency of people to interact with the world through cognitive biases, complexes, and embodied perceptions that were not readily available to artificial systems--I didn't think the singularity was so near.

The terrible thing about computer programs is that they do just what you tell them to do, and no more. They have no drive to improve, except as programmed. When they do improve, their creativity is limited. And the approach now is still substantially pattern-recognition based. The current paradigm for speech recognition is something called Convolutional Neural Network / Long Short-Term Memory networks (CNN/LSTM), in which the convolutional neural networks reduce the variants of speech input to manageable patterns, and the LSTMs handle temporal processing--the temporal patterns of the real-world phenomena to which the AI system is responding (a minimal sketch appears at the end of this message). But while such systems, combined with natural language processing, can increasingly mimic human response and "learn" on their own, and while they are approaching the "weak" form of artificial general intelligence (AGI)--the intelligence needed for a machine to perform any intellectual task that a human being can--they are an awfully long way from "strong" AGI, that is, something approaching human consciousness. I think that's because they are a long way from capturing the kind of social embeddedness of almost all animal behavior, and the sense in which human cognition is embedded in messy things like emotion. A computer algorithm can recognize the patterns of emotion, but that's it. An AGI system that can experience emotions, or have motivation, is quite another thing entirely.

I can tell you that AI confidence is still there. When I raised questions about cultural and physical embodiment in artificial intelligence interactions with someone in the field recently, he dismissed the idea as not that relevant. His thought was that "what I find essential is that we acknowledge that there's no obvious evidence supporting that the current paradigm of CNN/LSTM under various reinforcement algorithms isn't enough for AGI, and in particular for broad animal-like intelligence like that of ravens and dogs."

But ravens and dogs are embedded in social interaction, in intentionality, in consciousness--qualitatively different from ours, maybe, but there. Dogs don't always do what you ask them to. When they do things, they do them from their own intentionality, which may be to please you, or may be to do something you never asked the dog to do, which is either inherent in its nature or an expression of social interactions with you or others, many of which you and they may not be consciously aware of. The deep structure of metaphor, the spatiotemporal relations of language that Langacker describes as being necessary for construal, the worlds of narrativized experience--these are mostly outside the reckoning, so far as I know (though I'm not an expert--I could be at least partly wrong), of the current CNN/LSTM paradigm.
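To make the pairing concrete, here is a minimal CNN/LSTM sketch in PyTorch; the layer sizes and the single classification head are invented for illustration, not anyone's production recognizer:

    import torch
    import torch.nn as nn

    # The convolution compresses each window of acoustic feature frames
    # into local patterns; the LSTM models how those patterns unfold in time.
    class SpeechRecognizer(nn.Module):
        def __init__(self, n_features=40, n_classes=30):
            super().__init__()
            self.conv = nn.Conv1d(n_features, 64, kernel_size=5, padding=2)
            self.lstm = nn.LSTM(64, 128, batch_first=True)
            self.out = nn.Linear(128, n_classes)

        def forward(self, frames):                  # frames: (batch, time, n_features)
            x = self.conv(frames.transpose(1, 2))   # -> (batch, 64, time)
            x, _ = self.lstm(x.transpose(1, 2))     # -> (batch, time, 128)
            return self.out(x[:, -1])               # classify from the final state

    model = SpeechRecognizer()
    dummy = torch.randn(2, 100, 40)  # two utterances of 100 feature frames
    print(model(dummy).shape)        # -> torch.Size([2, 30])

Everything the network does is pattern reduction and pattern sequencing; nothing in it is a candidate for motivation or understanding, which is rather the point of the paragraphs above.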
My old interlocutor in thinking about my language program, Noam Chomsky, has been a pretty sharp critic of the pattern-recognition approach to artificial intelligence.

Here's Chomsky's take on the idea:
http://languagelog.ldc.upenn.edu/myl/PinkerChomskyMIT.html

And here's Peter Norvig's response; he's a director of research at Google, where Kurzweil is, and where, I assume, they are as close to the strong version of artificial general intelligence as anyone out there...
http://norvig.com/chomsky.html

Frankly, I would be quite interested in what you think of these things. I'm merely an Isaiah Berlin fox, chasing to and fro after all the pretty ideas out there. But you, many of you, are, I suspect, the untapped hedgehogs whose ideas on these things would see more readily what I dimly grasp must be required, not just for achieving a strong AGI, but for achieving something that we would see as an ethical, reasonable artificial mind that expands human experience, rather than becomes a prison that reduces human interactions to its own level.

My own thinking is that lately, Cognitive Metaphor Theory (CMT), which I knew more of in its earlier (now "standard model") days, is getting even more interesting than it was. I'd done a transfer term to UC Berkeley to study with George Lakoff, but we didn't hit it off well; perhaps I kept asking him questions about social embeddedness and similarities to Vygotsky's theory of complex thought, and was too expressive about my interest in linking out from his approach rather than folding in. It seems that the idea I was rather woolily suggesting to Lakoff back then has caught on: namely, that utterances could be explored for cultural variation and historical embeddedness--a form of social context to the narratives, metaphors, and blended spaces that underlie speech utterances and thought--and that there is a degree of social embodiment, as well as physiological embodiment, through which language operates. I thought then, and it looks like some other people are now thinking, that someone seeking to understand utterances (as a strong AGI system would need to do) would really need to engage in internalizing and ventriloquizing a form of Geertz's thick description of interactions. In such forms, words do not mean what they say, and can carry affect in ways a bit more complex than I think temporal processing currently addresses.

I think these are the kinds of things that artificial intelligence would need in order truly to advance, and that Bakhtin and Vygotsky and Leont'ev--and, in the visual world, Eisenstein--were addressing all along...

And, of course, you guys.

Regards,
Douglas Williams

On Tuesday, July 3, 2018, 10:35:45 AM PDT, David H Kirshner wrote:

The other side of the coin is that ineffable human experience is becoming more effable. Computers can now look at a human brain scan and determine the degree of subjectively experienced pain:

"In 2013, Tor Wager, a neuroscientist at the University of Colorado, Boulder, took the logical next step by creating an algorithm that could recognize pain's distinctive patterns; today, it can pick out brains in pain with more than ninety-five-per-cent accuracy. When the algorithm is asked to sort activation maps by apparent intensity, its ranking matches participants' subjective pain ratings. By analyzing neural activity, it can tell not just whether someone is in pain but also how intense the experience is."

So, perhaps the computer can't "feel our pain," but it can sure "sense our pain!"

Here's the full article:
https://www.newyorker.com/magazine/2018/07/02/the-neuroscience-of-pain

David
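As an illustration of the kind of decoder Kirshner describes--a toy stand-in, not Wager's actual model--one can train a linear classifier on flattened activation maps and read "intensity" off the classifier's confidence:

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Fake data: each "scan" is a flattened activation map, with a hidden
    # linear "pain signature" determining its label.
    rng = np.random.default_rng(0)
    n_scans, n_voxels = 200, 500
    X = rng.normal(size=(n_scans, n_voxels))   # simulated activation maps
    w_true = rng.normal(size=n_voxels)         # hidden pain signature
    y = (X @ w_true > 0).astype(int)           # simulated pain / no-pain labels

    clf = LogisticRegression(max_iter=1000).fit(X[:150], y[:150])
    print("held-out accuracy:", clf.score(X[150:], y[150:]))

    # Ranking maps by apparent intensity = sorting by the decoder's
    # confidence rather than by its hard 0/1 decision.
    intensity = clf.decision_function(X[150:])
    print("most 'intense' held-out scan:", int(np.argmax(intensity)))

The classifier "senses" pain in exactly the sense the article describes: it recovers a discriminative pattern, with no experience of anything.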
From andyb@marxists.org Sat Jul 14 05:58:27 2018
From: andyb@marxists.org (Andy Blunden)
Date: Sat, 14 Jul 2018 22:58:27 +1000
Subject: [Xmca-l] Re: Interesting article on robots and social learning
In-Reply-To: <1860198877.3850789.1531537929986@mail.yahoo.com>
References: <3B91542B0D4F274D871B38AA48E991F953B2B847@CIO-KRC-D1MBX04.osuad.osu.edu> <1860198877.3850789.1531537929986@mail.yahoo.com>
Message-ID: <7c142464-a2b2-ede1-e258-388e449e10f6@marxists.org>

I understand that the Turing Test is one which AI people can use to measure the success of their AI: if you can't tell the difference between a computer and a human interaction, then the computer has passed the Turing test. I tend to rely on a kind of anti-Turing Test; that is, if you *can* tell the difference between the computer and the human interaction, then you have passed the anti-Turing test--you know something about humans.

Andy
------------------------------------------------------------
Andy Blunden
http://www.ethicalpolitics.org/ablunden/index.htm

On 14/07/2018 1:12 PM, Douglas Williams wrote:
> [...]
From greg.a.thompson@gmail.com Sat Jul 14 08:15:27 2018
From: greg.a.thompson@gmail.com (Greg Thompson)
Date: Sun, 15 Jul 2018 00:15:27 +0900
Subject: [Xmca-l] Re: Interesting article on robots and social learning
In-Reply-To: <7c142464-a2b2-ede1-e258-388e449e10f6@marxists.org>
References: <3B91542B0D4F274D871B38AA48E991F953B2B847@CIO-KRC-D1MBX04.osuad.osu.edu> <1860198877.3850789.1531537929986@mail.yahoo.com> <7c142464-a2b2-ede1-e258-388e449e10f6@marxists.org>
Message-ID:

Andy, thanks for sending this, since it alerted me to Doug's message (which seems not to have been included in this thread for me, so this is the first time I'm seeing it--not sure if the XMCA list is "playing with us" or something...).

Doug, I agree with what you have pointed to here as far as the important role of embodiment and social and cultural embeddedness. Would you mind sharing the whimsical paper that you mentioned?

Also, one related line of thought: I wonder how good AI has been at thinking about what John Searle calls "social ontology". This refers to the social worlds that all humans inhabit. For far too long these were considered phantasmic worlds, worlds that were "socially constructed" and therefore unreal. But recent thinking in the social sciences (Bruno Latour, among others) has pushed people to take these social constructions much more seriously.

As I understand (rather dimly) human development, one of its critical aspects (typically accomplished around ages 7-9) is coming into awareness of these culturally particular social reals (ontologies). The result of this learning is that the adolescent encounters the world not simply as it is but as others recognize it to be. From this developmental perspective, the child in the story of the Emperor's new clothes has failed to reach this basic developmental stage: he doesn't see the world as others see it, he sees it as it is--the Emperor is naked!

As a matter of modeling AI, what is needed is for the machine to be able to see what is not there (in a simplistic scientistic sense), namely the world as others see it. This will require AI modelers to let go of their scientistic sensibilities (which I assume they have) and build machines that can see the world not as the pre-cultural child sees it, but in a way that grasps the complex, culturally particular social worlds that we inhabit (yes, full of feeling, but also full of role relations and all kinds of "being" that aren't there).

I suspect that most AI developers would prefer to model an understanding of the world "as it is" (i.e., scientistic) rather than as others consider it to be. To my mind, that means neglecting all the "ratcheting power" of human culture (as Tomasello described it). The result, I suspect, is that AI would never begin to approach human consciousness (perhaps there will be some other form of AI consciousness, but for it to be human, it must be cultural, with all the non-scientific-ness that entails).
But perhaps that's a good thing (i.e., I'm not going to be the one to tell them this!).

Anyway, I really appreciate your contribution, Doug (and I'm not sure why I didn't see it before Andy responded to it).

Very best,
greg

On Sat, Jul 14, 2018 at 9:58 PM, Andy Blunden wrote:
> [...]
--
Gregory A. Thompson, Ph.D.
Assistant Professor
Department of Anthropology
880 Spencer W. Kimball Tower
Brigham Young University
Provo, UT 84602
WEBSITE: greg.a.thompson.byu.edu
http://byu.academia.edu/GregoryThompson

From mpacker@cantab.net Sat Jul 14 08:23:31 2018
From: mpacker@cantab.net (Martin Packer)
Date: Sat, 14 Jul 2018 10:23:31 -0500
Subject: [Xmca-l] Re: Interesting article on robots and social learning
In-Reply-To:
References: <3B91542B0D4F274D871B38AA48E991F953B2B847@CIO-KRC-D1MBX04.osuad.osu.edu> <1860198877.3850789.1531537929986@mail.yahoo.com> <7c142464-a2b2-ede1-e258-388e449e10f6@marxists.org>
Message-ID:

On Jul 14, 2018, at 10:15 AM, Greg Thompson wrote:
> Anyway, I really appreciate your contribution, Doug (and I'm not sure why I didn't see it before Andy responded to it).

Because it only exists when others see it! :)

Martin

From mcole@ucsd.edu Sat Jul 14 09:35:25 2018
From: mcole@ucsd.edu (mike cole)
Date: Sat, 14 Jul 2018 09:35:25 -0700
Subject: [Xmca-l] Re: Interesting article on robots and social learning
In-Reply-To:
References: <3B91542B0D4F274D871B38AA48E991F953B2B847@CIO-KRC-D1MBX04.osuad.osu.edu> <1860198877.3850789.1531537929986@mail.yahoo.com> <7c142464-a2b2-ede1-e258-388e449e10f6@marxists.org>
Message-ID:

Take patriotism, for example, as "on display," and the social mechanisms of its "made-visible-ness."

Thanks for the note, Doug.
You are located in two worlds we are all trying to understand--yay Comm at UCSD!

Mike

On Sat, Jul 14, 2018 at 8:25 AM Martin Packer wrote:
> [...]

From mcole@ucsd.edu Sat Jul 14 15:27:24 2018
From: mcole@ucsd.edu (mike cole)
Date: Sat, 14 Jul 2018 15:27:24 -0700
Subject: [Xmca-l] Fwd: Design Research News, July 2018
In-Reply-To: <1AF13E3B-49F0-4EE9-86A6-C895B6127ABB@icloud.com>
References: <1AF13E3B-49F0-4EE9-86A6-C895B6127ABB@icloud.com>
Message-ID:

Of potential interest regarding tech and disability studies.

---------- Forwarded message ----------
From: DAVID DURLING <0000216e5ba832f3-dmarc-request@jiscmail.ac.uk>
Date: Sat, Jul 14, 2018 at 12:59 PM
Subject: Design Research News, July 2018
To: DESIGN-RESEARCH@jiscmail.ac.uk

________________________________________________________________

DESIGN RESEARCH NEWS Volume 23 Number 7, Jul 2018

ISSN 1473-3862

DRS Digital Newsletter

http://www.designresearchsociety.org

________________________________________________________________

Join DRS via e-payment http://www.designresearchsociety.org

________________________________________________________________

CONTENTS

o IASDR
o MinD Conference
o Calls
o Announcements
o DRN search
o Digital Services of the DRS
o Subscribing and unsubscribing to DRN
o Contributing to DRN

________________________________________________________________

INTERNATIONAL ASSOCIATION OF SOCIETIES OF DESIGN RESEARCH (IASDR)

NEW WEBSITE

Over the past few months, the IASDR board has considered the design of a new website. The intention is to offer member societies a more up-to-date site with better internal editing facilities and room for future growth. We are now able to announce that the first iteration of the new site is online, and we will be adding further content and facilities over the coming months. Further announcements will be made as facilities become available.

As a significant part of this fresh approach, we also took the difficult decision to relinquish the old domain name in favour of the new website iasdr.net, to better reflect the nature of IASDR in networking between member societies and in networking globally for the benefit of the design research community in general. Please disseminate this information as widely as possible.

iasdr.net

________________________________________________________________

2-5 SEPTEMBER 2019 - IASDR CONGRESS 2019 - A DATE FOR DIARIES

IASDR is pleased to announce that the 2019 IASDR biennial Congress will be hosted by Manchester Metropolitan University, Manchester, UK, from 2nd to 5th September 2019. The lead organiser is Professor Martyn Evans, Head of the Manchester School of Art Research Centre.
Previous IASDR Congresses have been held in various locations worldwide, including Japan, Korea, Hong Kong, the Netherlands and Australia, and most recently in the USA. These conferences attract a large global audience, and proceedings are published online. Further details, including a link to the dedicated conference website, will be announced shortly.

________________________________________________________________

19-20 SEPTEMBER 2019 - INTERNATIONAL MIND CONFERENCE 2019

DESIGNING WITH AND FOR PEOPLE WITH DEMENTIA: WELLBEING, EMPOWERMENT AND HAPPINESS

International Conference 2019 of the MinD consortium, the DRS Special Interest Group on Behaviour Change and the DRS Special Interest Group on Wellbeing and Happiness

Venue: TU Dresden, Germany

Conference organisers: Christian Wölfel, Kristina Niedderer, Rebecca Cain, Geke Ludden

CONFERENCE THEME

MinD invites papers and design contributions for the first international MinD conference 2019 on Designing with and for People with Dementia. The conference will provide a trans-disciplinary forum for researchers, practitioners, end-users and policy makers from the design and health care professions to exchange and discuss new findings, approaches and methods for using design to improve dementia care and to support people with dementia and their carers.

With ca. 10.9 million people affected by dementia in Europe, with numbers set to double by 2050 (Prince, Guerchet and Prina 2013), with 20 million carers, and with no cure in sight, research into care to improve the quality of life of people with dementia is essential, to encourage and enable them to engage in activities that are in line with their interests and experiences (Alcove 2013; Alzheimer's Society 2013). Characterised by progressive memory and cognitive degeneration, people who are affected by Alzheimer's disease or other dementias often face cognitive, behavioural and psychosocial difficulties, including impairment and degeneration of memory and of perceptions of identity (Alcove 2013). As a result, many have reduced physical activity or social engagement, or are unable to work. Emotionally, this can lead to uncertainty, anxiety and depression, and a loss of sense of purpose.

In this light, it is becoming increasingly apparent that it is not just care that is required, but support for how to live well with dementia, whether in one's own home or in residential care. This includes managing one's own care and everyday tasks, as well as leisure activities and social engagement. Even small things, such as whether and when to go out or what to wear, can have important effects on people's sense of self, wellbeing, contentment and happiness. Key to this is having choices and the ability to decide. Acknowledging the agency of people with dementia, and understanding what can be done to support it, is therefore a key question.

Design-based non-pharmacological interventions are increasingly recognised as having great potential to help. Design can offer novel ways of complementing care and independent living to empower people with dementia in everyday situations, because of its ubiquitous nature and its affordances. Much focus has so far been on physical and cognitive tasks and on safe-keeping and reducing risks. For example, design can help accomplish physical tasks and offer guidance or reminders, e.g. for time or orientation, or provide alerts to behavioural changes.
While there are some approaches towards emotional and social aspects of living with dementia, more could and should be done to focus on enabling people with dementia and acknowledging their agency. Design can help to support social, leisure and creative activities. It can help empower people with dementia by offering choices and aiding decision-making. Design can support the individual person, or change the environment. This can take the form of a product, of systems or services, or of the built or natural environment. What is important is to use design to help reduce stigma and exclusion, and instead to improve wellbeing and social inclusion to create happiness.

While the aims may be clear, the way to achieve them still raises many questions about the best approaches, ways and methods. This conference therefore seeks to explore the manifold areas and approaches. This may include novel theoretical approaches, novel methods in design development or in working with and including end-users, or novel products, environments, services or systems. Or it may include novel ways of working, collaboration and co-operation. The key aim is to bring together and explore how we might impact positively and sustainably on the personal, social, cultural and economic factors within our communities to improve living with dementia.

To this end, we welcome a broad engagement with the field and invite submissions from a diverse range of researchers and practitioners from the various design and health disciplines, including product and interior design, craft, information and communication technologies, architecture and the built environment, psychiatry, psychology, geriatrics and others who make a relevant contribution to the field.

Themes may include, for example:

- Design approaches for the wellbeing/empowerment/happiness of elderly people
- Design approaches for the wellbeing/empowerment/happiness of people with mild cognitive impairment (MCI) or dementia
- New design frameworks and approaches for wellbeing/empowerment/happiness
- Mindful design approaches for wellbeing/empowerment/happiness
- Collaboration between designers, technologists, health professionals and people with lived experience
- Data collection with and by people with MCI/dementia
- Co-design & co-creation with people with MCI/dementia
- Evaluation of design with people with lived experience
- Evaluation of the impact of design on people with lived experience

CONTRIBUTIONS & SUBMISSION INFORMATION

MinD 2019 welcomes contributions in two formats:

1) Full papers: We invite the submission of full papers (3000-4000 words) by 1 February 2019. Papers are expected to offer new or challenging views on the subject, novel approaches, working methods or design interventions or ideas, or similar. Papers will be selected subject to a double-blind review process by an international review team. Papers will be reviewed for relevance/significance, novelty/originality, quality/rigour and clarity.

2) Design-based submissions: We invite the submission of designs in analogue or digital format, including e.g. physical artefacts, digital artefacts, and films/video. Contributions are expected to offer new or challenging ideas, novel approaches, working methods or design interventions, or similar. Submissions will be exhibited during and as part of the conference.
In the first instance, proposals should be submitted by 1 February 2019, including an image or visualisation and a verbal description of the design, and a 300-word statement of the underpinning research detailing its originality, significance and rigour. Design submissions will be selected subject to a double-blind review process by an international review team. Submissions will be reviewed for relevance/significance, novelty/originality, and quality. If selected, submissions are expected to arrive with the organisers by 15 August 2019, free of charge. Insurance is the responsibility of the author/designer.

Submission information: All contributions must be submitted by 1 February 2019 at the latest through the conference submission system, which you can access from the conference pages. Please check the authors' guidelines. For your convenience, we also provide templates for both paper and design submissions. For the full submission guidelines, authors' guidance notes and templates, as well as the link to the submission system, please follow the link to the conference website: www.mind4dementia.eu

Publication of conference submissions: Paper submissions will in the first instance be published as online proceedings, archived in an open access repository with a DOI number, and also available as an abstract/programme booklet and a memory stick with the proceedings. In a second step, paper authors will be invited to submit extended papers (6000-8000 words) for inclusion in a journal special issue. Available journals will be publicised on the conference website as soon as they are confirmed. Design submissions will be included in the abstract booklet and published in an online catalogue accompanying the exhibition.

KEY DATES

First call for papers: 1 July 2018
Online submission opens: 1 October 2018
Final date for full paper submissions: 1 February 2019
Final date for design proposal submissions: 1 February 2019
Delegate registration opens: 1 April 2019
Paper decision notifications: 1 May 2019
Early bird registration closes: 1 June 2019
Camera-ready papers submission: 15 June 2019
Late registration closes: 15 August 2019
Conference: 19-20 September 2019

www.mind4dementia.eu

________________________________________________________________

CALLS

Design and Culture special issue: Design & Neoliberalism

This special issue of *Design and Culture* examines the ways in which neoliberalism has both expanded and constricted the purview of design across multiple disciplines, including (but not limited to) product design, interaction design, graphic design, advertising, branding, fashion, digital media, experience design, web design, architecture, furniture, and other adjacent areas of inquiry and practice. This call for papers seeks submissions that engage global perspectives on the intersections between design and neoliberalism across this wide variety of design and design-related fields. Of particular interest are submissions engaging historical perspectives, the context of the Global South, and questions of labor.

Neoliberalism has emerged over the past decade or so as a totalizing conceptual apparatus for understanding a wide array of contemporary phenomena.
Whether understood politically as a system of governance that submits all functions to the authority of market directives, economically as the financialization of capitalism, or socially as the erosion of collective institutions, neoliberalism has impacted cultural production in myriad ways. Design, when analyzed critically, has often been portrayed as complicit if not synonymous with these transformations. As Guy Julier has observed, "Design takes advantage of and normalizes the transformations that neoliberalism provokes" (Julier 2014). That is to say, design practices in this context not only organize themselves according to neoliberal political, economic, and social goals and systems, but also promote neoliberal structures and values.

Much existing work on the intersection between neoliberalism and design focuses upon the fields of architecture and urbanism, as well as humanitarian design and design activism. This issue seeks to examine connections between design and neoliberalism that have yet to be explored. How have neoliberal economic policies shaped and constrained design, and how has design contributed to the financialization of previously uncommodified sectors of life? How has design adapted to the increasing proliferation of global networks of exchange? In what ways has design discourse intersected with neoliberal ideologies about work, value, creativity, experience, politics, institutions, etc.?

Additional topics for consideration may include, but are not limited to:

- Historical convergences and/or divergences of design and neoliberalism
- Design and globalization and/or nationalism
- Neoliberal design ideologies in the context of international development
- Race and racism at the intersection of design and neoliberalism
- Discourses of innovation and design thinking
- Design and labor and/or class
- The coalescence of design and business in both the academy and industry
- Conflicts and convergences between neoliberal design and modernist traditions
- Indigenous design in the context of neoliberalism
- Design, neoliberalism, and postcoloniality
- Challenges to neoliberal design ideologies and practices
- Neoliberalism and design pedagogy

Submission deadline: November 30, 2018

Manuscripts should be between 5,000 and 7,000 words long, including notes and references, and may include 4-8 images. For additional submission guidelines, please visit: http://designandculture.org/page/submissions. All manuscripts will be externally reviewed and should be submitted through Design and Culture's online portal: http://www.designandcultureadmin.org/index.php/dc/login. After submitting, please email the title of your paper to the guest editors: Arden Stern (arden.stern@artcenter.edu) and Sami Siegelbaum (samisiegelbaum@gmail.com).

15-16 November 2018 - Design Thinking Research Symposium 12

The Design Thinking Research Symposium (DTRS) series brings together international academics with a shared interest in design thinking and design studies, coming from a diversity of disciplines. On several occasions, DTRS organizers have shared a dataset (typically video-based data with protocol transcripts) with symposium participants for distributed analysis, with each participating research team using their preferred methodology and addressing their theoretical interests.
This data-sharing approach was initiated in the seminal "Delft Protocol Workshop" (now also labeled DTRS2), which was organized by Kees Dorst, Nigel Cross, and Henri Christiaans in 1994, where verbal protocol data was collected from professional designers in a controlled context. DTRS7, organized by Janet McDonnell and Peter Lloyd, involved professional designers (architects and engineers) working in their natural habitats, and DTRS10, organized by Robin Adams, involved design review conversations in a design education setting. For DTRS11, organized by Bo Christensen, Linden Ball and Kim Halskov, the dataset comprised extensive in situ video-based data of everyday professional design team activity traced longitudinally, notably involving cross-cultural co-creation with users. The publications stemming from DTRS11 are coming out now in the form of journal special issues of Design Studies and CoDesign, and a book publication (see below for references).

CALL FOR PAPERS

The Design Thinking Research Symposium 12 (DTRS12) takes place 15-16 November 2018 at Ulsan National Institute of Science and Technology (UNIST), South Korea, and is organized by Henri Christiaans. The theme of DTRS12 is: Tech-centered Design Thinking: Perspectives from a Rising Asia.

True to tradition, DTRS12 invites international academics and researchers with a shared interest in design thinking to study a shared dataset and come up with their own perspectives and insights. As with past DTRS symposia, the shared dataset (covering workshops with Korean companies, and interviews with Korean academics and designers) provides a common frame of reference. Compared to earlier protocol analysis studies in DTRS, the data might be less rich in terms of solving design problems on the spot. The ultimate goal is that you, as a design researcher, produce analyses of the data that help both industry and the research community in understanding Design Thinking as applied in that industry, and that contribute to the further development of design methods. What we expect from you as a participant is that you will use the dataset for cross-cultural comparisons, based on your experience and your work for or with industry. With so many perspectives from all over the world, DTRS12 will be a very promising and interesting confrontation.

Please contact Jina Yoon (jinayoon@unist.ac.kr) for further information and for expressions of interest in participating in DTRS12 (with or without a paper).

Upcoming deadlines: 31 September: submission of draft paper.

23-24 January 2019 - Call for Papers / Makers

ABSTRACT SUBMISSION DEADLINE EXTENDED

Call for Papers / Call for Makers
Futurescan 4: Valuing Practice
University of Bolton, UK

Fashion and textiles practice intersects traditional processes and innovative technologies. Tacit knowledge acquired through hand skills, making, utilising equipment and working with processes is fundamental to developing understanding. Although practical learning is valued, the teaching of creative and making subjects is under threat in formal education. Within the fashion and textile industries there are skills shortages. Heritage crafts risk being lost as digital technologies and automation impact upon future generations.

The Association of Fashion & Textile Courses (FTC) invites submissions for its forthcoming conference Futurescan 4: Valuing Practice, which provides an international forum for the dissemination of research, creative practice and pedagogy surrounding fashion and textiles.
Submissions are encouraged from established and early career researchers, postgraduates, practitioners, makers and educators regarding completed projects or work in progress under the following topics:

- Valuing Artisan Skills, Drawing and Making
- Learning from History, Tradition and Industry
- Collaborating and Cross-disciplinary Working
- Integrating and Connecting Digital Technologies
- Designing Responsibly and Working Sustainably
- Promoting Diversity, Employability and Community
- Investigating Creative Processes and Pedagogy

Contributors can select from the following submission formats:

Full Paper: 20-minute conference presentation
Short Paper: 10-minute conference presentation
Exhibition: examples of practice-based work

For all submission formats, please upload a 200-300 word abstract and biography (200 words max) to: futurescan4.exordo.com (you will be required to set up an account first). You can also upload images (5 max - jpeg, tiff, png, bmp) to accompany your abstract. For exhibits of practice-based work, please include images, provide dimensions of work and suggest methods of display, i.e. wall-mounted, free-standing, digital. All abstracts will be double-blind peer reviewed.

For conference enquiries please email: chair@ftc-online.org.uk
http://www.ftc-online.org.uk/futurescan-4-conference/

Key Dates

EXTENDED closing date for abstracts - 30th July 2018
Acceptance and Feedback - 7th September 2018
Presentations and Exhibits submitted - 16th January 2019
Futurescan 4 Conference - 23rd-24th January 2019
Conference Paper / Article Submission - 12th April 2019

Online Publication

Abstracts, selected conference papers and exhibited work will be published online, with ISSN, by the FTC.

Full Paper: 3000-5000 words
Short Paper: 1500-3000 words
Exhibition Report: 1500-3000 words

Associated Journals

We are delighted to announce that articles formed from conference presentations can be submitted to the following associated conference journals for consideration:

Fashion Practice: Design, Creative Process & the Fashion Industry
Journal of Textile Design Research and Practice
Art, Design and Communication in Higher Education

Articles for journal submission will be subject to journal peer review processes and must comply with the relevant journal publication guidelines.

Artifact: Journal of Design Practice

Artifact: Journal of Design Practice aims to publish high-quality academic papers focused on practice-based design research that explores conditions, issues, developments and tasks pertaining to design in a broad sense. As an international design research journal, Artifact targets the global design research community with the aim of strengthening knowledge sharing and theory building of relevance to design practice. All articles and research notes are subject to double-blind peer review. The journal is cross-disciplinary in scope and welcomes contributions from all fields of design research, including product design and visual communication, user experience, interface, and service design, as well as design management and organization. The editors welcome both conceptual and empirical papers.

All submissions must include a signed Open Access publishing agreement giving us your permission to publish your paper should it be accepted by our peer review panel. This journal does not charge APCs or submission charges. Until further notice, contributions should be submitted by e-mail to editor Nicky Nedergaard: nned@kadk.dk.
https://www.intellectbooks.co.uk/journals/view-Journal,id=255

Dialectic journal

Call for submissions/papers for possible publication in Dialectic, the scholarly journal of the AIGA Design Educators Community.

Authors are invited to submit works for the FIFTH issue (volume 3, issue 1) of Dialectic, a biannual journal devoted to the critical and creative examination of issues that affect design education, research, and inquiry. Michigan Publishing, the hub of scholarly publishing at the University of Michigan, is publishing Dialectic on behalf of the AIGA Design Educators Community (DEC). The fifth issue will be published between March 15 and April 15, 2019. The deadline for full versions of papers and visual narratives written and/or designed that meet Dialectic Issue 05's categorical descriptions (see below) is: 5:00 pm CDT, Friday, July 27, 2018.

Dialectic's fifth issue seeks papers and visual narratives that critically examine, interrogate or reveal how and why design processes informed by various aspects of making have affected (or should affect) the workings of complex systems wherein people actively participate in "generating the content and quality of [their] experiences" (excerpted from Armstrong, H., Blume, M., Chochinov, A., Davis, M. et al., The AIGA Designer of 2025, published by AIGA, NY, NY, USA, 2017). Papers and visual narratives that explore design's evolution from being rooted in the making of artifacts and messages to its expansion into making more human-centered endeavors rooted in experiences, services, interactions and even public policies are welcomed. Submissions are also encouraged that effectively document how design processes can or should affect collaborations that involve broadly informed, egalitarian conversations-cum-collaborations.

Dialectic's Editorial Board hopes that AIGA DEC MAKE conference attendees will consider submitting papers based on their conference presentations and Proceedings publications. We also invite other design educators, researchers and practitioners who wish to share scholarship, research or criticism that aligns with the themes described above to submit their work for possible publication in our fifth issue. Authors planning to contribute to this issue, be they conference attendees or others, are reminded that their work should be framed in one of the submission types described in the categorical descriptions section that appears later in this communiqué. All submitters are hereby notified that all work we publish MUST satisfy our editorial guidelines (https://quod.lib.umich.edu/d/dialectic/policies-guidelines), and MUST ABIDE BY FORMAL PARAMETERS SUCH AS WORD COUNTS (see below). Each piece that Dialectic will publish must be based on fundamentally sound scholarship and inquiry, and be written or designed so that it is broadly accessible, and focused on topics relevant to our audiences.

Questions to shape submissions for possible publication in Dialectic Issue 05

The fifth issue of Dialectic seeks papers and visual essays/narratives of interest to a diverse audience of design educators and practitioners. Example prompts for authors include (but are not limited to):

How can research informed by design effectively guide knowledge construction and understanding that help diverse groups effectively facilitate negotiation, especially when agendas conflict?

How can design decision-making processes effectively inform and guide the collaboration and management of interdisciplinary teams?
How can designers best initiate and sustain roles for themselves as curators of events that occur across digitally mediated environments in ways that foster community building?

How can designers design, operationalize and analyze their making processes, and then share knowledge derived from these, to help fuel critical thinking and overcome narrowly informed assumptions and biases?

How can designers involve collaborators from outside design in projects and initiatives that help organizations evolve working practices and procedures from where they are now (and have been) to where they wish (and need) to go in the future?

Dialectic's web address for submissions: https://dialectic.submittable.com/submit

Submitters are hereby advised to peruse the contents of the entire Dialectic website to ensure that their submissions meet ALL of Dialectic's criteria for publication BEFORE they submit work for consideration. Reading the rest of this communiqué CAREFULLY and THOROUGHLY is also STRONGLY encouraged. All submissions to Dialectic MUST be made through the Submittable website hosted by Michigan Publishing listed above. Please DO NOT attempt to send any type of submission as an e-mail attachment to any of Dialectic's Editorial Board members, its Producer, its AIGA DEC liaisons, or members of its Advisory Committee. Instructions for formatting ALL types of submissions are embedded (per category) in the Submittable website. Submissions that are NOT formatted according to these instructions will be rejected. All submissions must be created in keeping with the editorial policy of Dialectic, which is articulated here: http://quod.lib.umich.edu/d/dialectic/policies-guidelines.

Categorical descriptions of the type of content Dialectic publishes

Dialectic will publish visual essays/narratives and papers that satisfy the following categorical descriptions:

Original visual essays/visually based narratives/visual storytelling: Dialectic invites submissions from designers or teams of designers that are comprised primarily or solely of imagery (photography and/or illustrations), typographic structures, type-as-image, or some combination of these that visually communicate one or more types of narrative/storytelling. The logistical criteria specified in the Illustrations, Graphics, and Photos section of the 2016-17 Submissions Guidelines for Dialectic document must be met (re: image resolutions, physical sizes, bleeds, etc.), and submissions that are assessed by the Editorial Board and/or external reviewers to be visually compelling and conceptually provocative will be considered for publication, pending the availability of page space in a given issue.

Research papers (3,000 to 6,500 words): These articles will recount how designers and design teams identified a situation that was problematic, formulated and conducted research to understand the various factors, conditions and people involved that were affecting the situation, and then used their analysis of the data gathered from this research to guide design decision-making toward improving this situation. This type of writing should be grounded in evidentiary processes, and should clearly explicate a hypothesis, as well as posit and support a methodology and some form of a measurable data set.

Long-form case study reports or case series reports (3,000 to 6,500 words): These articles will describe how a particular person, group, project, event, experience or situation has been studied and analyzed, using one or more methods, during a specific span of time.
These contributions should posit insights that exist as logical subsets of a larger category, and that are at least tangentially generalizable to the category. A case series report collectively describes how a group of individuals have responded to a particular type of treatment, experience or interaction. They can be used to help analyze and assess the responses of a cross-section of individual users to one or more iterations of an interface design, or an environmental graphics or wayfinding system, or a series of data visualizations.

Position papers (2,000 to 4,500 words): These essays will present the readership of Dialectic with an opinion (of the author, or of a specified group of people or organization) about an issue or set of issues, in a way or ways that make particular values, and the belief systems that guide them, known.

Design criticism (as long-form essays of between 2,000 and 3,000 words): The goal of these pieces is to critically analyze design decision-making, and the effects that making and using what has been designed have on the operation and evolution of social, technological, economic, environmental and political systems.

Reviews of books, exhibitions, conferences, etc. (750 to 1,500 words): These shorter articles are written to critically analyze the efficacy of the structure, content, style, and relative merit of their particular subjects, in ways that combine the author's personal reactions and arguments with his/her assessment of how effectively the subject fulfilled or failed in its purpose.

Survey papers (2,000 to 3,000 words): These pieces are written to clearly summarize, organize, and analyze a select, topical grouping of scholarly articles, research papers, or case studies in a way that integrates and adds to the understanding of the work in a given discipline or field of study.

Theoretical speculations (3,000 to 6,500 words): These contributions will consist of attempts by their authors to explain a particular phenomenon, set of circumstances, or situational construct based on their ability to utilize observations rather than hard evidence to fuel speculative thoughts and suppositions. These contributions should be grounded in a viable paradigm, or use theory as a viable justification for what has been observed, and should be internally coherent and advance logical conclusions.

Editorial responses from Dialectic readers (750 to 1,200 words): Dialectic encourages its readers to submit critical responses to specific articles, editorials, or visual pieces that have been published in previous issues. Authors are also welcome to bring any issues that they believe are pertinent to the attention of Dialectic's readership. Editorial commentary relative to specific published articles and pieces will be sent to their author(s) so they can respond.

Important dates:

The deadline for full versions of papers and visual narratives written and/or designed that meet Dialectic Issue 05's categorical descriptions is: 5:00 pm CDT, Friday, July 27, 2018.
Initial/desk reviews of submissions to Dialectic Issue 05 complete: August 20, 2018
External reviews of submissions to Dialectic Issue 05 complete: October 1, 2018
Authors' responses/revisions to external reviewers' suggestions re: their manuscripts due: October 29, 2018
Dialectic Issue 05 published: March 15 to April 15, 2019

https://dialectic.submittable.com

Call for Papers in French / Special Issue "Sustainable Development" / Sciences du Design

Founded in 2015, *Sciences du Design* is a peer-reviewed international French-language design research journal published by the Presses Universitaires de France. Non-specialist and pluralistic, it explores all aspects of design and aims to offer an open international forum for design researchers and practitioners. The journal welcomes French-speaking design research as well as international design research submitted in French.

The journal has just published a call for papers for the *Special Issue 09 "Sustainable Development"*, to be published in Spring 2019. We are proud to announce that it is co-edited by *Gavin Melles (Swinburne University, Australia)* and *Susana Paixão-Barradas (Kedge Design School, France)*. Submissions are open to any design researcher or practitioner worldwide, *on condition that the work is submitted in French*.

July 20, 2018: deadline for sending your abstract (300 words)
November 15, 2018: deadline for sending your full paper
May 2019: release of both print and online versions

Read the full call here (in French): http://www.sciences-du-design.org/index.php/sdd/announcement/view/2

More about the journal (in English): http://www.sciences-du-design.org/index.php/sdd/navigationMenu/view/english

________________________________________________________________

ANNOUNCEMENTS

21-22 September 2018 - Brand Design Conference

The registration system for the International Brand Design Conference is now open via our website: www.branddesign2018.net

- Our Early-Bird offer ends on 26 AUG.
- MA/PhD students must provide evidence: after registering, please send to info@branddesign2018.net either your MA/PhD acceptance letter, your enrollment receipt, or your student card, making sure 1) the course name, 2) the academic year and 3) your level of studies are easily identifiable.
- Please make sure you keep safe the confirmation email sent by UWL Shop.

The Brand Design Conference will take place at the University of West London (UWL), 21 and 22 September, as part of the London Design Festival and London Design Biennale.

www.branddesign2018.net

Publication of Journal of Peer Production Special Issue 12: Makerspaces and Institutions!

We are very pleased to announce the release of the much-anticipated 12th Special Issue of the open access Journal of Peer Production: Makerspaces and Institutions. Makerspaces are subjects in a plurality of institutional advances and developments, catching the imaginations of a wide variety of organisations and other actors drawn to a buzz of enticing possibilities.
Depending upon the nature of the encounter, makerspaces are becoming cradles for entrepreneurship, innovators in education, nodes in open hardware networks, studios for digital artistry, ciphers for social change, prototyping shops for manufacturers, remanufacturing hubs in circular economies, twenty-first century libraries, emblematic anticipations of commons-based, peer-produced post-capitalism, workshops for hacking technology and its politics, laboratories for smart urbanism, galleries for hands-on explorations in material culture... not forgetting, of course, spaces for simply having fun.

What kinds of hybrid arrangements emerge through these encounters, and what becomes of the occupied factories for peer production theory? How are institutions reshaping aspirations for autonomous, even democratic, fabrication and experimentation, aspirations that were and are important parts of makerspace narratives? And what do these encounters mean for institutions, whether in education, culture, business, development or some other sphere; how are they too evolving through their exposure to grassroots and community making practices?

This is a mega issue, exploring institutional developments in all their complexity through 13 research articles (each of which has been peer reviewed and revised through the Journal's particularly transparent process, which makes all review steps public) and 7 practitioner contributions from key leaders working in the field. Please take a look, tell us what you think, and help us spread the discussions through your networks. This project is the result of a long labour of love for the many makers and thinkers involved, and we look forward to hearing your thoughts.

http://peerproduction.net/issues/issue-12-makerspaces-and-institutions/

Airea Journal | First issue now published

We are delighted to announce that the first issue of Airea (Journal of Arts and Interdisciplinary Research), "Computational tools and digital methods in creative practices", is now published.

Airea is a peer-reviewed, open-access, interdisciplinary journal that acts as a channel of communication between artists and practices, concepts and tools. It is hosted by Edinburgh University Library Open Journals. Our first issue investigates creative practices at the intersection of art and digital technology. The selected papers reflect on key practical and philosophical challenges that contribute to the broader discussion of what it means to use digital tools as a form of artistic inquiry.

You are welcome to follow us on Twitter @siren_eca and register with our journal at http://journals.ed.ac.uk/airea for further announcements, publications, and calls for papers. Moving forward, future issues will ask how spaces, methods, practitioners, and audiences will adapt to increased technological mediation, and will document the practices that emerge from the interdisciplinary condition of creative processes.

http://journals.ed.ac.uk/airea/issue/view/236

Developing Countries - Resources online --- IFORS

The aim of the IFORS Developing Countries On-Line Resources page is to offer the OR worker all publicly-available materials on the topic of OR for Development. It also aims to provide a venue for people who are working in the area to share their completed or in-process work, learn from others, and stimulate comments and discussions on the work.
Regarding the IFORS Developing Countries OR resources website, its regular updates, and your possible submission of "free" (not copyright-protected) material, you might occasionally visit http://ifors.org/developing_countries/index.php?title=Main_Page.

"Operational Research" (OR) is the discipline of applying advanced analytical methods to help make better decisions. By using techniques such as problem structuring methods and mathematical modelling to analyze complex situations, Operational Research gives executives the power to make more effective decisions and build more productive systems. The International Federation of Operational Research Societies (IFORS; http://ifors.org/) is an almost 60-year-old organization which is currently composed of 51 national societies. Regional groups of IFORS are: ALIO (The Latin American Ibero Association on Operations Research), APORS (The Association of Asian-Pacific Operational Research Societies), EURO (The Association of European Operational Research Societies), and NORAM (The Association of North American Operations Research Societies). IFORS conferences take place every three years; IFORS 2017 was successfully held in Quebec City, Canada.

http://ifors.org/developing_countries/index.php?title=Main_Page

________________________________________________________________

SEARCHING DESIGN RESEARCH NEWS

Searching back issues of DRN is best done through the customisable JISC search engine at: http://www.jiscmail.ac.uk/design-research
Look under 'Search Archives'.

________________________________________________________________

SERVICES

o Design Research News communicates news about design research throughout the world. It is emailed approximately monthly and is free of charge. You may subscribe or unsubscribe at the following site: http://www.jiscmail.ac.uk/lists/design-research.html

o Design Studies is the International Journal for Design Research in Engineering, Architecture, Products and Systems, which is published in co-operation with the Design Research Society. DRS members can subscribe to the journal at special rates. https://www.journals.elsevier.com/design-studies

________________________________________________________________

CONTRIBUTIONS

Information to the editor: David Durling, Professor of Design Research, Coventry University, UK.

PLEASE NOTE: contributions should be sent as plain text in the body of an email. Do not send attachments. Do not copy and paste from Word documents.

________________________________________________________________

To unsubscribe from the DESIGN-RESEARCH list, click the following link:
https://www.jiscmail.ac.uk/cgi-bin/webadmin?SUBED1=DESIGN-RESEARCH&A=1
From glassman.13@osu.edu Sat Jul 14 16:44:42 2018
From: glassman.13@osu.edu (Glassman, Michael)
Date: Sat, 14 Jul 2018 23:44:42 +0000
Subject: [Xmca-l] Re: Interesting article on robots and social learning
In-Reply-To: <7c142464-a2b2-ede1-e258-388e449e10f6@marxists.org>
References: <3B91542B0D4F274D871B38AA48E991F953B2B847@CIO-KRC-D1MBX04.osuad.osu.edu> <1860198877.3850789.1531537929986@mail.yahoo.com> <7c142464-a2b2-ede1-e258-388e449e10f6@marxists.org>
Message-ID: <3B91542B0D4F274D871B38AA48E991F953B3E4C1@CIO-KRC-D1MBX04.osuad.osu.edu>

The Turing test, at least the test he wrote about in his article, is actually a bit more complicated than this, and especially poignant today. Turing's test of whether computers are acting as human was based on an old English game show called The Lying Game (I suppose one of the reasons for the title of the movie on Turing, though of course it had multiple meanings; but for some reason they never mentioned the origin of the phrase in the movie). Anyway, in the lying game the contestant had to listen to two individuals, one of whom was telling the truth about the situation and one of whom was lying. The way Turing describes it, it sounds quite brutal. The contestant had to figure out who the liar was (there was a similar, much milder version years later in the US). Turing's proposal, if I remember correctly, was that a computer could be considered to be thinking like a human if the computer the contestant was listening to was lying and he or she couldn't tell. In essence, the computer would successfully lie. Everybody thinks Turing believed that computers would eventually think like humans, but my reading of the article was that he had no idea; as computers stood at the time, though, there was no chance.

The reason this is so poignant is the Mueller indictments that came down yesterday. For those outside the U.S. or not following the news, the indictments were against Russian military officers leading a scheme to convince individuals of lies about various actors in the 2016 election (along with timed releases of information and break-ins to voting systems). But it is the propagation of lies by robots, and people believing them, that interests me. I feel like we aren't putting enough thought into that. Many of the people receiving the information could not tell it was not from humans, and believed it even though in many cases it was generated by robots, passing, it seems to me, Turing's test. How and why did this happen? Of course Turing died before the Internet, so he couldn't have known about it. But I wonder if part of the reason the robots were successful is that they have the ability to mine, collect and aggregate people's biases and then reflect them back to us. We tend to engage with, and believe, things in the context of our own biases. They say in salesmanship that the trick is figuring out what people want to hear and then couching whatever you want to say in that. Trump is a master of reading what a group of people want to hear at the moment, their biases, and then mirroring it back to them.

If we went back to the Chinese room, and the person inside was able to read our biases from our messages, would they then be human?

We live in a strange age.
From: xmca-l-bounces@mailman.ucsd.edu On Behalf Of Andy Blunden
Sent: Saturday, July 14, 2018 8:58 AM
To: xmca-l@mailman.ucsd.edu
Subject: [Xmca-l] Re: Interesting article on robots and social learning

I understand that the Turing Test is one which AI people can use to measure the success of their AI - if you can't tell the difference between a computer and a human interaction, then the computer has passed the Turing test. I tend to rely on a kind of anti-Turing Test, that is, that if you can tell the difference between the computer and the human interaction, then you have passed the anti-Turing test, that is, you know something about humans.

Andy

Andy Blunden
http://www.ethicalpolitics.org/ablunden/index.htm

On 14/07/2018 1:12 PM, Douglas Williams wrote:

Hi--

I think I'll come out of lurking for this one. Actually, what you're talking about with this pain algorithm system sounds like a modeling system that someone might need to develop what Alan Turing described as a P-type computing device. A P-type computer would receive its programming from inputs of pleasure and pain. It was probably derived from reading some of the behaviorist models of mind at the time. Turing thought that he was probably pretty close to being able to develop such a computing device, which, because its input was similar, could model human thought. The Eliza Rogerian-analysis computer program was another early idea, in which the goal was to model the patterns of human interaction, and gradually approach closer to human thought and interaction that way. And by the 2000s, the idea of the "singularity" was afloat, in which one could model human minds so well as to enable a human to be uploaded into a computer, and live forever as software (Kurzweil, 2005). But given that we barely had a sufficient model of mind to say Boo with at the time (what is consciousness? where does intention come from? what is the balance of nature/nurture in motivation? speech utterances? and so on), and you're right, AI doesn't have much of a theory of emotion, either--the goal of computer software modeling human thought seemed very far away to me.

At someone's request, I wrote a rather whimsical paper called "What is Artificial Intelligence?" back in 2006 about such things. My argument was that statistical modeling of human interaction and capturing thought was not so easy after all, precisely because of the parts of mind we don't think of, and the social interactions that, at the time, were not a primary focus. I mused about that in the context of my trying to write a computer program by applying Chomsky's syntactic structures to interpret the intention of a few simple questions--without, alas, in my case, a corpus-supported Markov chain logic to do it. Generative grammar would take care of it, right? Wrong.

So as someone who had done a little primitive, incompetent attempt at speech modeling myself, and in the light of my later-acquired knowledge of CHAT, Burke, Bakhtin, Mead, and various other people in different fields, and of the tendency of people to interact with the world through cognitive biases, complexes, and embodied perceptions that were not readily available to artificial systems, I didn't think the singularity was so near.

The terrible thing about computer programs is that they do just what you tell them to do, and no more. They have no drive to improve, except as programmed. When they do improve, their creativity is limited.
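Turing's P-type idea maps fairly directly onto what we now call reinforcement learning. As a concrete illustration, here is a toy sketch in Python of a learner shaped purely by scalar pleasure/pain signals; the environment, the action set, and all the numbers are invented for illustration, and are not from Turing or from any system discussed in this thread. What it shows is exactly the worry above: behaviour converges on the rewarded action without anything we would want to call understanding.

# Toy sketch of a "P-type"-style learner: behaviour shaped only by
# scalar pleasure/pain signals (an epsilon-greedy bandit learner).
# All names and numbers here are illustrative assumptions.
import random

n_actions = 3
value = [0.0] * n_actions   # running estimate of each action's payoff
counts = [0] * n_actions

def reward(action):
    # Stand-in environment: action 2 is "pleasurable", the rest "painful".
    return 1.0 if action == 2 else -1.0

for step in range(1000):
    # Mostly exploit the best-looking action; occasionally explore.
    if random.random() < 0.1:
        a = random.randrange(n_actions)
    else:
        a = max(range(n_actions), key=lambda i: value[i])
    r = reward(a)
    counts[a] += 1
    value[a] += (r - value[a]) / counts[a]   # incremental mean update

print(value)   # converges toward roughly [-1, -1, +1]: habit, not comprehension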
And the approach now is still substantially pattern-recognition based. The current paradigm for speech recognition is something called Convolutional Neural Network / Long Short-Term Memory networks (CNN/LSTM), in which the convolutional neural networks reduce the variation of speech input into manageable patterns, and the LSTMs handle temporal processing (the temporal patterns of the real-world phenomena to which the AI system is responding). But while such systems combined with natural language processing can increasingly mimic human response, and "learn" on their own, and while they are approaching the "weak" form of artificial general intelligence (AGI), the intelligence needed for a machine to perform any intellectual task that a human being can, they are an awfully long way from "strong" AGI--that is, something approaching human consciousness. I think that's because they are a long way from capturing the kind of social embeddedness of almost all animal behavior, and the sense in which human cognition is embedded in the messy things, like emotion. A computer algorithm can recognize the patterns of emotion, but that's it. An AGI system that can experience emotions, or have motivation, is quite another thing entirely.

I can tell you that AI confidence is still there. In raising questions about cultural and physical embodiment in artificial intelligence interactions with someone in the field recently, he dismissed the idea as being that relevant. His thought was that "what I find essential is that we acknowledge that there's no obvious evidence supporting that the current paradigm of CNN/LSTM under various reinforcement algorithms isn't enough for AGI and in particular for broad animal-like intelligence like that of ravens and dogs."

But ravens and dogs are embedded in social interaction, in intentionality, in consciousness--qualitatively different than ours, maybe, but there. Dogs don't do what you ask them to, always. When they do things, they do them for their own intentionality, which may be to please you, or may be to do something you never asked the dog to do, which is either inherent in its nature, or an expression of social interactions with you or others, many of which you and they may not be consciously aware of. The deep structure of metaphor, the spatiotemporal relations of language that Langacker describes as being necessary for construal, the worlds of narrativized experience, are mostly outside of the reckoning, so far as I know (though I'm not an expert--I could be at least partly wrong), of the current CNN/LSTM paradigm.

My old interlocutor in thinking about my language program, Noam Chomsky, has been a pretty sharp critic of the pattern recognition approach to artificial intelligence.

Here's Chomsky's take on the idea:
http://languagelog.ldc.upenn.edu/myl/PinkerChomskyMIT.html

And here's Peter Norvig's response; he's a director of research at Google, where Kurzweil is, and where, I assume, they are as close to the strong version of artificial general intelligence as anyone out there...
http://norvig.com/chomsky.html
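For readers who want to see roughly what that CNN/LSTM paradigm looks like on the page, here is a minimal sketch in Python using the Keras API. The input shape (a log-mel spectrogram), the layer sizes, and the word-classification task are illustrative assumptions, not details of any particular system discussed here; the point is only the division of labour, with convolutions compressing local spectral patterns and an LSTM modelling their order in time.

# Minimal CNN/LSTM sketch for spectrogram classification (Keras API).
# Shapes, sizes, and the task itself are placeholder assumptions.
from tensorflow.keras import layers, models

def build_cnn_lstm(n_frames=200, n_mel_bands=80, n_classes=30):
    inputs = layers.Input(shape=(n_frames, n_mel_bands, 1))
    # Convolutions: reduce variable local spectral detail to feature maps.
    x = layers.Conv2D(32, (3, 3), activation="relu", padding="same")(inputs)
    x = layers.MaxPooling2D(pool_size=(1, 2))(x)
    x = layers.Conv2D(64, (3, 3), activation="relu", padding="same")(x)
    x = layers.MaxPooling2D(pool_size=(1, 2))(x)
    # Flatten the frequency axis so each time frame becomes one feature vector.
    x = layers.Reshape((n_frames, -1))(x)
    # LSTM: model the temporal order of those per-frame features.
    x = layers.LSTM(128)(x)
    outputs = layers.Dense(n_classes, activation="softmax")(x)
    return models.Model(inputs, outputs)

model = build_cnn_lstm()
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()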
Frankly, I would be quite interested in what you think of these things. I'm merely an Isaiah Berlin fox, chasing to and fro at all the pretty ideas out there. But you, many of you, are, I suspect, the untapped hedgehogs whose ideas on these things would see more readily what I dimly grasp must be required, not just for achieving a strong AGI, but for achieving something that we would see as an ethical, reasonable artificial mind that expands human experience, rather than becomes a prison that reduces human interactions to its own level.

My own thinking is that lately, Cognitive Metaphor Theory (CMT), which I knew more of in its earlier (now "standard model") days, is getting even more interesting than it was. I'd done a transfer term to UC Berkeley to study with George Lakoff, but we didn't hit it off well; perhaps I kept asking him questions about social embeddedness, and similarities to Vygotsky's theory of complex thought, and was too expressive about my interest in linking out from his approach rather than folding in. It seems that the idea I was rather woolily suggesting to Lakoff back then has caught on: namely, that utterances could be explored for cultural variation and historical embeddedness, a form of social context to the narratives and metaphors and blended spaces that underlie speech utterances and thought; that there was a degree of social embodiment as well as physiological embodiment through which language operated. I thought then, and it looks like some other people now are thinking, that someone seeking to understand utterances (as a strong AGI system would need to do) really would need to engage in internalizing and ventriloquising a form of Geertz's thick description of interactions. In such forms, words do not mean what they say, and can have different affect that is a bit more complex than I think temporal processing currently addresses.

I think these are the kind of things that artificial intelligence would need truly to advance, and that Bakhtin and Vygotsky and Leont'ev and, in the visual world, Eisenstein were addressing all along...

And, of course, you guys.

Regards,
Douglas Williams

On Tuesday, July 3, 2018, 10:35:45 AM PDT, David H Kirshner wrote:

The other side of the coin is that ineffable human experience is becoming more effable. Computers can now look at a human brain scan and determine the degree of subjectively experienced pain:

"In 2013, Tor Wager, a neuroscientist at the University of Colorado, Boulder, took the logical next step by creating an algorithm that could recognize pain's distinctive patterns; today, it can pick out brains in pain with more than ninety-five-per-cent accuracy. When the algorithm is asked to sort activation maps by apparent intensity, its ranking matches participants' subjective pain ratings. By analyzing neural activity, it can tell not just whether someone is in pain but also how intense the experience is."

So, perhaps the computer can't "feel our pain," but it can sure "sense our pain!"

Here's the full article:
https://www.newyorker.com/magazine/2018/07/02/the-neuroscience-of-pain

David
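The decoding move Kirshner describes can be sketched in a few lines: treat each activation map as a long feature vector and fit a regularized linear model against subjective pain ratings. The sketch below is a generic illustration with synthetic random data and standard scikit-learn tools; it shows the supervised-decoding idea only, and is not Wager's actual algorithm or data.

# Generic sketch of supervised pain decoding: predict a subjective
# rating from a brain activation map. The data here is synthetic noise.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_scans, n_voxels = 100, 5000                  # placeholder dimensions
X = rng.normal(size=(n_scans, n_voxels))       # one activation map per row
y = rng.uniform(0, 10, size=n_scans)           # self-reported pain, 0-10

decoder = Ridge(alpha=1.0)                     # regularized linear decoder
scores = cross_val_score(decoder, X, y, cv=5, scoring="r2")
print(f"cross-validated R^2: {scores.mean():.2f}")   # near zero on noise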
From: xmca-l-bounces@mailman.ucsd.edu On Behalf Of Glassman, Michael
Sent: Tuesday, July 3, 2018 8:16 AM
To: eXtended Mind, Culture, Activity
Subject: [Xmca-l] Re: Interesting article on robots and social learning

It seems like we are still having the same argument as when robots first came on the scene. In response to John McCarthy, who was claiming that eventually robots could have belief systems and motivations similar to humans, John Searle wrote the Chinese room. There have been a lot of responses to the Chinese room over the years, and a number of digital philosophers claim it is no longer salient, but I don't think anybody has ever effectively answered his central question.

Just a quick recap. You come to a closed door and know there is a person on the other side. To communicate, you decide to teach the person on the other side Chinese. You do this by continuously exchanging rule systems under the door. After a while you are able to have a conversation with the individual in perfect Chinese. But does that person actually know Chinese, just from the rule systems? I think Searle's major point is: are you really learning if you don't know why you're learning, or are you just repeating? Learning is embedded in the human condition, and the reason it works so well and is adaptable is that we understand it when we use what we learn in the world in response to others. To put it in terms of the post: does a bomb defusal robot really learn how to defuse a bomb if it does not know why it is doing it? It might cut the right wires at the right time, but it doesn't understand why, and therefore is not doing the task, just a series of steps it has been able to absorb. Is that the opposite of human learning?

What the researcher did really isn't that special at this point. Well, I definitely couldn't do it, and it is amazing, but it is in essence a miniature version of Libratus (which beat experts at Texas Hold 'em) and AlphaGo (which beat the second best Go player in the world). My guess is it is the same use of deep learning, in which the program integrates new information into what it is already capable of. If machines can learn from interacting with other humans, then they can learn from interacting with other machines. It is the same principle (though much, much simpler in this case). The question is what it means. Are we defining learning down because of the zeitgeist?

Greg started his post saying a socio-cultural theorist might be interested in this research. I wonder if they might be more likely to be the ones putting on the brakes, asking questions about it.

Michael

From: xmca-l-bounces@mailman.ucsd.edu On Behalf Of Andy Blunden
Sent: Tuesday, July 03, 2018 7:04 AM
To: xmca-l@mailman.ucsd.edu
Subject: [Xmca-l] Re: Interesting article on robots and social learning

Does a robot have "motivation"?

andy

Andy Blunden
http://www.ethicalpolitics.org/ablunden/index.htm

On 3/07/2018 5:28 PM, Rod Parker-Rees wrote:

Hi Greg,

What is most interesting to me about the understanding of learning which informs most AI projects is that it seems to assume that affect is irrelevant. The role of caring, liking, worrying etc. in social learning seems to be almost universally overlooked, because information is seen as something that can be "got" and "given" more than something that is distributed in relationships. Does anyone know about any AI projects which consider how machines might feel about what they learn?

All the best,

Rod

From: xmca-l-bounces@mailman.ucsd.edu On Behalf Of Greg Thompson
Sent: 03 July 2018 02:50
To: eXtended Mind, Culture, Activity
Subject: [Xmca-l] Interesting article on robots and social learning

I'm ambivalent about this project but I suspect that some young CHAT scholar out there could have a lot to contribute to a project like this one:
https://www.sapiens.org/column/machinations/artificial-intelligence-culture/

-Greg

--
Gregory A. Thompson, Ph.D.
Assistant Professor
Department of Anthropology
880 Spencer W. Kimball Tower
Brigham Young University
Provo, UT 84602
WEBSITE: greg.a.thompson.byu.edu
http://byu.academia.edu/GregoryThompson

From andyb@marxists.org Sat Jul 14 18:55:09 2018
From: andyb@marxists.org (Andy Blunden)
Date: Sun, 15 Jul 2018 11:55:09 +1000
Subject: [Xmca-l] Re: Interesting article on robots and social learning
In-Reply-To: <3B91542B0D4F274D871B38AA48E991F953B3E4C1@CIO-KRC-D1MBX04.osuad.osu.edu>
References: <3B91542B0D4F274D871B38AA48E991F953B2B847@CIO-KRC-D1MBX04.osuad.osu.edu> <1860198877.3850789.1531537929986@mail.yahoo.com> <7c142464-a2b2-ede1-e258-388e449e10f6@marxists.org> <3B91542B0D4F274D871B38AA48E991F953B3E4C1@CIO-KRC-D1MBX04.osuad.osu.edu>
Message-ID:

I think we go back to Martin's earlier ironic comment here, Michael.

Andy

Andy Blunden
http://www.ethicalpolitics.org/ablunden/index.htm
or > not following the news the indictments were against > Russian military leading a scheme to convince individuals > of lies about various actor in the 2016 election (also > times release of information and breaking in to voting > systems). But it is the propagation of lies by robots and > people believing them that interests me. I feel like we > aren?t putting enough thought into that. Many of the > people receiving the information could not tell it was no > from humans and believed it even though in many cases it > was generated by robots, passing it seems to me Turing?s > test. How and why did this happen? Of course Turing died > before the Internet so he couldn?t have known about it. > But I wonder if part of the reason the robots were > successful is that they have the ability to mine, collect > and aggregate people?s biases and then reflect them back > to us. We tend to engage, believe things in the contexts > of our own biases. They say in salesmanship that the > trick is figuring out what people want to here and then > couching whatever you want to see in that. Trump is a > master of reading what a group of people want to hear at > the moment, their biases, and then mirroring it back to them > > > > If we went back to the Chinese room and the person inside > was able to read our biases from our messages would they > then be human. > > > > We live in a strange age. > > > > *From:*xmca-l-bounces@mailman.ucsd.edu > *On Behalf Of *Andy Blunden > *Sent:* Saturday, July 14, 2018 8:58 AM > *To:* xmca-l@mailman.ucsd.edu > *Subject:* [Xmca-l] Re: Interesting article on robots and > social learning > > > > I understand that the Turing Test is one which AI people > can use to measure the success of their AI - if you can't > tell the difference between a computer and a human > interaction then the computer has passed the Turing test. > I tend to rely on a kind of anti-Turing Test, that is, > that if you can tell the difference between the computer > and the human interaction, then you have passed the > anti-Turing test, that is, you know something about humans. > > Andy > > ------------------------------------------------------------ > > Andy Blunden > http://www.ethicalpolitics.org/ablunden/index.htm > > On 14/07/2018 1:12 PM, Douglas Williams wrote: > > Hi-- > > I think I'll come out of lurking for this one. > Actually, what you're talking about with this pain > algorithm system sounds like a modeling system that > someone might need to develop what Alan Turing > described as a P-type computing device. A P-type > computer would receive its programming from inputs of > pleasure and pain. It was probably derived from > reading some of the behavioralist models of mind at > the time. Turing thought that he was probably pretty > close to being able to develop such a computing > device, which, because its input was similar, could > model human thought. The Eliza Rogersian analysis > computer program was another early idea in which the > goal was to model the patterns of human interaction, > and gradually approach closer to human thought and > interaction that way. And by the 2000's, the idea of > the "singularity" was afloat, in which one could model > human minds so well as to enable a human to be > uploaded into a computer, and live forever as software > (Kurzweil, 2005). But given that we barely had a > sufficient model of mind to say Boo with at the time > (what is consciousness? where does intention come > from? What is the balance of nature/nurture in > motivation? Speech utterances? 
> So as someone who had made a little primitive, incompetent attempt at speech modeling myself, and in the light of my later-acquired knowledge of CHAT, Burke, Bakhtin, Mead, and various other people in different fields, and of the tendency of people to interact with the world through cognitive biases, complexes, and embodied perceptions that were not readily available to artificial systems, I didn't think the singularity was so near.
>
> The terrible thing about computer programs is that they do just what you tell them to do, and no more. They have no drive to improve, except as programmed. When they do improve, their creativity is limited. And the approach now is still substantially pattern-recognition based. The current paradigm is something called Convolutional Neural Network Long Short-Term Memory Networks (CNN/LSTM) for speech recognition, in which the convolutional neural networks reduce the variants of speech input into manageable patterns, and temporal processing handles the temporal patterns of the real-world phenomena to which the AI system is responding. But while such systems combined with natural language processing can increasingly mimic human response, and "learn" on their own, and while they are approaching the "weak" form of artificial general intelligence (AGI)--the intelligence needed for a machine to perform any intellectual task that a human being can--they are an awfully long way from "strong" AGI, that is, something approaching human consciousness. I think that's because they are a long way from capturing the kind of social embeddedness of almost all animal behavior, and the sense in which human cognition is embedded in the messy things, like emotion. A computer algorithm can recognize the patterns of emotion, but that's it. An AGI system that can experience emotions, or have motivation, is quite another thing entirely.
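> In code, that CNN/LSTM pattern looks roughly like the following--a minimal sketch, assuming TensorFlow/Keras, with input shapes and layer sizes invented purely for illustration:
>
>     import tensorflow as tf
>     from tensorflow.keras import layers
>
>     model = tf.keras.Sequential([
>         # CNN front end: reduce raw spectrogram frames to local patterns
>         layers.Conv1D(64, kernel_size=5, activation="relu",
>                       input_shape=(200, 40)),  # 200 frames x 40 mel bands
>         layers.MaxPooling1D(2),
>         # LSTM back end: model the temporal structure of those patterns
>         layers.LSTM(128),
>         # classifier head, e.g. a small vocabulary of spoken commands
>         layers.Dense(20, activation="softmax"),
>     ])
>     model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")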
> I can tell you that AI confidence is still there. When I raised questions about cultural and physical embodiment in artificial intelligence interactions with someone in the field recently, he dismissed the idea as not being that relevant. His thought was that "what I find essential is that we acknowledge that there's no obvious evidence supporting that the current paradigm of CNN/LSTM under various reinforcement algorithms isn't enough for AGI and in particular for broad animal-like intelligence like that of ravens and dogs."
>
> But ravens and dogs are embedded in social interaction, in intentionality, in consciousness--qualitatively different than ours, maybe, but there. Dogs don't do what you ask them to, always. When they do things, they do them with their own intentionality, which may be to please you, or may be to do something you never asked the dog to do, which is either inherent in its nature, or an expression of social interactions with you or others, many of which you and they may not be consciously aware of. The deep structure of metaphor, the spatiotemporal relations of language that Langacker describes as being necessary for construal, the worlds of narrativized experience, are mostly outside of the reckoning, so far as I know (though I'm not an expert--I could be at least partly wrong), of the current CNN/LSTM paradigm.
>
> My old interlocutor in thinking about my language program, Noam Chomsky, has been a pretty sharp critic of the pattern recognition approach to artificial intelligence.
>
> Here's Chomsky's take on the idea:
> http://languagelog.ldc.upenn.edu/myl/PinkerChomskyMIT.html
>
> And here's Peter Norvig's response; he's a director of research at Google, where Kurzweil is, and where, I assume, they are as close to the strong version of artificial general intelligence as anyone out there...
> http://norvig.com/chomsky.html
>
> Frankly, I would be quite interested in what you think of these things. I'm merely an Isaiah Berlin fox, chasing to and fro at all the pretty ideas out there. But you, many of you, are, I suspect, the untapped hedgehogs whose ideas on these things would see more readily what I dimly grasp must be required, not just for achieving a strong AGI, but for achieving something that we would see as an ethical, reasonable artificial mind that expands human experience, rather than becomes a prison that reduces human interactions to its own level.
>
> My own thinking is that lately, Cognitive Metaphor Theory (CMT), which I knew more of in its earlier (now "standard model") days, is getting even more interesting than it was. I'd done a transfer term to UC Berkeley to study with George Lakoff, but we didn't hit it off well; perhaps I kept asking him questions about social embeddedness and similarities to Vygotsky's theory of complex thought, and was too expressive about my interest in linking out from his approach rather than folding in. It seems that the idea I was rather woolily suggesting to Lakoff back then has caught on: namely, that utterances could be explored for cultural variation and historical embeddedness, a form of social context to the narratives and metaphors and blended spaces that underlie speech utterances and thought; that there was a degree of social embodiment as well as physiological embodiment through which language operated. I thought then, and it looks like some other people now are thinking, that someone seeking to understand utterances (as a strong AGI system would need to do) really would need to engage in internalizing and ventriloquizing a form of Geertz's thick description of interactions. In such forms, words do not mean what they say, and can have different affect that is a bit more complex than I think temporal processing currently addresses.
>
> I think these are the kind of things that artificial intelligence would need truly to advance, and that Bakhtin and Vygotsky and Leont'ev and, in the visual world, Eisenstein were addressing all along...
>
> And, of course, you guys.
>
> Regards,
> Douglas Williams
>
> On Tuesday, July 3, 2018, 10:35:45 AM PDT, David H Kirshner wrote:
>
> The other side of the coin is that ineffable human experience is becoming more effable. Computers can now look at a human brain scan and determine the degree of subjectively experienced pain:
>
> In 2013, Tor Wager, a neuroscientist at the University of Colorado, Boulder, took the logical next step by creating an algorithm that could recognize pain's distinctive patterns; today, it can pick out brains in pain with more than ninety-five-per-cent accuracy. When the algorithm is asked to sort activation maps by apparent intensity, its ranking matches participants' subjective pain ratings. By analyzing neural activity, it can tell not just whether someone is in pain but also how intense the experience is.
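> A toy version of that kind of decoder--a minimal sketch, assuming Python and scikit-learn, with synthetic "activation maps" standing in for real scans (nothing here is taken from Wager's actual model):
>
>     import numpy as np
>     from sklearn.linear_model import Ridge
>
>     rng = np.random.default_rng(1)
>     n_scans, n_voxels = 300, 500
>
>     # Each synthetic "activation map" is a voxel vector; a hidden subset
>     # of voxels scales with the subjectively reported pain intensity.
>     pain = rng.uniform(0, 10, n_scans)
>     signature = np.zeros(n_voxels)
>     signature[:50] = 1.0  # the pain-related voxels
>     maps = pain[:, None] * signature + rng.normal(0, 1.0, (n_scans, n_voxels))
>
>     model = Ridge().fit(maps, pain)  # learn a linear "pain signature"
>     predicted = model.predict(maps)
>
>     # Ranking scans by predicted intensity tracks the subjective ratings:
>     print(np.corrcoef(predicted, pain)[0, 1])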
> So, perhaps the computer can't "feel our pain," but it can sure "sense our pain!"
>
> Here's the full article:
> https://www.newyorker.com/magazine/2018/07/02/the-neuroscience-of-pain
>
> David
>
> From: xmca-l-bounces@mailman.ucsd.edu On Behalf Of Glassman, Michael
> Sent: Tuesday, July 3, 2018 8:16 AM
> To: eXtended Mind, Culture, Activity
> Subject: [Xmca-l] Re: Interesting article on robots and social learning
>
> It seems like we are still having the same argument as when robots first came on the scene. In response to John McCarthy, who was claiming that eventually robots could have belief systems and motivations similar to humans through AI, John Searle wrote the Chinese room. There have been a lot of responses to the Chinese room over the years, and a number of digital philosophers claim it is no longer salient, but I don't think anybody has ever effectively answered his central question.
>
> Just a quick recap. You come to a closed door and know there is a person on the other side. To communicate, you decide to teach the person on the other side Chinese. You do this by continuously exchanging rule systems under the door. After a while you are able to have a conversation with the individual in perfect Chinese. But does that person actually know Chinese just from the rule systems? I think Searle's major point is: are you really learning if you don't know why you're learning, or are you just repeating? Learning is embedded in the human condition, and the reason it works so well and is adaptable is because we understand it when we use what we learn in the world in response to others. To put it in terms of the post: does a bomb-defusing robot really learn how to defuse a bomb if it does not know why it is doing it? It might cut the right wires at the right time, but it doesn't understand why, and therefore is not doing the task, just a series of steps it has been able to absorb. Is that the opposite of human learning?
>
> What the researcher did really isn't that special at this point. Well, I definitely couldn't do it, and it is amazing, but it is in essence a miniature version of Libratus (which beat experts at Texas Hold 'em) and AlphaGo (which beat the second best Go player in the world). My guess is that it is the same use of deep learning, in which the program integrates new information into what it is already capable of. If machines can learn from interacting with other humans then they can learn from interacting with other machines. It is the same principle (though much, much simpler in this case). The question is what it means. Are we defining learning down because of the zeitgeist? Greg started his post saying a socio-cultural theorist might be interested in this research. I wonder if they might be more likely to be the ones putting on the brakes, asking questions about it.
>
> Michael
>
> From: xmca-l-bounces@mailman.ucsd.edu On Behalf Of Andy Blunden
> Sent: Tuesday, July 03, 2018 7:04 AM
> To: xmca-l@mailman.ucsd.edu
> Subject: [Xmca-l] Re: Interesting article on robots and social learning
>
> Does a robot have "motivation"?
>
> andy
> ------------------------------------------------------------
> Andy Blunden
> http://www.ethicalpolitics.org/ablunden/index.htm
>
> On 3/07/2018 5:28 PM, Rod Parker-Rees wrote:
>
> Hi Greg,
>
> What is most interesting to me about the understanding of learning which informs most AI projects is that it seems to assume that affect is irrelevant. The role of caring, liking, worrying etc. in social learning seems to be almost universally overlooked, because information is seen as something that can be "got" and "given" more than something that is distributed in relationships.
>
> Does anyone know about any AI projects which consider how machines might feel about what they learn?
>
> All the best,
> Rod
>
> From: xmca-l-bounces@mailman.ucsd.edu On Behalf Of Greg Thompson
> Sent: 03 July 2018 02:50
> To: eXtended Mind, Culture, Activity
> Subject: [Xmca-l] Interesting article on robots and social learning
>
> I'm ambivalent about this project but I suspect that some young CHAT scholar out there could have a lot to contribute to a project like this one:
> https://www.sapiens.org/column/machinations/artificial-intelligence-culture/
>
> -Greg
> --
> Gregory A. Thompson, Ph.D.
> Assistant Professor
> Department of Anthropology
> 880 Spencer W. Kimball Tower
> Brigham Young University
> Provo, UT 84602
> WEBSITE: greg.a.thompson.byu.edu
> http://byu.academia.edu/GregoryThompson

-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://mailman.ucsd.edu/pipermail/xmca-l/attachments/20180715/36fe97b0/attachment.html

From mcole@ucsd.edu Sun Jul 15 06:40:42 2018
From: mcole@ucsd.edu (mike cole)
Date: Sun, 15 Jul 2018 06:40:42 -0700
Subject: [Xmca-l] Fwd: CfP: 13th International Conference on Design Principles & Practices, Saint Petersburg State University, Saint Petersburg, Russia 1–3 March 2019
In-Reply-To: <010101649cd01282-82827f7a-2ec3-40ac-b387-8fa6470b9da7-000000@us-west-2.amazonses.com>
References: <010101649cd01282-82827f7a-2ec3-40ac-b387-8fa6470b9da7-000000@us-west-2.amazonses.com>
Message-ID:

---------- Forwarded message ---------
From: The Design Principles & Practices Conference <Conference@designprinciplesandpractices.com>
Date: Sun, Jul 15, 2018 at 12:33 AM
Subject: CfP: 13th International Conference on Design Principles & Practices, Saint Petersburg State University, Saint Petersburg, Russia 1–3 March 2019
To:

Submit a proposal by 10 August 2018!

Call for Papers

We are pleased to announce the Call for Papers for the Thirteenth International Conference on Design Principles & Practices, held 1–3 March 2019 at Saint Petersburg State University in Saint Petersburg, Russia. We invite proposals for paper presentations, workshops/interactive sessions, posters/exhibits, colloquia, focused discussions, innovation showcases, virtual posters, or virtual lightning talks. The conference features research addressing the annual themes and the 2019 Special Focus: "Design + Context."

Would you like to present at the 2019 Conference? Submit your proposal by 10 August 2018. We welcome the submission of proposals to the conference at any time of the year before the final submission deadline. All proposals will be reviewed within two to four weeks of submission.

Common Ground Research Networks | University of Illinois Research Park | 2001 South First St., 202 | Champaign, IL 61820 USA
Copyright © 2018 Common Ground Research Networks

-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://mailman.ucsd.edu/pipermail/xmca-l/attachments/20180715/681ab884/attachment.html

From greg.a.thompson@gmail.com Sun Jul 15 09:12:18 2018
From: greg.a.thompson@gmail.com (Greg Thompson)
Date: Mon, 16 Jul 2018 01:12:18 +0900
Subject: [Xmca-l] Re: Interesting article on robots and social learning
In-Reply-To: References: <3B91542B0D4F274D871B38AA48E991F953B2B847@CIO-KRC-D1MBX04.osuad.osu.edu> <1860198877.3850789.1531537929986@mail.yahoo.com> <7c142464-a2b2-ede1-e258-388e449e10f6@marxists.org> <3B91542B0D4F274D871B38AA48E991F953B3E4C1@CIO-KRC-D1MBX04.osuad.osu.edu>
Message-ID:

And I'm still curious if any others out there might have anything to contribute to Doug's query regarding what CHAT theory (particularly developmental theories) might have to offer thinking about AI? It seems an interesting question to think through even if you aren't on board with the larger AI project...

-greg

On Sun, Jul 15, 2018 at 10:55 AM, Andy Blunden wrote:
> I think we go back to Martin's earlier ironic comment here, Michael.
> Andy
> ------------------------------------------------------------
> Andy Blunden
> http://www.ethicalpolitics.org/ablunden/index.htm
--
Gregory A. Thompson, Ph.D.
Assistant Professor
Department of Anthropology
880 Spencer W. Kimball Tower
Brigham Young University
Provo, UT 84602
WEBSITE: greg.a.thompson.byu.edu
http://byu.academia.edu/GregoryThompson
-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://mailman.ucsd.edu/pipermail/xmca-l/attachments/20180716/19ea2230/attachment.html

From dkirsh@lsu.edu Sun Jul 15 10:55:18 2018
From: dkirsh@lsu.edu (David H Kirshner)
Date: Sun, 15 Jul 2018 17:55:18 +0000
Subject: [Xmca-l] Re: Interesting article on robots and social learning
In-Reply-To: References: <3B91542B0D4F274D871B38AA48E991F953B2B847@CIO-KRC-D1MBX04.osuad.osu.edu> <1860198877.3850789.1531537929986@mail.yahoo.com> <7c142464-a2b2-ede1-e258-388e449e10f6@marxists.org> <3B91542B0D4F274D871B38AA48E991F953B3E4C1@CIO-KRC-D1MBX04.osuad.osu.edu>
Message-ID:

The Turing Test is usually considered to be a cognitive challenge for programmer and test subject: can I design a smart enough program; can I catch the flaws in the program? But early responses to Eliza suggest otherwise. Eliza was a natural language processing computer program created in the mid-60s to simulate a psychotherapist. Its architecture was highly simplistic: the structure of the input sentence (supplied by the patient, a real human being) was analyzed, key words were identified, and the sentence was then converted into a question. Here's a brief sample of interactive text from https://www.cs.umd.edu/class/fall2012/cmsc828d/oldreportfiles/guha1.pdf:

Patient: I am unhappy.
Eliza: Are you unhappy often?
Patient: Yes.
Eliza: Can you elaborate on that?
Patient: Too much work.
Eliza: Why do you say too much work?
Patient: I am working a lot.
...
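A minimal sketch of that keyword-and-template architecture, assuming Python; the rules below are invented for illustration and are far cruder than Weizenbaum's actual ELIZA script:

    import re

    # Keyword -> question templates, tried in order.
    RULES = [
        (re.compile(r"\bI am (.+)", re.I), "Are you {0} often?"),
        (re.compile(r"\btoo much (.+)", re.I), "Why do you say too much {0}?"),
        (re.compile(r"\byes\b", re.I), "Can you elaborate on that?"),
    ]
    FALLBACK = "Please tell me more."

    def eliza_reply(utterance):
        # Find the first matching keyword pattern and recast the captured
        # fragment as a question.
        for pattern, template in RULES:
            match = pattern.search(utterance)
            if match:
                fragment = match.group(1).rstrip(".!?") if match.groups() else ""
                return template.format(fragment)
        return FALLBACK

    for line in ["I am unhappy.", "Yes.", "Too much work."]:
        print("Patient:", line)
        print("Eliza:", eliza_reply(line))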
As far as I know, Eliza was not subjected to a Turing test, though it was the first program with sufficient linguistic ability to even be a candidate for one. Eliza was used with people who already knew it was a computer, but that didn't deter them from identifying with it: "many early users were convinced of ELIZA's intelligence and understanding, despite [program creator, Joseph] Weizenbaum's insistence to the contrary" (https://en.wikipedia.org/wiki/ELIZA). So, the Turing Test is as much (or more) about our culture's tendencies of identification as it is about the technical practices of AI simulation.

David
-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://mailman.ucsd.edu/pipermail/xmca-l/attachments/20180715/83bbfddd/attachment.html

From glassman.13@osu.edu Sun Jul 15 17:23:15 2018
From: glassman.13@osu.edu (Glassman, Michael)
Date: Mon, 16 Jul 2018 00:23:15 +0000
Subject: [Xmca-l] Re: Interesting article on robots and social learning
In-Reply-To: References: <3B91542B0D4F274D871B38AA48E991F953B2B847@CIO-KRC-D1MBX04.osuad.osu.edu> <1860198877.3850789.1531537929986@mail.yahoo.com> <7c142464-a2b2-ede1-e258-388e449e10f6@marxists.org> <3B91542B0D4F274D871B38AA48E991F953B3E4C1@CIO-KRC-D1MBX04.osuad.osu.edu>
Message-ID: <3B91542B0D4F274D871B38AA48E991F953B3E5D1@CIO-KRC-D1MBX04.osuad.osu.edu>

I wonder if where CHAT might be most interesting in addressing AI is on topics of bias and oppression. I believe there is a real danger that AI can be used as a tool for oppression, especially judging from some of its early uses. One of the things people discussing the possibilities of AI don't discuss nearly enough is that it picks up and integrates biases from the information it receives. Sometimes this can be interesting, as with the program Libratus, which beat world-class poker players at Texas Hold 'em. One of the less discussed aspects is that a reason it was capable of doing this is that it picks up on the playing biases of the players it is competing with and integrates them into its decision-making process. This, I think, is one of the reasons it has to play only one player at a time to be successful. The danger is when it integrates these biases into a larger decision-making process.
There is an AI program called Northpointe, used in the justice system, that uses a combination of big data and deep learning to predict whether people convicted of crimes will wind up back in jail; these predictions have implications for sentencing. The program, surprise, tends to be much harsher with Black individuals than white individuals. Even if you keep ethnicity out of the equation, it has enough other information to create a natural bias. There are also some of the more advanced translation programs, which tend to incorporate the biases of the languages (e.g. misogynistic usage) into the translations without those receiving the translations realizing it. AI, especially machine learning, is in many ways a prisoner to the information it receives. Who decides what information it receives? Much like the intelligence tests of an earlier age, people will treat AI decision making as neutral or objective when it actually mirrors back (almost perfectly) those who are feeding it information.

Like I said, I don't see this point raised nearly enough. Perhaps CHAT is one of the fields in a position to constantly point this out, to explore the ways that AI is culturally biased and the ways those who dominate information flow can easily use it as a tool for oppression.

Michael

From: xmca-l-bounces@mailman.ucsd.edu On Behalf Of Greg Thompson
Sent: Sunday, July 15, 2018 12:12 PM
To: eXtended Mind, Culture, Activity
Subject: [Xmca-l] Re: Interesting article on robots and social learning

And I'm still curious if any others out there might have anything to contribute to Doug's query regarding what CHAT theory (particularly developmental theories) might have to offer thinking about AI? It seems an interesting question to think through even if you aren't on board with the larger AI project...

-greg

On Sun, Jul 15, 2018 at 10:55 AM, Andy Blunden wrote:

I think we go back to Martin's earlier ironic comment here, Michael.

Andy
________________________________
Andy Blunden
http://www.ethicalpolitics.org/ablunden/index.htm

On 15/07/2018 9:44 AM, Glassman, Michael wrote:

The Turing test, at least the test he wrote in his article, is actually a bit more complicated than this, and especially poignant today. Turing's test of whether computers are acting as human was based on an old English game show called The Lying Game (I suppose one of the reasons for the title of the movie on Turing, though of course it had multiple meanings; but for some reason they never mentioned the origin of the phrase in the movie). Anyway, in the lying game the contestant had to listen to two individuals, one of whom was telling the truth about the situation and one of whom was lying. The way Turing describes it, it sounds quite brutal. The contestant had to figure out who the liar was (there was a similar, much milder version years later in the US). Anyway, Turing's proposal, if I remember correctly, was that a computer could be considered to be thinking like a human if the computer the contestant was listening to was lying and he or she couldn't tell. In essence, the computer would successfully lie. Everybody thinks Turing believed that computers would eventually think like humans, but my reading of the article is that he had no idea; as the computer stood at the time, there was no chance.
The reason this is so poignant is the Mueller indictments that came down yesterday. For those outside the U.S. or not following the news, the indictments were against Russian military officers leading a scheme to convince individuals of lies about various actors in the 2016 election (along with timed releases of information and break-ins to voting systems). But it is the propagation of lies by robots, and people believing them, that interests me. I feel like we aren't putting enough thought into that. Many of the people receiving the information could not tell it was not from humans, and believed it even though in many cases it was generated by robots, passing, it seems to me, Turing's test. How and why did this happen? Of course Turing died before the Internet, so he couldn't have known about it. But I wonder if part of the reason the robots were successful is that they have the ability to mine, collect and aggregate people's biases and then reflect them back to us. We tend to engage with and believe things in the contexts of our own biases. They say in salesmanship that the trick is figuring out what people want to hear and then couching whatever you want to say in that. Trump is a master of reading what a group of people want to hear at the moment, their biases, and then mirroring it back to them.

If we went back to the Chinese room and the person inside was able to read our biases from our messages, would they then be human?

We live in a strange age.

From: xmca-l-bounces@mailman.ucsd.edu On Behalf Of Andy Blunden
Sent: Saturday, July 14, 2018 8:58 AM
To: xmca-l@mailman.ucsd.edu
Subject: [Xmca-l] Re: Interesting article on robots and social learning

I understand that the Turing Test is one which AI people can use to measure the success of their AI - if you can't tell the difference between a computer and a human interaction, then the computer has passed the Turing test. I tend to rely on a kind of anti-Turing Test: if you can tell the difference between the computer and the human interaction, then you have passed the anti-Turing test, that is, you know something about humans.

Andy
________________________________
Andy Blunden
http://www.ethicalpolitics.org/ablunden/index.htm

On 14/07/2018 1:12 PM, Douglas Williams wrote:

Hi--

I think I'll come out of lurking for this one. Actually, what you're talking about with this pain algorithm system sounds like the modeling system someone might need in order to develop what Alan Turing described as a P-type computing device. A P-type computer would receive its programming from inputs of pleasure and pain. The idea was probably derived from the behaviorist models of mind current at the time. Turing thought that he was probably pretty close to being able to develop such a computing device, which, because its input was similar, could model human thought. The Eliza Rogerian-analysis computer program was another early idea, in which the goal was to model the patterns of human interaction and gradually approach closer to human thought and interaction that way. And by the 2000s, the idea of the "singularity" was afloat, in which one could model human minds so well as to enable a human to be uploaded into a computer and live forever as software (Kurzweil, 2005). But given that we barely had a sufficient model of mind to say Boo with at the time (what is consciousness? where does intention come from? what is the balance of nature/nurture in motivation? speech utterances? and so on)--and you're right, AI doesn't have much of a theory of emotion, either--the goal of computer software modeling human thought seemed very far away to me.
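Turing's P-type machine is usually glossed as a device whose behaviour is organized purely by reward and punishment signals. A toy sketch of that gloss (the environment and its hidden reward rule are invented, and Turing's 1948 proposal was far less tidy than this):

    # A toy "P-type" learner: behaviour shaped solely by pleasure (+1)
    # and pain (-1) signals.
    import random

    random.seed(0)
    actions = ["left", "right"]
    value = {a: 0.0 for a in actions}     # learned preference for each action

    def environment(action: str) -> float:
        # Hidden rule the learner must discover: "right" brings pleasure.
        return 1.0 if action == "right" else -1.0

    for step in range(200):
        # Mostly exploit the current preference, occasionally explore.
        if random.random() < 0.1:
            a = random.choice(actions)
        else:
            a = max(actions, key=value.get)
        r = environment(a)                 # pleasure or pain arrives
        value[a] += 0.1 * (r - value[a])   # nudge the preference toward the signal

    print(value)  # "right" ends up strongly preferred

Nothing here resembles knowing why "right" is preferable; the preferences simply drift toward whatever the signals reward, which is roughly the worry raised about the bomb-defusing robot above.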
At someone's request, I wrote a rather whimsical paper called "What is Artificial Intelligence?" back in 2006 about such things. My argument was that statistical modeling of human interaction and capturing thought was not so easy after all, precisely because of the parts of mind we don't think of, and the social interactions that, at the time, were not a primary focus. I mused about that in the context of my trying to write a computer program by applying Chomsky's syntactic structures to interpret the intention of a few simple questions--without, alas, in my case, a corpus-supported Markov chain logic to do it. Generative grammar would take care of it, right? Wrong.

So as someone who had made a little primitive, incompetent attempt at speech modeling myself, and in the light of my later-acquired knowledge of CHAT, Burke, Bakhtin, Mead, and various other people in different fields, and of the tendency of people to interact with the world through cognitive biases, complexes, and embodied perceptions that were not readily available to artificial systems, I didn't think the singularity was so near.

The terrible thing about computer programs is that they do just what you tell them to do, and no more. They have no drive to improve, except as programmed. When they do improve, their creativity is limited. And the approach now is still substantially pattern-recognition based. The current paradigm for speech recognition is something called Convolutional Neural Network / Long Short-Term Memory networks (CNN/LSTM), in which the convolutional layers reduce the variations of speech input into manageable patterns, and the LSTM layers handle temporal processing (the temporal patterns of the real-world phenomena to which the AI system is responding).
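As a rough illustration of that architecture's shape, a minimal sketch in PyTorch (the dimensions and the ten-way output are invented for illustration; real speech systems are far larger):

    # Sketch of the CNN/LSTM shape described above. Dimensions are invented.
    import torch
    import torch.nn as nn

    class SpeechCNNLSTM(nn.Module):
        def __init__(self, n_mels=40, n_labels=10):
            super().__init__()
            # Convolutions compress variable acoustic detail into stable local patterns.
            self.conv = nn.Sequential(
                nn.Conv1d(n_mels, 64, kernel_size=5, padding=2),
                nn.ReLU(),
                nn.Conv1d(64, 64, kernel_size=5, padding=2),
                nn.ReLU(),
            )
            # The LSTM models how those patterns unfold over time.
            self.lstm = nn.LSTM(input_size=64, hidden_size=128, batch_first=True)
            self.out = nn.Linear(128, n_labels)

        def forward(self, x):             # x: (batch, n_mels, time)
            h = self.conv(x)              # (batch, 64, time)
            h = h.transpose(1, 2)         # (batch, time, 64) for the LSTM
            h, _ = self.lstm(h)
            return self.out(h[:, -1])     # classify from the final time step

    model = SpeechCNNLSTM()
    dummy = torch.randn(2, 40, 100)       # two fake 100-frame spectrograms
    print(model(dummy).shape)             # torch.Size([2, 10])

The division of labor is the point: convolution tames variation, recurrence tames time; nothing in the pipeline represents why the sounds matter to anyone.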
But while such systems combined with natural language processing can increasingly mimic human response, and "learn" on their own, and while they are approaching the "weak" form of artificial general intelligence (AGI)--the intelligence needed for a machine to perform any intellectual task that a human being can--they are an awfully long way from "strong" AGI, that is, something approaching human consciousness. I think that's because they are a long way from capturing the kind of social embeddedness of almost all animal behavior, and the sense in which human cognition is embedded in the messy things, like emotion. A computer algorithm can recognize the patterns of emotion, but that's it. An AGI system that can experience emotions, or have motivation, is quite another thing entirely.

I can tell you that AI confidence is still there. In raising questions about cultural and physical embodiment in artificial intelligence interactions with someone in the field recently, he dismissed the idea as not being that relevant. His thought was that "what I find essential is that we acknowledge that there's no obvious evidence supporting that the current paradigm of CNN/LSTM under various reinforcement algorithms isn't enough for AGI, and in particular for broad animal-like intelligence like that of ravens and dogs."

But ravens and dogs are embedded in social interaction, in intentionality, in consciousness--qualitatively different from ours, maybe, but there. Dogs don't always do what you ask them to. When they do things, they do them from their own intentionality, which may be to please you, or may be to do something you never asked the dog to do, which is either inherent in its nature or an expression of social interactions with you or others, many of which you and they may not be consciously aware of. The deep structure of metaphor, the spatiotemporal relations of language that Langacker describes as being necessary for construal, the worlds of narrativized experience--these are mostly outside the reckoning, so far as I know (though I'm not an expert--I could be at least partly wrong), of the current CNN/LSTM paradigm.

My old interlocutor in thinking about my language program, Noam Chomsky, has been a pretty sharp critic of the pattern-recognition approach to artificial intelligence.

Here's Chomsky's take on the idea:
http://languagelog.ldc.upenn.edu/myl/PinkerChomskyMIT.html

And here's Peter Norvig's response; he's a director of research at Google, where Kurzweil is, and where, I assume, they are as close to the strong version of artificial general intelligence as anyone out there...
http://norvig.com/chomsky.html

Frankly, I would be quite interested in what you think of these things. I'm merely an Isaiah Berlin fox, chasing to and fro after all the pretty ideas out there. But you, many of you, are, I suspect, the untapped hedgehogs whose ideas on these things would see more readily what I dimly grasp must be required, not just for achieving a strong AGI, but for achieving something that we would see as an ethical, reasonable artificial mind that expands human experience, rather than becomes a prison that reduces human interactions to its own level.

My own thinking is that lately, Cognitive Metaphor Theory (CMT), which I knew more of in its earlier (now "standard model") days, is getting even more interesting than it was. I'd done a transfer term to UC Berkeley to study with George Lakoff, but we didn't hit it off well--perhaps because I kept asking him questions about social embeddedness and similarities to Vygotsky's theory of complex thought, and was too expressive about my interest in linking out from his approach rather than folding in. It seems that the idea I was rather woolily suggesting to Lakoff back then has caught on: namely, that utterances could be explored for cultural variation and historical embeddedness, a form of social context to the narratives and metaphors and blended spaces that underlie speech utterances and thought; that there was a degree of social embodiment as well as physiological embodiment through which language operated. I thought then, and it looks like some other people are now thinking, that someone seeking to understand utterances (as a strong AGI system would need to do) would really need to engage in internalizing and ventriloquising a form of Geertz's thick description of interactions. In such forms, words do not mean what they say, and can have different affect, in ways a bit more complex than I think temporal processing currently addresses.

I think these are the kinds of things that artificial intelligence would need truly to advance, and that Bakhtin and Vygotsky and Leont'ev, and in the visual world, Eisenstein, were addressing all along...

And, of course, you guys.

Regards,
Douglas Williams

On Tuesday, July 3, 2018, 10:35:45 AM PDT, David H Kirshner wrote:

The other side of the coin is that ineffable human experience is becoming more effable.
Computers can now look at a human brain scan and determine the degree of subjectively experienced pain:

In 2013, Tor Wager, a neuroscientist at the University of Colorado, Boulder, took the logical next step by creating an algorithm that could recognize pain's distinctive patterns; today, it can pick out brains in pain with more than ninety-five-per-cent accuracy. When the algorithm is asked to sort activation maps by apparent intensity, its ranking matches participants' subjective pain ratings. By analyzing neural activity, it can tell not just whether someone is in pain but also how intense the experience is.

So, perhaps the computer can't "feel our pain," but it can sure "sense our pain!"

Here's the full article:
https://www.newyorker.com/magazine/2018/07/02/the-neuroscience-of-pain

David
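At heart, the pipeline described there is supervised pattern classification over activation maps, with the classifier's graded score reused as an intensity ranking. A toy version on synthetic data (every number here is invented; the real signature was derived from fMRI data with far more care):

    # Toy stand-in for the pain-signature idea: learn a weight pattern over
    # (synthetic) activation maps that separates pain from rest, then rank new
    # maps by the model's score as a proxy for intensity.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(1)
    n_scans, n_voxels = 200, 500
    pain_pattern = rng.normal(0, 1, n_voxels)           # hidden "signature"

    intensity = rng.uniform(0, 1, n_scans)              # subjective pain ratings
    maps = intensity[:, None] * pain_pattern + rng.normal(0, 2, (n_scans, n_voxels))
    in_pain = (intensity > 0.5).astype(int)

    clf = LogisticRegression(max_iter=1000).fit(maps, in_pain)
    score = clf.predict_proba(maps)[:, 1]               # graded "how much pain"

    print("classification accuracy:", clf.score(maps, in_pain))
    print("rank agreement with ratings:",
          np.corrcoef(np.argsort(np.argsort(score)),
                      np.argsort(np.argsort(intensity)))[0, 1])

The graded score tracks the ratings because intensity scales the underlying pattern; "sensing" pain, in this sense, is curve-fitting over labeled scans, not feeling anything.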
-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://mailman.ucsd.edu/pipermail/xmca-l/attachments/20180716/b2a60f47/attachment.html

From greg.a.thompson@gmail.com Mon Jul 16 08:29:03 2018
From: greg.a.thompson@gmail.com (Greg Thompson)
Date: Tue, 17 Jul 2018 00:29:03 +0900
Subject: [Xmca-l] Re: Interesting article on robots and social learning
In-Reply-To:
References: <3B91542B0D4F274D871B38AA48E991F953B2B847@CIO-KRC-D1MBX04.osuad.osu.edu> <1860198877.3850789.1531537929986@mail.yahoo.com> <7c142464-a2b2-ede1-e258-388e449e10f6@marxists.org> <3B91542B0D4F274D871B38AA48E991F953B3E4C1@CIO-KRC-D1MBX04.osuad.osu.edu>
Message-ID:

David Ki,

Interesting.
It seems to me that this points to the distributed nature of cognition, since the effectiveness of the therapy must have something to do with the uptake of these responses by the analysand (along these lines, also consider Garfinkel's classic study of a therapy program based on yes/no questions that randomly produces answers, and yet which the analysands felt was effective therapy). So I suppose one relevant question for the development of AI would be: what are the processes through which cognition can be distributed? And how can AI develop such processes? That seems like another way of trying to figure out how AI can "leverage" the cultural context around it.

[As for the Turing test, I like to play these games with customer service encounters to see if I can figure out if they are human or "bots".]

-greg

On Mon, Jul 16, 2018 at 2:55 AM, David H Kirshner wrote:

> The Turing Test is usually considered to be a cognitive challenge for
> programmer and test subject: can I design a smart enough program; can I
> catch the flaws in the program? But early response to Eliza suggests
> otherwise.
>
> Eliza was a natural language processing computer program created in the
> mid-60s to simulate a psychotherapist. Its architecture was highly
> simplistic. The structure of the input sentence (supplied by the patient, a
> real human being) was analyzed, key words identified, and then converted
> into a question. Here's a brief sample of interactive text from
> https://www.cs.umd.edu/class/fall2012/cmsc828d/oldreportfiles/guha1.pdf:
>
> Patient: I am unhappy.
> Eliza: Are you unhappy often?
> Patient: Yes.
> Eliza: Can you elaborate on that?
> Patient: Too much work.
> Eliza: Why do you say too much work?
> Patient: I am working a lot.
> ...
>
> As far as I know Eliza was not subjected to a Turing test, though it was
> the first program with sufficient linguistic ability to even be a candidate
> for the Turing Test. Eliza was used with people who already knew it was a
> computer. But that didn't deter them from identifying with it:
>
> "many early users were convinced of ELIZA's intelligence and
> understanding, despite [program creator, Joseph] Weizenbaum's insistence to
> the contrary" (https://en.wikipedia.org/wiki/ELIZA).
>
> So, the Turing Test is as much (or more) about our culture's tendencies of
> identification as it is about the technical practices of AI simulation.
>
> David

--
Gregory A. Thompson, Ph.D.
Assistant Professor
Department of Anthropology
880 Spencer W. Kimball Tower
Brigham Young University
Provo, UT 84602
WEBSITE: greg.a.thompson.byu.edu
http://byu.academia.edu/GregoryThompson
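The keyword-to-question mechanism Kirshner describes fits in a few lines. A sketch that reproduces the sample exchange above, with rules reconstructed from the transcript rather than from Weizenbaum's actual DOCTOR script:

    # Minimal Eliza-style responder: find a keyword pattern, transform the
    # sentence into a question. Rules reconstructed from the sample dialogue.
    import re

    RULES = [
        (r"i am (.*)", "Are you {0} often?"),
        (r"yes", "Can you elaborate on that?"),
        (r"too much (.*)", "Why do you say too much {0}?"),
    ]

    def eliza_reply(utterance: str) -> str:
        text = utterance.lower().strip(".!? ")
        for pattern, template in RULES:
            m = re.fullmatch(pattern, text)
            if m:
                return template.format(*m.groups())
        return "Please go on."   # fallback when no keyword matches

    for line in ["I am unhappy.", "Yes.", "Too much work."]:
        print("Patient:", line)
        print("Eliza:  ", eliza_reply(line))

That the analysand's uptake does the real work, as Greg suggests, is underlined by how little machinery sits on the other side of the exchange.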
-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://mailman.ucsd.edu/pipermail/xmca-l/attachments/20180717/9c86323e/attachment.html

From greg.a.thompson@gmail.com Mon Jul 16 10:29:22 2018
From: greg.a.thompson@gmail.com (Greg Thompson)
Date: Mon, 16 Jul 2018 11:29:22 -0600
Subject: [Xmca-l] Re: Interesting article on robots and social learning
In-Reply-To: <3B91542B0D4F274D871B38AA48E991F953B3E5D1@CIO-KRC-D1MBX04.osuad.osu.edu>
References: <3B91542B0D4F274D871B38AA48E991F953B2B847@CIO-KRC-D1MBX04.osuad.osu.edu> <1860198877.3850789.1531537929986@mail.yahoo.com> <7c142464-a2b2-ede1-e258-388e449e10f6@marxists.org> <3B91542B0D4F274D871B38AA48E991F953B3E4C1@CIO-KRC-D1MBX04.osuad.osu.edu> <3B91542B0D4F274D871B38AA48E991F953B3E5D1@CIO-KRC-D1MBX04.osuad.osu.edu>
Message-ID:

Michael G,

Yes, this seems a very important point. But I'm wondering why you think CHAT would be particularly good at making this point. Any further explanation?

Greg
There is an AI program called Northpointe used by the > justice department that uses a combination of big data and deep learning to > make decisions about whether people convicted of crimes will wind up back > in jail. This should have implications for sentencing. The program, > surprise, tends to be much harsher with Black individuals than white > individuals. Even if you keep ethnicity outside of the equation it has > enough other information to create a natural bias. There are also some of > the more advanced translation programs which tend to incorporate the biases > of the languages (e.g. mysoginistic) into the translations without those > getting the translations realizing it. AI , especially machine learning, > is in many ways a prisoner to the information it receives. Who decides > what information it receives? Much like the intelligence tests of an > earlier age people will use AI decision making as being neutral or > objective when it actually mirrors back (almost perfectly) those who are > feeding it information. > > > > Like I said I don?t see this point raised nearly enough. Perhaps CHAT is > one of the fields in a position to constantly point this out, explore the > ways that AI is culturally biases, and those that dominate information flow > can easily use it as a tool for oppression. > > > > Michael > > > > *From:* xmca-l-bounces@mailman.ucsd.edu *On > Behalf Of *Greg Thompson > *Sent:* Sunday, July 15, 2018 12:12 PM > > > *To:* eXtended Mind, Culture, Activity > *Subject:* [Xmca-l] Re: Interesting article on robots and social learning > > > > And I'm still curious if any others out there might have anything to > contribute to Doug's query regarding what CHAT theory (particularly > developmental theories) might have to offer thinking about AI? > > > > It seems an interesting question to think through even if you aren't on > board with the larger AI project... > > > > -greg > > On Sun, Jul 15, 2018 at 10:55 AM, Andy Blunden wrote: > > I think we go back to Martin's earlier ironic comment here, Michael. > > Andy > ------------------------------ > > Andy Blunden > http://www.ethicalpolitics.org/ablunden/index.htm > > On 15/07/2018 9:44 AM, Glassman, Michael wrote: > > The Turing test, at least the test he wrote in his article, is actually a > big more complicated than this, and especially poignant today. Turing?s > test of whether computers are acting as human was based on an old English > game show called The Lying Game (I suppose one of the reasons for the title > of the movie on Turing, though of course it had multiple meanings. But for > some reason they never mentioned the origin of the phrase in the movie). > Anyway in the lying game the contestant had to listen to two individuals, > one of whom was telling the truth about the situation and one of whom was > lying. The way Turing describes it, it sounds quite brutal. The contestant > had to figure out who the liar was (there was a similar much milder version > years later in the US). Anyway Turing?s proposal, if I remember correctly, > was that a computer could be considered thinking like a human if the comp > the contestant was listening to was lying and he or she couldn?t tell. In > essence the computer would successfully lie. Everybody think Turing > believed that computers would eventually think like humans but my reading > of the article was that he had no idea, but as the computer stood at the > time there was no chance. > > > > The reason this is so poignant is the Mueller indictments that came down > yesterday. 
For those outside the U.S. or not following the news the > indictments were against Russian military leading a scheme to convince > individuals of lies about various actor in the 2016 election (also times > release of information and breaking in to voting systems). But it is the > propagation of lies by robots and people believing them that interests me. > I feel like we aren?t putting enough thought into that. Many of the people > receiving the information could not tell it was no from humans and believed > it even though in many cases it was generated by robots, passing it seems > to me Turing?s test. How and why did this happen? Of course Turing died > before the Internet so he couldn?t have known about it. But I wonder if > part of the reason the robots were successful is that they have the ability > to mine, collect and aggregate people?s biases and then reflect them back > to us. We tend to engage, believe things in the contexts of our own > biases. They say in salesmanship that the trick is figuring out what > people want to here and then couching whatever you want to see in that. > Trump is a master of reading what a group of people want to hear at the > moment, their biases, and then mirroring it back to them > > > > If we went back to the Chinese room and the person inside was able to read > our biases from our messages would they then be human. > > > > We live in a strange age. > > > > *From:* xmca-l-bounces@mailman.ucsd.edu > *On Behalf Of *Andy Blunden > *Sent:* Saturday, July 14, 2018 8:58 AM > *To:* xmca-l@mailman.ucsd.edu > *Subject:* [Xmca-l] Re: Interesting article on robots and social learning > > > > I understand that the Turing Test is one which AI people can use to > measure the success of their AI - if you can't tell the difference between > a computer and a human interaction then the computer has passed the Turing > test. I tend to rely on a kind of anti-Turing Test, that is, that if you > can tell the difference between the computer and the human interaction, > then you have passed the anti-Turing test, that is, you know something > about humans. > > Andy > ------------------------------ > > Andy Blunden > http://www.ethicalpolitics.org/ablunden/index.htm > > On 14/07/2018 1:12 PM, Douglas Williams wrote: > > Hi-- > > I think I'll come out of lurking for this one. Actually, what you're > talking about with this pain algorithm system sounds like a modeling system > that someone might need to develop what Alan Turing described as a P-type > computing device. A P-type computer would receive its programming from > inputs of pleasure and pain. It was probably derived from reading some of > the behavioralist models of mind at the time. Turing thought that he was > probably pretty close to being able to develop such a computing device, > which, because its input was similar, could model human thought. The Eliza > Rogersian analysis computer program was another early idea in which the > goal was to model the patterns of human interaction, and gradually approach > closer to human thought and interaction that way. And by the 2000's, the > idea of the "singularity" was afloat, in which one could model human minds > so well as to enable a human to be uploaded into a computer, and live > forever as software (Kurzweil, 2005). But given that we barely had a > sufficient model of mind to say Boo with at the time (what is > consciousness? where does intention come from? What is the balance of > nature/nurture in motivation? Speech utterances? 
and so on), and you're > right, AI doesn't have much of a theory of emotion, either--the goal of > computer software modeling human thought seemed very far away to me. > > > > At someone's request, I wrote a rather whimsical paper called "What is > Artificial Intelligence?" back in 2006 about such things. My argument was > that statistical modeling of human interaction and capturing thought was > not too easy after all, precisely because of the parts of mind we don't > think of, and the social interactions that, at the time, were not a primary > focus. I mused about that in the context of my trying to write a computer > program by applying Chomsky's syntactic structures to interpret intention > of a few simple questions--without, alas, in my case, a corpus-supported > Markov chain logic to do it. Generative grammar would take care of it, > right? Wrong. > > > So as someone who had done a little primitive, incompetent attempt at > speech modeling myself, and in the light of my later-acquired knowledge of > CHAT, Burke, Bakhtin, Mead, and various other people in different fields, > and of the tendency of people to interact through the world through > cognitive biases, complexes, and embodied perceptions that were not readily > available to artificial systems, I didn't think the singularity was so near. > > The terrible thing about computer programs is that they do just what you > tell them to do, and no more. They have no drive to improve, except as > programmed. When they do improve, their creativity is limited. And the > approach now still substantially is pattern-recognition based. The current > paradigm is something called Convolutional Neural Network Long Short-Term > Memory Networks (CNN/LSTM) for speech recognition, in which the > convolutional neural networks reduce the variants of speech input into > manageable patterns, and temporal processing (temporal patterns of the real > wold phenomena to which the AI system is responding). But while such > systems combined with natural language processing can increasingly mimic > human response, and "learn" on their own, and while they are approaching > the "weak" form of artificial general intelligence (AGI), the intelligence > needed for a machine to perform any intellectual task that a human being > can, they are an awfully long way from "strong" AGI--that is, something > approaching human consciousness. I think that's because they are a long way > from capturing the kind of social embeddedness of almost all animal > behavior, and the sense in which human cognition is embedded in the messy > things, like emotion. A computer algorithm can recognize the patterns of > emotion, but that's it. An AGI system that can experience emotions, or have > motivation, is quite another thing entirely. > > I can tell you that AI confidence is still there. In raising questions > about cultural and physical embodiment in artficial intelligence > interations with someone in the field recently, he dismissed the idea as > being that relevant. His thought was that "what I find essential is that we > acknowledge that there's no obvious evidence supporting that the current > paradigm of CNN/LSTM under various reinforcement algorithms isn't enough > for A AGI and in particular for broad animal-like intelligence like that of > ravens and dogs." > > But ravens and dogs are embedded in social interaction, in intentionality, > in consciousness--qualitatively different than ours, maybe, but there. Dogs > don't do what you ask them to, always. 
When they do things, they do them > for their own intentionality, which may be to please you, or may be to do > something you never asked the dog to do, which is either inherent in its > nature, or an expression of social interactions with you or others, many of > which you and they may not be consciously aware of. The deep structure of > metaphor, the spatiotemporal relations of language that Langacker describes > as being necessary for construal, the worlds of narrativized experience, > are mostly outside of the reckoning, so far as I know (though I'm not an > expert--I could be at least partly wrong) of the current CNN/LSTM paradigm. > > My old interlocutor in thinking about my language program, Noam Chomsky, > has been a pretty sharp critic of the pattern recognition approach to > artificial intelligence. > > Here's Chomsky's take on the idea: > > http://languagelog.ldc.upenn.edu/myl/PinkerChomskyMIT.html > > And here's Peter Norvig's response; he's a director of research at Google, > where Kurzweil is, and where, I assume, they are as close to the strong > version of artificial general intelligence as anyone out there... > > http://norvig.com/chomsky.html > > Frankly, I would be quite interested in what you think of these things. > I'm merely an Isaiah Berlin fox, chasing to and fro at all the pretty ideas > out there. But you, many of you, are, I suspect, the untapped hedgehogs > whose ideas on these things would see more readily what I dimly grasp must > be required, not just for achieving a strong AGI, but for achieving > something that we would see as an ethical, reasonable artificial mind that > expands human experience, rather than becomes a prison that reduces human > interactions to its own level. > > My own thinking is that lately, Cognitive Metaphor Theory (CMT), which I > knew more of in its earlier (now "standard model') days, is getting even > more interesting than it was. I'd done a transfer term to UC Berkeley to > study with George Lakoff, but we didn't hit it off well, perhaps I kept > asking him questions about social embeddedness, and similarities to > Vygotsky's theory of complex thought, and was too expressive about my > interest in linking out from his approach than folding in. It seems that > the idea I was rather woolily suggesting to Lakoff back then has caught on: > namely, that utterances could be explored for cultural variation and > historical embeddedness, a form ofsocial context to the narratives and > metaphors and blended spaces that underlay speech utterances and thought; > that there was a degree of social embodiment as well as physiological > embodiment through which language operated. I thought then, and it looks > like some other people now, are thinking that someone seeking to understand > utterances (as a strong AGI system would need to do) really, would need to > engage in internalizing and ventriloqusing a form of Geertz's thick > description of interactions. In such forms, words do not mean what they > say, and can have different affect that is a bit more complex than I think > temporal processing currently addresses. > > I think these are the kind of things that artificial intelligence would > need truly to advance, and that Bakhtin and Vygotsky and Leont'ev and in > the visual world, Eisenstein were addressing all along... > > And, of course, you guys. 
Regards,

Douglas Williams

On Tuesday, July 3, 2018, 10:35:45 AM PDT, David H Kirshner wrote:

The other side of the coin is that ineffable human experience is becoming more effable. Computers can now look at a human brain scan and determine the degree of subjectively experienced pain:

"In 2013, Tor Wager, a neuroscientist at the University of Colorado, Boulder, took the logical next step by creating an algorithm that could recognize pain's distinctive patterns; today, it can pick out brains in pain with more than ninety-five-per-cent accuracy. When the algorithm is asked to sort activation maps by apparent intensity, its ranking matches participants' subjective pain ratings. By analyzing neural activity, it can tell not just whether someone is in pain but also how intense the experience is."

So, perhaps the computer can't "feel our pain," but it can sure "sense our pain!"

Here's the full article:
https://www.newyorker.com/magazine/2018/07/02/the-neuroscience-of-pain

David

From: xmca-l-bounces@mailman.ucsd.edu On Behalf Of Glassman, Michael
Sent: Tuesday, July 3, 2018 8:16 AM
To: eXtended Mind, Culture, Activity
Subject: [Xmca-l] Re: Interesting article on robots and social learning

It seems like we are still having the same argument as when robots first came on the scene. In response to John McCarthy, who was claiming that eventually robots could have belief systems and motivations similar to humans through AI, John Searle wrote the Chinese room. There have been a lot of responses to the Chinese room over the years, and a number of digital philosophers claim it is no longer salient, but I don't think anybody has ever effectively answered his central question.

Just a quick recap. You come to a closed door and know there is a person on the other side. To communicate, you decide to teach the person on the other side Chinese. You do this by continuously exchanging rule systems under the door. After a while you are able to have a conversation with the individual in perfect Chinese. But does that person actually know Chinese just from the rule systems? I think Searle's major point is: are you really learning if you don't know why you're learning, or are you just repeating? Learning is embedded in the human condition, and the reason it works so well and is adaptable is that we understand it when we use what we learn in the world in response to others. To put it in terms of this post: does a bomb-defusing robot really learn how to defuse a bomb if it does not know why it is doing it? It might cut the right wires at the right time, but it doesn't understand why, and so it is not doing the task, just a series of steps it has been able to absorb. Is that the opposite of human learning?

What the researcher did really isn't that special at this point. Well, I definitely couldn't do it, and it is amazing, but it is in essence a miniature version of Libratus (which beat experts at Texas Hold 'em) and AlphaGo (which beat the second best Go player in the world). My guess is it is the same use of deep learning, in which the program integrates new information into what it is already capable of. If machines can learn from interacting with other humans, then they can learn from interacting with other machines. It is the same principle (though much, much simpler in this case).
The question is what does it mean. Are we defining learning down because of the zeitgeist? Greg started his post saying a socio-cultural theorist might be interested in this research. I wonder if they might be more likely to be the ones putting on the brakes, asking questions about it.

Michael

From: xmca-l-bounces@mailman.ucsd.edu On Behalf Of Andy Blunden
Sent: Tuesday, July 03, 2018 7:04 AM
To: xmca-l@mailman.ucsd.edu
Subject: [Xmca-l] Re: Interesting article on robots and social learning

Does a robot have "motivation"?

andy
------------------------------
Andy Blunden
http://www.ethicalpolitics.org/ablunden/index.htm

On 3/07/2018 5:28 PM, Rod Parker-Rees wrote:

Hi Greg,

What is most interesting to me about the understanding of learning which informs most AI projects is that it seems to assume that affect is irrelevant. The role of caring, liking, worrying etc. in social learning seems to be almost universally overlooked, because information is seen as something that can be "got" and "given" more than something that is distributed in relationships.

Does anyone know about any AI projects which consider how machines might feel about what they learn?

All the best,

Rod

From: xmca-l-bounces@mailman.ucsd.edu On Behalf Of Greg Thompson
Sent: 03 July 2018 02:50
To: eXtended Mind, Culture, Activity
Subject: [Xmca-l] Interesting article on robots and social learning

I'm ambivalent about this project but I suspect that some young CHAT scholar out there could have a lot to contribute to a project like this one:
https://www.sapiens.org/column/machinations/artificial-intelligence-culture/

-Greg

--
Gregory A. Thompson, Ph.D.
Assistant Professor
Department of Anthropology
880 Spencer W. Kimball Tower
Brigham Young University
Provo, UT 84602
WEBSITE: greg.a.thompson.byu.edu
http://byu.academia.edu/GregoryThompson
------------------------------

This email and any files with it are confidential and intended solely for the use of the recipient to whom it is addressed. If you are not the intended recipient then copying, distribution or other use of the information contained is strictly prohibited and you should not rely on it. If you have received this email in error please let the sender know immediately and delete it from your system(s). Internet emails are not necessarily secure. While we take every care, University of Plymouth accepts no responsibility for viruses and it is your responsibility to scan emails and their attachments. University of Plymouth does not accept responsibility for any changes made after it was sent. Nothing in this email or its attachments constitutes an order for goods or services unless accompanied by an official order form.

--
Gregory A. Thompson, Ph.D.
Assistant Professor
Department of Anthropology
880 Spencer W. Kimball Tower
Brigham Young University
Provo, UT 84602
WEBSITE: greg.a.thompson.byu.edu
http://byu.academia.edu/GregoryThompson
-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://mailman.ucsd.edu/pipermail/xmca-l/attachments/20180716/61451171/attachment.html

From dkirsh@lsu.edu Mon Jul 16 12:15:38 2018
From: dkirsh@lsu.edu (David H Kirshner)
Date: Mon, 16 Jul 2018 19:15:38 +0000
Subject: [Xmca-l] Re: Interesting article on robots and social learning
In-Reply-To:
References: <3B91542B0D4F274D871B38AA48E991F953B2B847@CIO-KRC-D1MBX04.osuad.osu.edu>
 <1860198877.3850789.1531537929986@mail.yahoo.com>
 <7c142464-a2b2-ede1-e258-388e449e10f6@marxists.org>
 <3B91542B0D4F274D871B38AA48E991F953B3E4C1@CIO-KRC-D1MBX04.osuad.osu.edu>
Message-ID:

Thanks, Greg.

The distributed aspect I see to Eliza is that analysands are using the program to think about their own lives, and perhaps resolve some of their internal conflicts. But that's not a function of AI; the same would happen if Eliza were a person.

I'm just learning from your post that some of the customer support I get is from bots. It troubles me that I didn't know that. When I get a robo-call on the phone, I don't hesitate to hang up. When I get a call from a human reading a script, I wonder whether to treat them as part of a machine, which they are, or as human beings, which they also are. I need to struggle through those decisions, for which I need knowledge of whom/what I'm speaking with.

As we expand the boundary of the human to include bots, don't we also diminish the human? We shouldn't do that uncritically.

David

From: xmca-l-bounces@mailman.ucsd.edu On Behalf Of Greg Thompson
Sent: Monday, July 16, 2018 10:29 AM
To: eXtended Mind, Culture, Activity
Subject: [Xmca-l] Re: Interesting article on robots and social learning

David Ki,

Interesting. It seems to me that this points to the distributed nature of cognition, since the effectiveness of the therapy must have something to do with the uptake of these responses by the analysand (along these lines, also consider Garfinkel's classic study of a therapy program based on yes/no questions that randomly produced answers and yet which the analysands felt was effective therapy). So I suppose one relevant question for the development of AI would be: What are the processes through which cognition can be distributed? And how can AI develop such processes? That seems like another way of trying to figure out how AI can "leverage" the cultural context around it.

[As for the Turing test, I like to play these games with customer service encounters to see if I can figure out if they are human or "bots".]

-greg

On Mon, Jul 16, 2018 at 2:55 AM, David H Kirshner wrote:

The Turing Test is usually considered to be a cognitive challenge for programmer and test subject: can I design a smart enough program; can I catch the flaws in the program? But early response to Eliza suggests otherwise.

Eliza was a natural language processing computer program created in the mid-60s to simulate a psychotherapist. Its architecture was highly simplistic: the structure of the input sentence (supplied by the patient, a real human being) was analyzed, key words were identified, and the sentence was then converted into a question. Here's a brief sample of interactive text from https://www.cs.umd.edu/class/fall2012/cmsc828d/oldreportfiles/guha1.pdf:

Patient: I am unhappy.
Eliza: Are you unhappy often?
Patient: Yes.
Eliza: Can you elaborate on that?
Patient: Too much work.
Eliza: Why do you say too much work?
Patient: I am working a lot.
...

As far as I know, Eliza was not subjected to a Turing test, though it was the first program with sufficient linguistic ability even to be a candidate for one.
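That architecture is easy to make concrete. A minimal Eliza-style rule engine in Python might look like the sketch below; the patterns and templates are invented stand-ins, not Weizenbaum's actual script, but the mechanism--match a key word, recast the sentence as a question--is the one just described:

import re

# Each rule: a pattern to spot in the patient's sentence, and a question
# template to recast the match into. Patterns here are illustrative only.
RULES = [
    (re.compile(r"\bi am (.+)", re.I), "Why do you say you are {0}?"),
    (re.compile(r"\byes\b", re.I), "Can you elaborate on that?"),
    (re.compile(r"\btoo much (.+)", re.I), "Why do you say too much {0}?"),
]

def eliza(utterance):
    for pattern, template in RULES:
        m = pattern.search(utterance)
        if m:
            groups = [g.rstrip(".!?") for g in m.groups()]
            return template.format(*groups)
    return "Please go on."  # default when no key word matches

print(eliza("I am working a lot."))  # -> Why do you say you are working a lot?
print(eliza("Too much work."))       # -> Why do you say too much work?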
Eliza was used with people who already knew it was a computer. But that didn't deter them from identifying with it: "many early users were convinced of ELIZA's intelligence and understanding, despite [program creator, Joseph] Weizenbaum's insistence to the contrary" (https://en.wikipedia.org/wiki/ELIZA).

So, the Turing Test is as much (or more) about our culture's tendencies of identification as it is about the technical practices of AI simulation.

David

From: xmca-l-bounces@mailman.ucsd.edu On Behalf Of Greg Thompson
Sent: Sunday, July 15, 2018 11:12 AM
To: eXtended Mind, Culture, Activity
Subject: [Xmca-l] Re: Interesting article on robots and social learning

And I'm still curious if any others out there might have anything to contribute to Doug's query regarding what CHAT theory (particularly developmental theories) might have to offer thinking about AI?

It seems an interesting question to think through even if you aren't on board with the larger AI project...

-greg

On Sun, Jul 15, 2018 at 10:55 AM, Andy Blunden wrote:

I think we go back to Martin's earlier ironic comment here, Michael.

Andy
________________________________
Andy Blunden
http://www.ethicalpolitics.org/ablunden/index.htm

On 15/07/2018 9:44 AM, Glassman, Michael wrote:

The Turing test, at least the test he wrote about in his article, is actually a bit more complicated than this, and especially poignant today. Turing's test of whether computers are acting as human was based on an old English game show called The Lying Game (I suppose one of the reasons for the title of the movie on Turing, though of course it had multiple meanings; but for some reason they never mentioned the origin of the phrase in the movie). Anyway, in the lying game the contestant had to listen to two individuals, one of whom was telling the truth about the situation and one of whom was lying. The way Turing describes it, it sounds quite brutal. The contestant had to figure out who the liar was (there was a similar, much milder version years later in the US). Anyway, Turing's proposal, if I remember correctly, was that a computer could be considered to be thinking like a human if the computer the contestant was listening to was lying and he or she couldn't tell. In essence, the computer would successfully lie. Everybody thinks Turing believed that computers would eventually think like humans, but my reading of the article was that he had no idea; as the computer stood at the time, there was no chance.

The reason this is so poignant is the Mueller indictments that came down yesterday. For those outside the U.S. or not following the news, the indictments were against Russian military officers leading a scheme to convince individuals of lies about various actors in the 2016 election (along with timed releases of information and break-ins to voting systems). But it is the propagation of lies by robots, and people believing them, that interests me. I feel like we aren't putting enough thought into that. Many of the people receiving the information could not tell it was not from humans and believed it, even though in many cases it was generated by robots--passing, it seems to me, Turing's test. How and why did this happen? Of course, Turing died before the Internet, so he couldn't have known about it. But I wonder if part of the reason the robots were successful is that they have the ability to mine, collect and aggregate people's biases and then reflect them back to us. We tend to engage with and believe things in the contexts of our own biases.
They say in salesmanship that the trick is figuring out what people want to hear and then couching whatever you want to say in that. Trump is a master of reading what a group of people want to hear at the moment, their biases, and then mirroring it back to them.

If we went back to the Chinese room, and the person inside was able to read our biases from our messages, would they then be human?

We live in a strange age.

From: xmca-l-bounces@mailman.ucsd.edu On Behalf Of Andy Blunden
Sent: Saturday, July 14, 2018 8:58 AM
To: xmca-l@mailman.ucsd.edu
Subject: [Xmca-l] Re: Interesting article on robots and social learning

I understand that the Turing Test is one which AI people can use to measure the success of their AI - if you can't tell the difference between a computer and a human interaction then the computer has passed the Turing test. I tend to rely on a kind of anti-Turing Test, that is, that if you can tell the difference between the computer and the human interaction, then you have passed the anti-Turing test, that is, you know something about humans.

Andy
________________________________
Andy Blunden
http://www.ethicalpolitics.org/ablunden/index.htm
-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://mailman.ucsd.edu/pipermail/xmca-l/attachments/20180716/e5fc2118/attachment-0001.html

From djwdoc@yahoo.com Mon Jul 16 18:44:42 2018
From: djwdoc@yahoo.com (Douglas Williams)
Date: Tue, 17 Jul 2018 01:44:42 +0000 (UTC)
Subject: [Xmca-l] Re: Interesting article on robots and social learning
In-Reply-To: <3B91542B0D4F274D871B38AA48E991F953B3E4C1@CIO-KRC-D1MBX04.osuad.osu.edu>
References: <3B91542B0D4F274D871B38AA48E991F953B2B847@CIO-KRC-D1MBX04.osuad.osu.edu>
 <1860198877.3850789.1531537929986@mail.yahoo.com>
 <7c142464-a2b2-ede1-e258-388e449e10f6@marxists.org>
 <3B91542B0D4F274D871B38AA48E991F953B3E4C1@CIO-KRC-D1MBX04.osuad.osu.edu>
Message-ID: <760181932.5151304.1531791882685@mail.yahoo.com>

Hi, Michael--

You're touching on a problem that emerged with chatbots pretty quickly with what's called "unsupervised learning." You can obtain a lot of input, and with natural language processing sort out a range of words that correspond to a common user intent (the meaning of your sentence--in computer applications, the user intent). But without screening for the nature of that intent, responses based simply on probabilities of words in sequence would get you into trouble:

Input: "I am really very far from being happy on this bright sunny day."
Response: "I feel happy too on sunny days!"
And of course humans are full of often complex or unpleasant intents, with malicious people willing to spend time to obtain malicious results. Probably the most famous case was Microsoft's Tay chatbot on Twitter, which drew an overload of Pepe-the-frog types with time on their hands, so the inevitable happened:

"Twitter taught Microsoft's friendly AI chatbot to be a racist asshole in less than a day" (James Vincent): "It took less than 24 hours for Twitter to corrupt an innocent AI chatbot. Yesterday, Microsoft unveiled Tay -- a ..."

This is the kind of thing that AI people work on quite a bit. Again, I'm not one of those people, but the approach to solving the problem is to use algorithms based on previous input examples where the user intent of the utterance is identified (by a human--that's called training), or the entities in the utterance are identified, or sentiment analysis is used, which identifies key words as indicators of positive or negative valences--the emotional intent in your human interlocutor's phrase--and the kinds of things you want your chatbot to pay attention to as part of the corpus that you use for your word-order probability output. Think of covering your child's ears from bad words...

I think this is something that the AI world feels cautiously optimistic about addressing... but note that the best way to solve this problem right now is to define a range of intents that you will respond to, where you can do something you want to respond to, and simply not respond to things you don't want to.

For better or worse, Trump has a talent that a chatbot can't challenge, though those who delight in making computers say bad (though incoherent, unconvincing) things can do it, without sentiment analysis.

Regards,
Doug
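A minimal sketch of that kind of screening--a sentiment gate in front of a set of allowed intents--might look like this in Python (the lexicon, valences, and intents are invented for illustration; real systems train these from human-labeled examples):

import re

# Crude sentiment lexicon: phrase -> valence. Invented for illustration.
VALENCES = {"happy": 1, "sunny": 1, "great": 1,
            "unhappy": -2, "sad": -2, "far from": -3}

# Intents the bot is willing to handle, with canned replies (illustrative).
INTENTS = [
    (["hello", "hi"], "Hello! How can I help?"),
    (["sunny", "rain", "weather"], "I feel happy too on sunny days!"),
]

def match(phrase, text):
    return re.search(r"\b" + re.escape(phrase) + r"\b", text.lower())

def sentiment(text):
    # Sum valences of matched keywords; negative = negative emotional intent.
    return sum(v for p, v in VALENCES.items() if match(p, text))

def respond(text):
    if sentiment(text) < 0:
        # Screen out the cheery canned reply that raw keyword matching gives.
        return "I'm sorry to hear that. Do you want to say more?"
    for keywords, reply in INTENTS:
        if any(match(k, text) for k in keywords):
            return reply
    return "I'm not sure I follow."  # unhandled intent: decline politely

print(respond("I am really very far from being happy on this bright sunny day."))
# Without the sentiment gate, "sunny" would trigger "I feel happy too on sunny days!"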
-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://mailman.ucsd.edu/pipermail/xmca-l/attachments/20180717/3e2ba615/attachment.html

From djwdoc@yahoo.com Mon Jul 16 19:01:16 2018
From: djwdoc@yahoo.com (Douglas Williams)
Date: Tue, 17 Jul 2018 02:01:16 +0000 (UTC)
Subject: [Xmca-l] Re: Interesting article on robots and social learning
In-Reply-To: <3B91542B0D4F274D871B38AA48E991F953B3E5D1@CIO-KRC-D1MBX04.osuad.osu.edu>
References: <3B91542B0D4F274D871B38AA48E991F953B2B847@CIO-KRC-D1MBX04.osuad.osu.edu>
 <1860198877.3850789.1531537929986@mail.yahoo.com>
 <7c142464-a2b2-ede1-e258-388e449e10f6@marxists.org>
 <3B91542B0D4F274D871B38AA48E991F953B3E4C1@CIO-KRC-D1MBX04.osuad.osu.edu>
 <3B91542B0D4F274D871B38AA48E991F953B3E5D1@CIO-KRC-D1MBX04.osuad.osu.edu>
Message-ID: <377811343.5150586.1531792876331@mail.yahoo.com>

Hi, Michael--

I think it could be, as there is certainly an interest in dealing with bias, especially once you move away from the relatively easily detectable kinds in chatbots. Frankly, I was thinking in part to check in with you guys to see what you thought, as the questions Kate Crawford poses here in her Neural Information Processing Systems conference keynote last year are precisely the ones of perspective and mind that I associate with CHAT.
Perhaps the most useful thing I can do is to put this in front of you all for consideration:

The Trouble with Bias - NIPS 2017 Keynote - Kate Crawford #NIPS2017
"Kate Crawford is a leading researcher, academic and author who has spent the last decade studying the social imp..."

Regards,
Doug

On Sunday, July 15, 2018, 05:26:23 PM PDT, Glassman, Michael wrote:

I wonder if where CHAT might be most interesting in addressing AI is on topics of bias and oppression. I believe there is a real danger that AI can be used as a tool for oppression, especially given some of its early uses. One of the things people discussing the possibilities of AI don't discuss nearly enough is that it picks up and integrates biases from the information it receives. Sometimes this can be interesting, such as with the program Libratus, which beat world-class poker players at Texas Hold 'em. One of the less discussed aspects is that one of the reasons it was capable of doing this is that it picks up on the playing biases of the players it is competing with and integrates them into its decision-making process. This, I think, is one of the reasons it has to play only one player at a time to be successful.

The danger is when it integrates these biases into a larger decision-making process. There is an AI program called Northpointe, used in the justice system, that uses a combination of big data and deep learning to make decisions about whether people convicted of crimes will wind up back in jail. This should have implications for sentencing. The program, surprise, tends to be much harsher with Black individuals than with white individuals. Even if you keep ethnicity out of the equation, it has enough other information to create a natural bias. There are also some of the more advanced translation programs, which tend to incorporate the biases of the languages (e.g., misogynistic ones) into the translations without those getting the translations realizing it. AI, especially machine learning, is in many ways a prisoner to the information it receives. Who decides what information it receives? Much like the intelligence tests of an earlier age, people will treat AI decision making as neutral or objective when it actually mirrors back (almost perfectly) those who are feeding it information.

Like I said, I don't see this point raised nearly enough. Perhaps CHAT is one of the fields in a position to constantly point this out, to explore the ways that AI is culturally biased, and the ways that those who dominate information flow can easily use it as a tool for oppression.

Michael
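Michael's point that machine learning is "a prisoner to the information it receives" can be illustrated in a few lines. The sketch below (fabricated data; assumes NumPy and scikit-learn) trains a classifier on biased historical decisions with the sensitive attribute withheld; a correlated proxy feature carries the bias into the model anyway:

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
group = rng.integers(0, 2, n)          # sensitive attribute (never shown to the model)
neighborhood = (group + (rng.random(n) < 0.1)) % 2   # proxy feature, ~90% correlated
prior_record = rng.integers(0, 2, n)   # a legitimate feature

# Biased historical labels: harsher outcomes for group 1 at equal records.
p_bad = 0.2 + 0.3 * prior_record + 0.3 * group
label = (rng.random(n) < p_bad).astype(int)

X = np.column_stack([neighborhood, prior_record])  # group itself is excluded
model = LogisticRegression().fit(X, label)

# The model still scores the two groups differently, via the proxy.
for g in (0, 1):
    print(f"group {g}: mean predicted risk = "
          f"{model.predict_proba(X[group == g])[:, 1].mean():.2f}")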
Andy Andy Blunden http://www.ethicalpolitics.org/ablunden/index.htm On 15/07/2018 9:44 AM, Glassman, Michael wrote: The Turing test, at least the test he wrote in his article, is actually a big more complicated than this, and especially poignant today.? Turing?s test of whether computers are acting as human was based on an old English game show called The Lying Game (I suppose one of the reasons for the title of the movie on Turing, though of course it had multiple meanings.? But for some reason they never mentioned the origin of the phrase in the movie).? Anyway in the lying game the contestant had to listen to two individuals, one of whom was telling the truth about the situation and one of whom was lying. The way Turing describes it, it sounds quite brutal.? The contestant had to figure out who the liar was (there was a similar much milder version years later in the US). Anyway Turing?s proposal, if I remember correctly, was that a computer could be considered thinking like a human if the comp the contestant was listening to was lying and he or she couldn?t tell. In essence the computer would successfully lie.? Everybody think Turing believed that computers would eventually think like humans but my reading of the article was that he had no idea, but as the computer stood at the time there was no chance. ? The reason this is so poignant is the Mueller indictments that came down yesterday.? For those outside the U.S. or not following the news the indictments were against Russian military leading a scheme to convince individuals of lies about various actor in the 2016 election (also times release of information and breaking in to voting systems).? But it is the propagation of lies by robots and people believing them that interests me.? I feel like we aren?t putting enough thought into that.? Many of the people receiving the information could not tell it was no from humans and believed it even though in many cases it was generated by robots, passing it seems to me Turing?s test.? How and why did this happen? Of course Turing died before the Internet so he couldn?t have known about it.? But I wonder if part of the reason the robots were successful is that they have the ability to mine, collect and aggregate people?s biases and then reflect them back to us.? We tend to engage, believe things in the contexts of our own biases.? They say in salesmanship that the trick is figuring out what people want to here and then couching whatever you want to see in that.? Trump is a master of reading what a group of people want to hear at the moment, their biases, and then mirroring it back to them ? If we went back to the Chinese room and the person inside was able to read our biases from our messages would they then be human.? ? We live in a strange age. ? From:xmca-l-bounces@mailman.ucsd.eduOn Behalf Of Andy Blunden Sent: Saturday, July 14, 2018 8:58 AM To: xmca-l@mailman.ucsd.edu Subject: [Xmca-l] Re: Interesting article on robots and social learning ? I understand that the Turing Test is one which AI people can use to measure the success of their AI - if you can't tell the difference between a computer and a human interaction then the computer has passed the Turing test. I tend to rely on a kind of anti-Turing Test, that is, that if you can tell the difference between the computer and the human interaction, then you have passed the anti-Turing test, that is, you know something about humans. 
On 14/07/2018 1:12 PM, Douglas Williams wrote:

Hi--

I think I'll come out of lurking for this one. Actually, what you're talking about with this pain algorithm system sounds like a modeling system that someone might need to develop what Alan Turing described as a P-type computing device. A P-type computer would receive its programming from inputs of pleasure and pain. It was probably derived from reading some of the behaviorist models of mind at the time. Turing thought that he was probably pretty close to being able to develop such a computing device, which, because its input was similar, could model human thought. The Eliza Rogerian analysis computer program was another early idea in which the goal was to model the patterns of human interaction, and gradually approach closer to human thought and interaction that way. And by the 2000s, the idea of the "singularity" was afloat, in which one could model human minds so well as to enable a human to be uploaded into a computer and live forever as software (Kurzweil, 2005). But given that we barely had a sufficient model of mind to say Boo with at the time (what is consciousness? where does intention come from? what is the balance of nature/nurture in motivation? speech utterances? and so on), and - you're right - AI doesn't have much of a theory of emotion either, the goal of computer software modeling human thought seemed very far away to me.

At someone's request, I wrote a rather whimsical paper called "What is Artificial Intelligence?" back in 2006 about such things. My argument was that statistical modeling of human interaction and capturing thought was not so easy after all, precisely because of the parts of mind we don't think of, and the social interactions that, at the time, were not a primary focus. I mused about that in the context of my trying to write a computer program by applying Chomsky's syntactic structures to interpret the intention of a few simple questions - without, alas, in my case, a corpus-supported Markov chain logic to do it. Generative grammar would take care of it, right? Wrong.

So as someone who had done a little primitive, incompetent attempt at speech modeling myself, and in the light of my later-acquired knowledge of CHAT, Burke, Bakhtin, Mead, and various other people in different fields, and of the tendency of people to interact with the world through cognitive biases, complexes, and embodied perceptions that were not readily available to artificial systems, I didn't think the singularity was so near.

The terrible thing about computer programs is that they do just what you tell them to do, and no more. They have no drive to improve, except as programmed. When they do improve, their creativity is limited. And the approach now still substantially is pattern-recognition based. The current paradigm is something called Convolutional Neural Network / Long Short-Term Memory networks (CNN/LSTM) for speech recognition, in which the convolutional neural networks reduce the variants of speech input into manageable patterns, and the LSTM layers handle the temporal processing (the temporal patterns of the real-world phenomena to which the AI system is responding).
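A minimal sketch of the CNN/LSTM stack Doug is describing, assuming TensorFlow/Keras; the input shape (100 spectrogram frames by 40 features) and the ten output classes are illustrative placeholders, not anyone's production architecture:

import tensorflow as tf
from tensorflow.keras import layers

model = tf.keras.Sequential([
    tf.keras.Input(shape=(100, 40)),                      # e.g. 100 spectrogram frames x 40 features
    layers.Conv1D(64, kernel_size=5, activation="relu"),  # convolution: local pattern detectors
    layers.MaxPooling1D(pool_size=2),                     # compress variant detail into manageable patterns
    layers.LSTM(128),                                     # recurrence: the temporal ordering of those patterns
    layers.Dense(10, activation="softmax"),               # e.g. ten phoneme or word classes
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.summary()

Everything in such a stack is pattern recognition over input statistics; nothing in it corresponds to motive, affect, or the social setting of the utterance, which is the point Doug develops next.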
But while such systems combined with natural language processing can increasingly mimic human response, and "learn" on their own, and while they are approaching the "weak" form of artificial general intelligence (AGI) - the intelligence needed for a machine to perform any intellectual task that a human being can - they are an awfully long way from "strong" AGI, that is, something approaching human consciousness. I think that's because they are a long way from capturing the kind of social embeddedness of almost all animal behavior, and the sense in which human cognition is embedded in the messy things, like emotion. A computer algorithm can recognize the patterns of emotion, but that's it. An AGI system that can experience emotions, or have motivation, is quite another thing entirely.

I can tell you that AI confidence is still there. In raising questions about cultural and physical embodiment in artificial intelligence interactions with someone in the field recently, he dismissed the idea as not being that relevant. His thought was that "what I find essential is that we acknowledge that there's no obvious evidence supporting that the current paradigm of CNN/LSTM under various reinforcement algorithms isn't enough for AGI and in particular for broad animal-like intelligence like that of ravens and dogs."

But ravens and dogs are embedded in social interaction, in intentionality, in consciousness - qualitatively different than ours, maybe, but there. Dogs don't do what you ask them to, always. When they do things, they do them for their own intentionality, which may be to please you, or may be to do something you never asked the dog to do, which is either inherent in its nature, or an expression of social interactions with you or others, many of which you and they may not be consciously aware of. The deep structure of metaphor, the spatiotemporal relations of language that Langacker describes as being necessary for construal, the worlds of narrativized experience, are mostly outside of the reckoning, so far as I know (though I'm not an expert - I could be at least partly wrong), of the current CNN/LSTM paradigm.

My old interlocutor in thinking about my language program, Noam Chomsky, has been a pretty sharp critic of the pattern recognition approach to artificial intelligence.

Here's Chomsky's take on the idea:
http://languagelog.ldc.upenn.edu/myl/PinkerChomskyMIT.html

And here's Peter Norvig's response; he's a director of research at Google, where Kurzweil is, and where, I assume, they are as close to the strong version of artificial general intelligence as anyone out there...
http://norvig.com/chomsky.html

Frankly, I would be quite interested in what you think of these things. I'm merely an Isaiah Berlin fox, chasing to and fro at all the pretty ideas out there. But you, many of you, are, I suspect, the untapped hedgehogs whose ideas on these things would see more readily what I dimly grasp must be required, not just for achieving a strong AGI, but for achieving something that we would see as an ethical, reasonable artificial mind that expands human experience, rather than becomes a prison that reduces human interactions to its own level.

My own thinking is that lately, Cognitive Metaphor Theory (CMT), which I knew more of in its earlier (now 'standard model') days, is getting even more interesting than it was.
I'd done a transfer term to UC Berkeley to study with George Lakoff, but we didn't hit it off well - perhaps because I kept asking him questions about social embeddedness and similarities to Vygotsky's theory of complex thought, and was too expressive about my interest in linking out from his approach rather than folding in. It seems that the idea I was rather woolily suggesting to Lakoff back then has caught on: namely, that utterances could be explored for cultural variation and historical embeddedness, a form of social context to the narratives and metaphors and blended spaces that underlie speech utterances and thought; that there was a degree of social embodiment as well as physiological embodiment through which language operated. I thought then, and it looks like some other people are now thinking, that someone seeking really to understand utterances (as a strong AGI system would need to do) would need to engage in internalizing and ventriloquising a form of Geertz's thick description of interactions. In such forms, words do not mean what they say, and can have different affect that is a bit more complex than I think temporal processing currently addresses.

I think these are the kind of things that artificial intelligence would need truly to advance, and that Bakhtin and Vygotsky and Leont'ev and, in the visual world, Eisenstein were addressing all along...

And, of course, you guys.

Regards,
Douglas Williams

On Tuesday, July 3, 2018, 10:35:45 AM PDT, David H Kirshner wrote:

The other side of the coin is that ineffable human experience is becoming more effable. Computers can now look at a human brain scan and determine the degree of subjectively experienced pain:

In 2013, Tor Wager, a neuroscientist at the University of Colorado, Boulder, took the logical next step by creating an algorithm that could recognize pain's distinctive patterns; today, it can pick out brains in pain with more than ninety-five-per-cent accuracy. When the algorithm is asked to sort activation maps by apparent intensity, its ranking matches participants' subjective pain ratings. By analyzing neural activity, it can tell not just whether someone is in pain but also how intense the experience is.

So, perhaps the computer can't "feel our pain," but it can sure "sense our pain!"

Here's the full article:
https://www.newyorker.com/magazine/2018/07/02/the-neuroscience-of-pain

David

From: xmca-l-bounces@mailman.ucsd.edu On Behalf Of Glassman, Michael
Sent: Tuesday, July 3, 2018 8:16 AM
To: eXtended Mind, Culture, Activity
Subject: [Xmca-l] Re: Interesting article on robots and social learning

It seems like we are still having the same argument as when robots first came on the scene. In response to John McCarthy, who was claiming that eventually robots could have belief systems and motivations similar to humans through AI, John Searle wrote the Chinese room. There have been a lot of responses to the Chinese room over the years, and a number of digital philosophers claim it is no longer salient, but I don't think anybody has ever effectively answered his central question.

Just a quick recap. You come to a closed door and know there is a person on the other side. To communicate, you decide to teach the person on the other side Chinese. You do this by continuously exchanging rule systems under the door. After a while you are able to have a conversation with the individual in perfect Chinese. But does that person actually know Chinese just from the rule systems?
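The recap can be made concrete with a toy version of the room: replies are produced by consulting a rule table, and the exchange is fluent while nothing is understood. A deliberately dumb Python sketch; the rule table is invented for illustration:

# The person behind the door: incoming symbols are matched to outgoing
# symbols by rule, with no access to what any of the symbols mean.
RULE_BOOK = {
    "你好": "你好！",                      # "Hello" -> "Hello!"
    "你会说中文吗？": "会，说得很流利。",   # "Do you speak Chinese?" -> "Yes, fluently."
    "你懂我的意思吗？": "当然懂。",         # "Do you understand me?" -> "Of course."
}

def behind_the_door(symbols: str) -> str:
    # Pure symbol manipulation: the lookup either succeeds or falls back.
    return RULE_BOOK.get(symbols, "请再说一遍。")  # fallback: "Please say that again."

for message in ["你好", "你会说中文吗？", "你懂我的意思吗？"]:
    print(message, "->", behind_the_door(message))

Every exchange comes back flawless; Searle's question is whether anything in the loop knows Chinese, and Michael's follow-up below is whether doing the task without knowing why counts as learning at all.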
I think Searle's major point is: are you really learning if you don't know why you're learning, or are you just repeating? Learning is embedded in the human condition, and the reason it works so well and is adaptable is that we understand it when we use what we learn in the world in response to others. To put it in terms of the post, does a bomb-defusing robot really learn how to defuse a bomb if it does not know why it is doing it? It might cut the right wires at the right time, but it doesn't understand why, and therefore is not doing the task, just a series of steps it has been able to absorb. Is that the opposite of human learning?

What the researcher did really isn't that special at this point. Well, I definitely couldn't do it, and it is amazing, but it is in essence a miniature version of Libratus (which beat experts at Texas Hold 'em) and AlphaGo (which beat the second best Go player in the world). My guess is it is the same use of deep learning, in which the program integrates new information into what it is already capable of. If machines can learn from interacting with other humans, then they can learn from interacting with other machines. It is the same principle (though much, much simpler in this case). The question is what it means. Are we defining learning down because of the zeitgeist? Greg started his post saying a socio-cultural theorist might be interested in this research. I wonder if they might be more likely to be the ones putting on the brakes, asking questions about it.

Michael

From: xmca-l-bounces@mailman.ucsd.edu On Behalf Of Andy Blunden
Sent: Tuesday, July 03, 2018 7:04 AM
To: xmca-l@mailman.ucsd.edu
Subject: [Xmca-l] Re: Interesting article on robots and social learning

Does a robot have "motivation"?

andy
Andy Blunden
http://www.ethicalpolitics.org/ablunden/index.htm

On 3/07/2018 5:28 PM, Rod Parker-Rees wrote:

Hi Greg,

What is most interesting to me about the understanding of learning which informs most AI projects is that it seems to assume that affect is irrelevant. The role of caring, liking, worrying etc. in social learning seems to be almost universally overlooked, because information is seen as something that can be "got" and "given" more than something that is distributed in relationships.

Does anyone know about any AI projects which consider how machines might feel about what they learn?

All the best,
Rod

From: xmca-l-bounces@mailman.ucsd.edu On Behalf Of Greg Thompson
Sent: 03 July 2018 02:50
To: eXtended Mind, Culture, Activity
Subject: [Xmca-l] Interesting article on robots and social learning

I'm ambivalent about this project but I suspect that some young CHAT scholar out there could have a lot to contribute to a project like this one:
https://www.sapiens.org/column/machinations/artificial-intelligence-culture/

-Greg

--
Gregory A. Thompson, Ph.D.
Assistant Professor
Department of Anthropology
880 Spencer W. Kimball Tower
Brigham Young University
Provo, UT 84602
WEBSITE: greg.a.thompson.byu.edu
http://byu.academia.edu/GregoryThompson

This email and any files with it are confidential and intended solely for the use of the recipient to whom it is addressed. If you are not the intended recipient then copying, distribution or other use of the information contained is strictly prohibited and you should not rely on it. If you have received this email in error please let the sender know immediately and delete it from your system(s). Internet emails are not necessarily secure.
While we take every care, University of Plymouth accepts no responsibility for viruses and it is your responsibility to scan emails and their attachments. University of Plymouth does not accept responsibility for any changes made after it was sent. Nothing in this email or its attachments constitutes an order for goods or services unless accompanied by an official order form.

--
Gregory A. Thompson, Ph.D.
Assistant Professor
Department of Anthropology
880 Spencer W. Kimball Tower
Brigham Young University
Provo, UT 84602
WEBSITE: greg.a.thompson.byu.edu
http://byu.academia.edu/GregoryThompson

-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://mailman.ucsd.edu/pipermail/xmca-l/attachments/20180717/27c9cb64/attachment.html

From djwdoc@yahoo.com Mon Jul 16 19:25:09 2018
From: djwdoc@yahoo.com (Douglas Williams)
Date: Tue, 17 Jul 2018 02:25:09 +0000 (UTC)
Subject: [Xmca-l] Re: Interesting article on robots and social learning
In-Reply-To: References: <3B91542B0D4F274D871B38AA48E991F953B2B847@CIO-KRC-D1MBX04.osuad.osu.edu> <1860198877.3850789.1531537929986@mail.yahoo.com> <7c142464-a2b2-ede1-e258-388e449e10f6@marxists.org>
Message-ID: <1872661447.5150670.1531794309254@mail.yahoo.com>

Hi, Greg--

Here's that whimsical paper. The thing that remains most valid to me now, knowing a little bit more, is the externality question - namely, the degree to which the tiny amount of mind that constitutes conscious activity still receives (as is inevitable, with a pragmatic application) nearly all the attention of AI research. That still is mostly the case today, and that is still a problem that would need to be understood and modeled far better than it seems to be now for Kurzweil's singularity to be near, no matter how much powerful processing is tossed at information storage and retrieval, and pattern recognition. Right now, any artificial cognitive process remains a simulacrum, more like the reflection in Plato's cave than anything resembling the things outside of the cave, with all of their movement, dynamism, pleasure, pain, and empathy - and not least, with the interaction between things and their environment, which is all invisible in the reflection in the cave.

Regards,
Doug

On Saturday, July 14, 2018 08:18:09 AM PDT, Greg Thompson wrote:

Andy, thanks for sending this since it alerted me to Doug's message (which seems to have not been included in this thread for me and so this is the first time I'm seeing it - not sure if the XMCA list is "playing with us" or something...)

Doug, I agree with what you have pointed to here as far as the important role of embodiment and social and cultural embeddedness. Would you mind sharing your whimsical paper that you mentioned?

. . .

--
Gregory A. Thompson, Ph.D.
Assistant Professor
Department of Anthropology
880 Spencer W. Kimball Tower
Brigham Young University
Provo, UT 84602
WEBSITE: greg.a.thompson.byu.edu
http://byu.academia.edu/GregoryThompson

-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://mailman.ucsd.edu/pipermail/xmca-l/attachments/20180717/61bfae16/attachment-0001.html
-------------- next part --------------
A non-text attachment was scrubbed...
Name: What_Is_Artificial_Intelligence.pdf
Type: application/pdf
Size: 888899 bytes
Desc: not available
Url : http://mailman.ucsd.edu/pipermail/xmca-l/attachments/20180717/61bfae16/attachment-0001.pdf

From julie.waddington@udg.edu Tue Jul 17 07:33:52 2018
From: julie.waddington@udg.edu (JULIE WADDINGTON)
Date: Tue, 17 Jul 2018 16:33:52 +0200 (CEST)
Subject: [Xmca-l] Re: Interesting article on robots and social learning
In-Reply-To: <377811343.5150586.1531792876331@mail.yahoo.com>
References: <3B91542B0D4F274D871B38AA48E991F953B2B847@CIO-KRC-D1MBX04.osuad.osu.edu> <1860198877.3850789.1531537929986@mail.yahoo.com> <7c142464-a2b2-ede1-e258-388e449e10f6@marxists.org> <3B91542B0D4F274D871B38AA48E991F953B3E4C1@CIO-KRC-D1MBX04.osuad.osu.edu> <3B91542B0D4F274D871B38AA48E991F953B3E5D1@CIO-KRC-D1MBX04.osuad.osu.edu> <377811343.5150586.1531792876331@mail.yahoo.com>
Message-ID: <50521.79.152.174.82.1531838032.squirrel@montseny.udg.edu>

Doug,

Thank you for sharing the video with Kate Crawford's keynote speech. Only managed to watch half so far, but from what I've gleaned up to now, it makes a strong argument for the need for SOCIO-TECHNICAL analysis, which fits in with the concerns/questions being raised by everyone.

Talking of bias in AI and its huge ramifications (racism, sexism, homophobia, etc.), Crawford warns that: "When we consider bias just as a technical issue, then we're already missing the (bigger?) picture. The default of all data gathered reflects the deepest structural biases of society."

Sounds obvious to state that social bias always precedes biases in AI, but the examples given and discussion of them provide much food for thought.

Thanks again for sharing,

Julie

Dra. Julie Waddington
Departament de Didàctiques Específiques
Facultat d'Educació i Psicologia
Universitat de Girona
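Julie's Crawford quote, that the defaults of the data reflect structural bias, is also the statistical mechanism behind the translation example Michael raised earlier in the thread (the much-reported tendency of translation systems to render gender-neutral pronouns as "he is a doctor" but "she is a nurse"). A toy Python sketch of how associations learned from a skewed corpus come to stand in for the world; the corpus here is invented and absurdly small, purely for illustration:

from collections import Counter

corpus = [
    "he is a doctor", "he is an engineer", "he is a leader",
    "she is a nurse", "she is a teacher", "she is an assistant",
] * 50  # a deliberately skewed toy corpus standing in for web-scale text

pairs = Counter()
for sentence in corpus:
    words = sentence.split()
    for i, w in enumerate(words):
        for c in words[i + 1:]:        # count every co-occurring word pair
            pairs[(w, c)] += 1
            pairs[(c, w)] += 1

def association(a, b):
    # share of a's co-occurrences that involve b
    total = sum(n for (x, _), n in pairs.items() if x == a)
    return pairs[(a, b)] / total

for pair in [("he", "doctor"), ("she", "doctor"), ("she", "nurse")]:
    print("%s ~ %s: %.3f" % (pair[0], pair[1], association(*pair)))

A system that must choose a pronoun has nothing to go on but these statistics, so it reproduces the corpus's defaults: no line of code mentions gender, yet the bias is fully present in what the program "knows".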
From mcole@ucsd.edu Tue Jul 17 08:51:30 2018
From: mcole@ucsd.edu (mike cole)
Date: Tue, 17 Jul 2018 08:51:30 -0700
Subject: [Xmca-l] Fwd: [commfac] [commdept] 2 Open Professor Positions - Dept of Communication at UC San Diego
In-Reply-To: References: Message-ID:

Y'all come!
Mike

---------- Forwarded message ---------
From: Jennifer Neri
Date: Mon, Jul 16, 2018 at 1:50 PM
Subject: [commfac] [commdept] 2 Open Professor Positions - Dept of Communication at UC San Diego
To: comm dept

Hello,

Please find below and attached two current openings within the Department of Communication at UC San Diego for an *Assistant Professor of Media and Popular Culture* and an *advanced Assistant to mid-Associate Professor of Critical Journalism Studies*, both to begin Fall 2019. The deadline to apply is Sept 15, 2018. Feel free to circulate these ads to any individuals or listservs of interest. Thank you!

---

The Department of Communication (http://communication.ucsd.edu/) within the Division of Social Sciences at the University of California, San Diego is seeking to make an appointment at the Assistant Professor level, to begin Fall 2019, in the following area:

*Media and Popular Culture:* Areas of specialization are open but might include film, television, music, video games, streaming video, social media, fashion, or cross-platform content. Successful applicants will present a research agenda that builds on one or more of the department's distinctive, interdisciplinary strengths in visual culture, material culture, consumer culture, cultural memory, political economy, and the sociology of media and culture, while also demonstrating strong methodological skills that include or combine critical theory, industrial or archival research, ethnography, and/or textual and discourse analysis. Candidates will have (or will have obtained by the July 1, 2019 start date) a PhD in communication or related fields in the social sciences and humanities.

The Department of Communication and the University of California San Diego are committed to academic excellence and diversity within the faculty, staff, and student body. We seek candidates who will maintain the highest standards of scholarship and professional activity and make a strong and meaningful contribution to the development of a campus climate that supports equality and diversity. Salary is commensurate with qualifications and based on University of California pay scales.

To ensure full consideration, all application materials must be submitted electronically by September 15, 2018 at the following link: https://apol-recruit.ucsd.edu/apply/JPF01783

Application must include: a two to three page cover letter; CV; statement detailing your research interests; statement detailing how your research, teaching and service would contribute to campus diversity goals; writing sample(s); and contact information for three reference letters.

UC San Diego is an Equal Opportunity/Affirmative Action Employer with a strong institutional commitment to excellence through diversity. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, disability, age or protected veteran status.
----

The Department of Communication (http://communication.ucsd.edu/) within the Division of Social Sciences at the University of California, San Diego is seeking to make an appointment at the rank of advanced Assistant to mid-Associate Professor, to begin Fall 2019, in the following area:

*Critical Journalism Studies:* Our ideal candidate will have an active and creative research and teaching program that focuses on the evolving nature of journalism in the age of social media. Areas of particular interest include: the shifting norms, genres, and practices of news production and consumption in today's complex mediated environment; the changing boundaries between journalism and other narrative forms and cultural platforms for information dissemination; the political economy of news organizations and news work; the role of algorithms and infrastructure in the circulation of news; and the interactions of journalists and other actors in the production and circulation of news. Successful candidates will have strong methodological skills that augment the department's interdisciplinary program and strengths in cultural and historical analysis, institutional analysis (including political economy), comparative analysis, ethnography, and textual and discourse analysis. Candidates from a wide range of disciplinary backgrounds are encouraged to apply.

The Department of Communication and the University of California San Diego are committed to academic excellence and diversity within the faculty, staff, and student body. We seek candidates who will maintain the highest standards of scholarship and professional activity and make a strong and meaningful contribution to the development of a campus climate that supports equality and diversity. Salary is commensurate with qualifications and based on University of California pay scales.

To ensure full consideration, all application materials must be submitted electronically by September 15, 2018, at the following link(s): https://apol-recruit.ucsd.edu/apply/JPF01786 (Assistant level), https://apol-recruit.ucsd.edu/apply/JPF01784 (Associate level)

Application must include: a two to three page cover letter; CV; statement detailing your research interests; statement detailing how your research, teaching, and service would contribute to campus diversity goals; writing sample(s); and contact information for three reference letters.

UC San Diego is an Equal Opportunity/Affirmative Action Employer with a strong institutional commitment to excellence through diversity. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, disability, age or protected veteran status.

----

Best Regards,
Jennifer

---
*Jennifer Neri*
*Academic Personnel Analyst*
Department of Communication
Urban Studies and Planning Program
(858) 534-0234 | MCC, Room 131

-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://mailman.ucsd.edu/pipermail/xmca-l/attachments/20180717/6ee045e4/attachment.html
-------------- next part --------------
A non-text attachment was scrubbed...
Name: Job Ad - Asst Prof - Media and Pop Culture - UC San Diego Dept of Comm.pdf
Type: application/pdf
Size: 77270 bytes
Desc: not available
Url : http://mailman.ucsd.edu/pipermail/xmca-l/attachments/20180717/6ee045e4/attachment.pdf
-------------- next part --------------
A non-text attachment was scrubbed...
Name: Job Ad - Asst Assoc Prof - Journalism Studies - UC San Diego Dept of Comm.pdf
Type: application/pdf
Size: 78370 bytes
Desc: not available
Url : http://mailman.ucsd.edu/pipermail/xmca-l/attachments/20180717/6ee045e4/attachment-0001.pdf

From dcmar@ucdavis.edu Tue Jul 17 11:44:43 2018
From: dcmar@ucdavis.edu (Danny C Martinez)
Date: Tue, 17 Jul 2018 18:44:43 +0000
Subject: [Xmca-l] Postdoc on Teachers as Learners @ UC Davis School of Education
Message-ID: <6BB61F1E-BB0A-42CF-B2CF-F1E7B22B864F@ucdavis.edu>

Our University of California, Davis Teachers as Learners (TaL) research team is looking for a postdoctoral scholar for the 2018-2019 year (with possible continuation for 1-2 additional years). Review of applications begins August 14, 2018. Please take a look and share with your networks! The extended call can be found at the link below:

https://recruit.ucdavis.edu/apply/JPF02290

Feel free to contact me if you have any questions,

Danny

~~~~~~~~~~~~~~~~~~~~~~~~~~~
Danny C. Martinez, Ph.D.
Assistant Professor
School of Education
University of California, Davis
One Shields Avenue
Davis, CA 95616
(530) 752-9749
dcmar@ucdavis.edu

-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://mailman.ucsd.edu/pipermail/xmca-l/attachments/20180717/090f50df/attachment.html
-------------- next part --------------
A non-text attachment was scrubbed...
Name: Firstgen-Email-Sticker-Blue.jpg
Type: image/jpeg
Size: 9852 bytes
Desc: Firstgen-Email-Sticker-Blue.jpg
Url : http://mailman.ucsd.edu/pipermail/xmca-l/attachments/20180717/090f50df/attachment.jpg
-------------- next part --------------
A non-text attachment was scrubbed...
Name: UPEIdentifierl.png
Type: image/png
Size: 32292 bytes
Desc: UPEIdentifierl.png
Url : http://mailman.ucsd.edu/pipermail/xmca-l/attachments/20180717/090f50df/attachment.png

From glassman.13@osu.edu Tue Jul 17 12:42:39 2018
From: glassman.13@osu.edu (Glassman, Michael)
Date: Tue, 17 Jul 2018 19:42:39 +0000
Subject: [Xmca-l] Re: Interesting article on robots and social learning
In-Reply-To: <50521.79.152.174.82.1531838032.squirrel@montseny.udg.edu>
References: <3B91542B0D4F274D871B38AA48E991F953B2B847@CIO-KRC-D1MBX04.osuad.osu.edu> <1860198877.3850789.1531537929986@mail.yahoo.com> <7c142464-a2b2-ede1-e258-388e449e10f6@marxists.org> <3B91542B0D4F274D871B38AA48E991F953B3E4C1@CIO-KRC-D1MBX04.osuad.osu.edu> <3B91542B0D4F274D871B38AA48E991F953B3E5D1@CIO-KRC-D1MBX04.osuad.osu.edu> <377811343.5150586.1531792876331@mail.yahoo.com> <50521.79.152.174.82.1531838032.squirrel@montseny.udg.edu>
Message-ID: <3B91542B0D4F274D871B38AA48E991F953B3E855@CIO-KRC-D1MBX04.osuad.osu.edu>

Hi David and Julie and Greg and whoever else is interested,

Finally got a chance to take a look at the Kate Crawford talk, and I sort of feel it represents both what is hopeful and what is not about machine learning, and maybe answers Greg's question a bit about the role of CHAT (and other more participatory, process-oriented social science theories) in machine learning.

First, what is hopeful. I think it is great that this is a topic that people seem really worried about. What I am a bit concerned about, though, is two things. One is the general lack of awareness of how these issues have played out in other areas. The second is what I see as the continued commitment to centralization in machine learning - the assumption that really smart people from a few research shops are going to figure out how to fix this.
So to my first concern with Dr. Crawford's talk: it was like the 20th century didn't exist at all in the development of thinking about the roles that bias and classification play in our lives and how that is being replicated by machine learning in possibly damaging ways. When Dr. Crawford started talking about classification, for instance, I was hoping (against hope) that she would talk about Mead and the work on classification that emerged out of the social psychology program at the University of Chicago. And/or the beginnings of action research. Or one of the other theories that see classification as a purposeful and destructive process. I was also hoping she might talk about a more modern theory like intersectionality. Instead she simply talked about pretty ancient ideas that more or less danced around the issue. I wonder if the reason for this is that a lot of people in machine learning tend to think the problem can be solved through coding (a bit more on that in a bit) rather than taking programming into the community and making a real effort to create a symbiotic relationship between machine and human activity (perhaps this is where CHAT comes in).

Near the beginning Dr. Crawford talks about "socio-technical," a term that has been used so broadly that it seems to have lost most meaning. But the term socio-technical actually did, or does, have a meaning. It was coined by the action theorist Eric Trist, who suggested that communities themselves understand best how to use technologies to serve their functions. You bring in the technology with an understanding of how it works, but then you rely on the community to implement and change it to meet its needs. In some ways I feel that is what was at least partially done in the Fifth Dimension project and other CHAT projects. Maybe to keep machine learning from being destructive to communities we need to find similar uses for it in the community (I spent part of the summer talking to a bunch of Chinese students immersed in AI for education - one of the reasons this is at the forefront of my mind - and we discussed this quite a bit). They aren't as committed to the whole community-of-computing-geniuses thing as we are in this culture, at least not those students.

The important issue here is decentralization of problem solving - something Tim Berners-Lee has been talking about a lot (https://www.vanityfair.com/news/2018/07/the-man-who-created-the-world-wide-web-has-some-regrets), and maybe just as important for AI as it is for the Internet. It would mean that a lot of the work of deciding what machine learning would look like would be done on the fly, in the community. And again I think some of the work done in CHAT may work for this.

Okay, just meanderings of my mind I guess. I hope some of it made sense.

Michael
The default of all data gathered reflects the deepest structural biases of society". Sounds obvious to state that social bias always precedes biases in AI, but the examples given and discussion of them provide much food for thought. Thanks again for sharing, Julie > Hi, Michael--I think it could be, as there is certainly an interest > in dealing with bias, especially once you move away from the > relatively easily detectable ones in chatbots.? Frankly, I was > thinking in part to check in with you guys to see what you thought, as > the questions Kate Crawford poses here in the Neural Information > Processing Conference keynote last year are precisely the ones of > perspective and mind that I associate with CHAT. Perhaps the most > useful thing I can do is to put this in front of you all for > consideration: > The Trouble with Bias - NIPS 2017 Keynote - Kate Crawford #NIPS2017 > > > | > | > | > | | | > > | > > | > | > | | > The Trouble with Bias - NIPS 2017 Keynote - Kate Crawford #NIPS2017 > > Kate Crawford is a leading researcher, academic and author who has > spent the last decade studying the social imp... > | > > | > > | > > > Regards,Doug > On ???Sunday???, ???July??? ???15???, ???2018??? > ???05???:???26???:???23??? ???PM??? ???PDT, Glassman, Michael > wrote: > > > I wonder if where CHAT might be most interesting in addressing AI are > on topics of bias and oppression.?? I believe that there is a real > danger that AI can be used as a tool for oppression, especially from > some of its early uses.?? One of the things people discussing the > possibilities of AI don???t discuss near enough is that it picks up > and integrates biases from the information it receives.?? Sometimes > this can be interesting such as the program Libratus that beat world > class poker players at Texas Hold ???em.?? One of the less discussed > aspects is that one of the reasons it was capable of doing this is it > picks up on the playing biases of the players it is competing with and > integrates them into its decision making process.?? This I think is > one of the reasons that it has to play only one player at a time to be successful. > > ? > > The danger is when it integrates these biases into a larger decision > making process.?? There is an AI program called Northpointe used by > the justice department that uses a combination of big data and deep > learning to make decisions about whether people convicted of crimes > will wind up back in jail.?? This should have implications for > sentencing.?? The program, surprise, tends to be much harsher with > Black individuals than white individuals.?? Even if you keep ethnicity > outside of the equation it has enough other information to create a > natural bias.?? There are also some of the more advanced translation > programs which tend to incorporate the biases of the languages (e.g. > mysoginistic) into the translations without those getting the > translations realizing it.?? AI , especially machine learning, is in > many ways a prisoner to the information it receives.?? Who decides > what information it receives? Much like the intelligence tests of an > earlier age people will use AI decision making as being neutral or > objective when it actually mirrors back (almost > perfectly) those who are feeding it information. > > ? > > Like I said I don???t see this point raised nearly enough.?? 
> Perhaps CHAT is one of the fields in a position to constantly point this out, to explore the ways that AI is culturally biased and the ways that those who dominate information flow can easily use it as a tool for oppression.
>
> Michael
>
> From: xmca-l-bounces@mailman.ucsd.edu On Behalf Of Greg Thompson
> Sent: Sunday, July 15, 2018 12:12 PM
> To: eXtended Mind, Culture, Activity
> Subject: [Xmca-l] Re: Interesting article on robots and social learning
>
> And I'm still curious if any others out there might have anything to contribute to Doug's query regarding what CHAT theory (particularly developmental theories) might have to offer thinking about AI?
>
> It seems an interesting question to think through even if you aren't on board with the larger AI project...
>
> -greg
>
> On Sun, Jul 15, 2018 at 10:55 AM, Andy Blunden wrote:
>
> I think we go back to Martin's earlier ironic comment here, Michael.
>
> Andy
>
> Andy Blunden
> http://www.ethicalpolitics.org/ablunden/index.htm
>
> On 15/07/2018 9:44 AM, Glassman, Michael wrote:
>
> The Turing test, at least the test he wrote about in his article, is actually a bit more complicated than this, and especially poignant today. Turing's test of whether computers are acting as human was based on an old English game show called The Lying Game (I suppose one of the reasons for the title of the movie on Turing, though of course it had multiple meanings. But for some reason they never mentioned the origin of the phrase in the movie). Anyway, in the lying game the contestant had to listen to two individuals, one of whom was telling the truth about the situation and one of whom was lying. The way Turing describes it, it sounds quite brutal. The contestant had to figure out who the liar was (there was a similar, much milder version years later in the US). Anyway, Turing's proposal, if I remember correctly, was that a computer could be considered to be thinking like a human if the computer the contestant was listening to was lying and he or she couldn't tell. In essence the computer would successfully lie. Everybody thinks Turing believed that computers would eventually think like humans, but my reading of the article was that he had no idea - though as the computer stood at the time there was no chance.
>
> The reason this is so poignant is the Mueller indictments that came down yesterday. For those outside the U.S. or not following the news, the indictments were against Russian military leading a scheme to convince individuals of lies about various actors in the 2016 election (also timed release of information and breaking in to voting systems). But it is the propagation of lies by robots and people believing them that interests me. I feel like we aren't putting enough thought into that. Many of the people receiving the information could not tell it was not from humans and believed it, even though in many cases it was generated by robots - passing, it seems to me, Turing's test. How and why did this happen? Of course Turing died before the Internet, so he couldn't have known about it. But I wonder if part of the reason the robots were successful is that they have the ability to mine, collect and aggregate people's biases and then reflect them back to us. We tend to engage with, believe things in, the contexts of our own biases.
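Michael's Northpointe example is easy to make concrete in code. What follows is a toy sketch in Python with fabricated data and a plain logistic regression - not the actual program, which is proprietary - showing how a model can reproduce group bias even when the protected attribute is withheld, because a correlated proxy carries it back in:

    # Toy sketch of the proxy effect described above (all data fabricated;
    # not Northpointe's actual model). The protected attribute is withheld
    # from the model, but a correlated "neighborhood" feature stands in
    # for it. Assumes numpy and scikit-learn are installed.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 10_000
    group = rng.integers(0, 2, n)                 # protected attribute, never shown to the model
    neighborhood = group + rng.normal(0, 0.3, n)  # proxy feature correlated with group
    prior_arrests = rng.poisson(1 + group, n)     # biased policing baked into the history

    # Historical labels inherit the bias of the data-generating process
    reoffend = (0.3 * neighborhood + 0.4 * prior_arrests + rng.normal(0, 1, n)) > 1.0

    X = np.column_stack([neighborhood, prior_arrests])  # ethnicity is excluded
    model = LogisticRegression(max_iter=1000).fit(X, reoffend)
    scores = model.predict_proba(X)[:, 1]
    print("mean risk score, group 0:", scores[group == 0].mean())
    print("mean risk score, group 1:", scores[group == 1].mean())

Run it and the two group means come apart even though group membership never enters the model; the proxy does the work, which is why "keeping ethnicity outside of the equation" is no guarantee.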
> They say in salesmanship that the trick is figuring out what people want to hear and then couching whatever you want to say in that. Trump is a master of reading what a group of people want to hear at the moment, their biases, and then mirroring it back to them.
>
> If we went back to the Chinese room and the person inside was able to read our biases from our messages, would they then be human?
>
> We live in a strange age.
>
> From: xmca-l-bounces@mailman.ucsd.edu On Behalf Of Andy Blunden
> Sent: Saturday, July 14, 2018 8:58 AM
> To: xmca-l@mailman.ucsd.edu
> Subject: [Xmca-l] Re: Interesting article on robots and social learning
>
> I understand that the Turing Test is one which AI people can use to measure the success of their AI - if you can't tell the difference between a computer and a human interaction then the computer has passed the Turing test. I tend to rely on a kind of anti-Turing Test, that is, that if you can tell the difference between the computer and the human interaction, then you have passed the anti-Turing test, that is, you know something about humans.
>
> Andy
>
> Andy Blunden
> http://www.ethicalpolitics.org/ablunden/index.htm
>
> On 14/07/2018 1:12 PM, Douglas Williams wrote:
>
> Hi--
>
> I think I'll come out of lurking for this one. Actually, what you're talking about with this pain algorithm system sounds like a modeling system that someone might need to develop what Alan Turing described as a P-type computing device. A P-type computer would receive its programming from inputs of pleasure and pain. It was probably derived from reading some of the behavioralist models of mind at the time. Turing thought that he was probably pretty close to being able to develop such a computing device, which, because its input was similar, could model human thought. The Eliza Rogersian analysis computer program was another early idea, in which the goal was to model the patterns of human interaction and gradually approach closer to human thought and interaction that way. And by the 2000s, the idea of the "singularity" was afloat, in which one could model human minds so well as to enable a human to be uploaded into a computer and live forever as software (Kurzweil, 2005). But given that we barely had a sufficient model of mind to say Boo with at the time (what is consciousness? where does intention come from? what is the balance of nature/nurture in motivation? speech utterances? and so on), and - you're right - AI doesn't have much of a theory of emotion either, the goal of computer software modeling human thought seemed very far away to me.
>
> At someone's request, I wrote a rather whimsical paper called "What is Artificial Intelligence?" back in 2006 about such things. My argument was that statistical modeling of human interaction and capturing thought was not so easy after all, precisely because of the parts of mind we don't think of, and the social interactions that, at the time, were not a primary focus. I mused about that in the context of my trying to write a computer program by applying Chomsky's syntactic structures to interpret the intention of a few simple questions--without, alas, in my case, a corpus-supported Markov chain logic to do it. Generative grammar would take care of it, right? Wrong.
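For anyone who hasn't met the idea, the "corpus-supported Markov chain logic" Doug mentions can be sketched in a few lines of Python. This is a minimal illustration over an invented three-question corpus, not his actual program: each next word is sampled from the words that followed the current one in the corpus.

    # Minimal sketch of a corpus-supported Markov chain: continue an
    # utterance by sampling each next word from the words that followed
    # the current word in a corpus. The tiny "corpus" here is invented.
    import random
    from collections import defaultdict

    corpus = ("what time is it . what day is it . "
              "what time does the museum open .").split()

    follows = defaultdict(list)
    for w1, w2 in zip(corpus, corpus[1:]):
        follows[w1].append(w2)

    def continue_from(word, length=6):
        out = [word]
        for _ in range(length):
            options = follows.get(out[-1])
            if not options:
                break
            out.append(random.choice(options))
        return " ".join(out)

    print(continue_from("what"))   # e.g. "what time is it . what day"

It produces plausible-looking continuations with no grammar and no model of intention at all - which is both its charm and, as Doug goes on to say, its limit.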
> So as someone who had done a little primitive, incompetent attempt at speech modeling myself, and in the light of my later-acquired knowledge of CHAT, Burke, Bakhtin, Mead, and various other people in different fields, and of the tendency of people to interact with the world through cognitive biases, complexes, and embodied perceptions that were not readily available to artificial systems, I didn't think the singularity was so near.
>
> The terrible thing about computer programs is that they do just what you tell them to do, and no more. They have no drive to improve, except as programmed. When they do improve, their creativity is limited. And the approach now is still substantially pattern-recognition based. The current paradigm is something called Convolutional Neural Network Long Short-Term Memory Networks (CNN/LSTM) for speech recognition, in which the convolutional neural networks reduce the variants of speech input into manageable patterns, and the LSTMs handle temporal processing (the temporal patterns of the real world phenomena to which the AI system is responding). But while such systems combined with natural language processing can increasingly mimic human response, and "learn" on their own, and while they are approaching the "weak" form of artificial general intelligence (AGI) - the intelligence needed for a machine to perform any intellectual task that a human being can - they are an awfully long way from "strong" AGI, that is, something approaching human consciousness. I think that's because they are a long way from capturing the kind of social embeddedness of almost all animal behavior, and the sense in which human cognition is embedded in the messy things, like emotion. A computer algorithm can recognize the patterns of emotion, but that's it. An AGI system that can experience emotions, or have motivation, is quite another thing entirely.
>
> I can tell you that AI confidence is still there. In raising questions about cultural and physical embodiment in artificial intelligence interactions with someone in the field recently, he dismissed the idea as not being that relevant. His thought was that "what I find essential is that we acknowledge that there's no obvious evidence supporting that the current paradigm of CNN/LSTM under various reinforcement algorithms isn't enough for AGI and in particular for broad animal-like intelligence like that of ravens and dogs."
>
> But ravens and dogs are embedded in social interaction, in intentionality, in consciousness--qualitatively different from ours, maybe, but there. Dogs don't always do what you ask them to. When they do things, they do them from their own intentionality, which may be to please you, or may be to do something you never asked the dog to do, which is either inherent in its nature, or an expression of social interactions with you or others, many of which you and they may not be consciously aware of. The deep structure of metaphor, the spatiotemporal relations of language that Langacker describes as being necessary for construal, the worlds of narrativized experience, are mostly outside of the reckoning, so far as I know (though I'm not an expert--I could be at least partly wrong), of the current CNN/LSTM paradigm.
>
> My old interlocutor in thinking about my language program, Noam Chomsky, has been a pretty sharp critic of the pattern recognition approach to artificial intelligence.
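To give a concrete sense of the CNN/LSTM shape Doug describes, here is a skeletal sketch in Python with Keras (assumed installed). Layer sizes, the 100 x 40 input (time steps by spectral features) and the ten output classes are arbitrary placeholders, not any production recognizer:

    # Skeletal version of the CNN/LSTM pattern: a convolutional front end
    # reduces variable speech features to stable local patterns, and an
    # LSTM models how those patterns unfold in time. All sizes are
    # illustrative placeholders.
    from tensorflow import keras
    from tensorflow.keras import layers

    model = keras.Sequential([
        keras.Input(shape=(100, 40)),              # 100 frames of 40 spectral features
        layers.Conv1D(64, 5, activation="relu"),   # reduce local variation to manageable patterns
        layers.MaxPooling1D(2),
        layers.LSTM(128),                          # temporal processing of the pooled patterns
        layers.Dense(10, activation="softmax"),    # e.g. ten word/phoneme classes
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
    model.summary()

The convolution compresses local spectral variation; the LSTM models its ordering in time. Nothing in the model represents what an utterance is for, which is the gap Doug is pointing at.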
> Here's Chomsky's take on the idea:
>
> http://languagelog.ldc.upenn.edu/myl/PinkerChomskyMIT.html
>
> And here's Peter Norvig's response; he's a director of research at Google, where Kurzweil is, and where, I assume, they are as close to the strong version of artificial general intelligence as anyone out there...
>
> http://norvig.com/chomsky.html
>
> Frankly, I would be quite interested in what you think of these things. I'm merely an Isaiah Berlin fox, chasing to and fro at all the pretty ideas out there. But you, many of you, are, I suspect, the untapped hedgehogs whose ideas on these things would see more readily what I dimly grasp must be required, not just for achieving a strong AGI, but for achieving something that we would see as an ethical, reasonable artificial mind that expands human experience, rather than becomes a prison that reduces human interactions to its own level.
>
> My own thinking is that lately, Cognitive Metaphor Theory (CMT), which I knew more of in its earlier (now "standard model") days, is getting even more interesting than it was. I'd done a transfer term to UC Berkeley to study with George Lakoff, but we didn't hit it off well; perhaps I kept asking him too many questions about social embeddedness and similarities to Vygotsky's theory of complex thought, and was too expressive about my interest in linking out from his approach rather than folding in. It seems that the idea I was rather woolily suggesting to Lakoff back then has caught on: namely, that utterances could be explored for cultural variation and historical embeddedness, a form of social context to the narratives and metaphors and blended spaces that underlay speech utterances and thought; that there was a degree of social embodiment as well as physiological embodiment through which language operated. I thought then, and it looks like some other people are now thinking, that someone seeking to understand utterances (as a strong AGI system would need to do) would really need to engage in internalizing and ventriloquising a form of Geertz's thick description of interactions. In such forms, words do not mean what they say, and can have different affect that is a bit more complex than I think temporal processing currently addresses.
>
> I think these are the kind of things that artificial intelligence would need truly to advance, and that Bakhtin and Vygotsky and Leont'ev and, in the visual world, Eisenstein were addressing all along...
>
> And, of course, you guys.
>
> Regards,
>
> Douglas Williams
>
> On Tuesday, July 3, 2018, 10:35:45 AM PDT, David H Kirshner wrote:
>
> The other side of the coin is that ineffable human experience is becoming more effable.
>
> Computers can now look at a human brain scan and determine the degree of subjectively experienced pain:
>
> In 2013, Tor Wager, a neuroscientist at the University of Colorado, Boulder, took the logical next step by creating an algorithm that could recognize pain's distinctive patterns; today, it can pick out brains in pain with more than ninety-five-per-cent accuracy. When the algorithm is asked to sort activation maps by apparent intensity, its ranking matches participants' subjective pain ratings. By analyzing neural activity, it can tell not just whether someone is in pain but also how intense the experience is.
>
> So, perhaps the computer can't "feel our pain," but it can sure "sense our pain!"
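The classifier David quotes can be caricatured in a dozen lines. This is a bare-bones stand-in, not Wager's actual method: random vectors with an injected signal play the role of activation maps, and a plain logistic regression plays the role of his pain-pattern model.

    # Bare-bones stand-in for a pain-pattern classifier: treat each
    # activation map as a feature vector and fit a linear classifier.
    # The "maps" are fabricated; Wager's real method is far more
    # sophisticated. Assumes numpy and scikit-learn.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(1)
    n_scans, n_voxels = 200, 500
    in_pain = rng.integers(0, 2, n_scans)                    # 1 = scanned while in pain
    pattern = rng.normal(1.0, 0.1, n_voxels)                 # a fixed pain-linked pattern
    maps = rng.normal(0, 1, (n_scans, n_voxels)) + np.outer(in_pain, pattern)

    clf = LogisticRegression(max_iter=1000)
    print(cross_val_score(clf, maps, in_pain, cv=5).mean())  # held-out accuracy

High held-out accuracy falls out of pure pattern separation; as David says, the computer senses the pain without feeling any of it.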
> Here's the full article:
>
> https://www.newyorker.com/magazine/2018/07/02/the-neuroscience-of-pain
>
> David
>
> From: xmca-l-bounces@mailman.ucsd.edu On Behalf Of Glassman, Michael
> Sent: Tuesday, July 3, 2018 8:16 AM
> To: eXtended Mind, Culture, Activity
> Subject: [Xmca-l] Re: Interesting article on robots and social learning
>
> It seems like we are still having the same argument as when robots first came on the scene. In response to John McCarthy, who was claiming that eventually robots could have belief systems and motivations similar to humans through AI, John Searle wrote the Chinese room. There have been a lot of responses to the Chinese room over the years, and a number of digital philosophers claim it is no longer salient, but I don't think anybody has ever effectively answered his central question.
>
> Just a quick recap. You come to a closed door and know there is a person on the other side. To communicate, you decide to teach the person on the other side Chinese. You do this by continuously exchanging rule systems under the door. After a while you are able to have a conversation with the individual in perfect Chinese. But does that person actually know Chinese just from the rule systems? I think Searle's major point is: are you really learning if you don't know why you're learning, or are you just repeating? Learning is embedded in the human condition, and the reason it works so well and is adaptable is because we understand it when we use what we learn in the world in response to others. To put it in response to the post: does a bomb-defusing robot really learn how to defuse a bomb if it does not know why it is doing it? It might cut the right wires at the right time, but it doesn't understand why, and therefore is not doing the task, just a series of steps it has been able to absorb. Is that the opposite of human learning?
>
> What the researcher did really isn't that special at this point. Well, I definitely couldn't do it and it is amazing, but it is in essence a miniature version of Libratus (which beat experts at Texas Hold 'em) and AlphaGo (which beat the second best Go player in the world). My guess is it is the same use of deep learning, in which the program integrates new information into what it is already capable of. If machines can learn from interacting with other humans then they can learn from interacting with other machines. It is the same principle (though much, much simpler in this case). The question is what does it mean. Are we defining learning down because of the zeitgeist? Greg started his post saying a socio-cultural theorist might be interested in this research. I wonder if they might be more likely to be the ones putting on the brakes, asking questions about it.
>
> Michael
>
> From: xmca-l-bounces@mailman.ucsd.edu On Behalf Of Andy Blunden
> Sent: Tuesday, July 03, 2018 7:04 AM
> To: xmca-l@mailman.ucsd.edu
> Subject: [Xmca-l] Re: Interesting article on robots and social learning
>
> Does a robot have "motivation"?
>
> andy
>
> Andy Blunden
> http://www.ethicalpolitics.org/ablunden/index.htm
>
> On 3/07/2018 5:28 PM, Rod Parker-Rees wrote:
>
> Hi Greg,
> What is most interesting to me about the understanding of learning which informs most AI projects is that it seems to assume that affect is irrelevant. The role of caring, liking, worrying etc. in social learning seems to be almost universally overlooked because information is seen as something that can be "got" and "given" more than something that is distributed in relationships.
>
> Does anyone know about any AI projects which consider how machines might feel about what they learn?
>
> All the best,
>
> Rod
>
> From: xmca-l-bounces@mailman.ucsd.edu On Behalf Of Greg Thompson
> Sent: 03 July 2018 02:50
> To: eXtended Mind, Culture, Activity
> Subject: [Xmca-l] Interesting article on robots and social learning
>
> I'm ambivalent about this project but I suspect that some young CHAT scholar out there could have a lot to contribute to a project like this one:
>
> https://www.sapiens.org/column/machinations/artificial-intelligence-culture/
>
> -Greg
>
> --
> Gregory A. Thompson, Ph.D.
> Assistant Professor
> Department of Anthropology
> 880 Spencer W. Kimball Tower
> Brigham Young University
> Provo, UT 84602
> WEBSITE: greg.a.thompson.byu.edu
> http://byu.academia.edu/GregoryThompson
>
> This email and any files with it are confidential and intended solely for the use of the recipient to whom it is addressed. If you are not the intended recipient then copying, distribution or other use of the information contained is strictly prohibited and you should not rely on it. If you have received this email in error please let the sender know immediately and delete it from your system(s). Internet emails are not necessarily secure. While we take every care, University of Plymouth accepts no responsibility for viruses and it is your responsibility to scan emails and their attachments. University of Plymouth does not accept responsibility for any changes made after it was sent. Nothing in this email or its attachments constitutes an order for goods or services unless accompanied by an official order form.
>
> --
> Gregory A. Thompson, Ph.D.
> Assistant Professor
> Department of Anthropology
> 880 Spencer W. Kimball Tower
> Brigham Young University
> Provo, UT 84602
> WEBSITE: greg.a.thompson.byu.edu
> http://byu.academia.edu/GregoryThompson

Dra. Julie Waddington
Departament de Didàctiques Específiques
Facultat d'Educació i Psicologia
Universitat de Girona

From hhdave15@gmail.com Tue Jul 17 21:54:11 2018
From: hhdave15@gmail.com (Harshad Dave)
Date: Wed, 18 Jul 2018 10:24:11 +0530
Subject: [Xmca-l] If economics is immune from ethics, why should exploitation be a topic of discussion in economics?
Message-ID:

Why do we discuss exploitation?

As per Marx's views, ethics has no influence on economic processes. But does exploitation have no link with ethical feelings? The sense of exploitation is absolutely linked with our ethical feelings. If economics is immune from the influence of ethics, and the sense of *exploitation* is founded on our ethical evaluation, then discussion of *exploitation* should find no place among the topics of economics/political economy.

Harshad Dave
hhdave15@gmail.com

-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://mailman.ucsd.edu/pipermail/xmca-l/attachments/20180718/72147c61/attachment.html

From andyb@marxists.org Tue Jul 17 22:17:35 2018
From: andyb@marxists.org (Andy Blunden)
Date: Wed, 18 Jul 2018 15:17:35 +1000
Subject: [Xmca-l] Re: If economics is immune from ethics, why should exploitation be a topic of discussion in economics?
In-Reply-To:
References:
Message-ID:

Harshad,

According to Marx, "exploitation," as he uses the concept in /Capital/, is not an ethical concept at all; it simply means making a gain by utilising an affordance, as in "exploiting natural resources." Many "Marxist economists" today adhere to this view. However, I am one of those who hold a different view. And the legacy of Stalinism is evidence of some deficit in the legacy of Marx's writing - it was so easy for Stalin to dismiss ethics as just so much nonsense and claim the mantle of Marxism!

Much as I admire Marx, he was wrong on Ethics. He was a creature of his times in this respect, or rather, in endeavouring to /not/ be a creature of his times, he made an opposite error. He held all ethics in contempt, as if religion had a monopoly on this topic and it were nothing more than some kind of confidence trick to fool the masses. (Many today share this view.) In fact, contrary to his own self-consciousness, /Capital/ is a seminal work of ethics.

The problem stems from Hegel and from Marx's efforts to make a positive critique of Hegel. As fine a work of Ethics as Hegel's /Philosophy of Right/ is, it had certain problems which Marx had to overcome. These included Hegel's insistence that the state alone could determine right and wrong (the state could of course make errors, but in the long run there is no extramundane source of Right beyond the state). This was something impossible for Marx to accept. And yet Hegel's idea of Ethics as something objective, contained in the evolving forms of life (rather than Pure Reason inherent in every individual, as Kant held, or from God via His agents on Earth, the priesthood), Marx wished to embrace and continue.

So the situation is very complex. The foremost work on Ethics was authored by a person who did not believe he was writing about Ethics at all.

Here is a page with lots of resources on this question: https://www.marxists.org/subject/ethics/index.htm

Andy
------------------------------------------------------------
Andy Blunden
http://www.ethicalpolitics.org/ablunden/index.htm

On 18/07/2018 2:54 PM, Harshad Dave wrote:
> Why do we discuss exploitation?
> [snip]
> Harshad Dave
> hhdave15@gmail.com

-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://mailman.ucsd.edu/pipermail/xmca-l/attachments/20180718/6b4f64ed/attachment.html

From hhdave15@gmail.com Tue Jul 17 23:00:31 2018
From: hhdave15@gmail.com (Harshad Dave)
Date: Wed, 18 Jul 2018 11:30:31 +0530
Subject: [Xmca-l] Business association and its consequential effect on a nation.
Message-ID:

Should we not analyse the consequential effects of business association between two nations with a large difference in their socio-economic formations?

Before industrial development, human societies in different places were populated more or less in synchronization with their prevailing socio-economic formations.
Some nations developed with the industrial revolution and entered into business associations/relations with other human societies that held assets of natural resources. This association led to the exploitation of those natural resources, and those societies received industrial products that helped or instigated only their population strength, without any appreciable development of their prevailing socio-economic formation. This proves to be the most dangerous condition for a nation/society: population growth in folds alongside only trifling change in its socio-economic formation. Now this danger is producing its glaring results. If you sharply analyze the root cause of unrest in some of the north African countries - though superficial reasons are discussed in the media/newspapers and among the world's leading nations - at the grass roots these countries became extremely overpopulated under the flow of industrial products received in the past, without any change in the socio-economic formation of their social systems. If we (developed countries) are fair, we should analyze the present unrest in the above perplexity.

I would like to know my friends' views on this.

Harshad Dave
Email: hhdave15@gmail.com

-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://mailman.ucsd.edu/pipermail/xmca-l/attachments/20180718/56d9dc1f/attachment.html

From julie.waddington@udg.edu Wed Jul 18 00:11:43 2018
From: julie.waddington@udg.edu (JULIE WADDINGTON)
Date: Wed, 18 Jul 2018 09:11:43 +0200 (CEST)
Subject: [Xmca-l] Re: Interesting article on robots and social learning
In-Reply-To: <3B91542B0D4F274D871B38AA48E991F953B3E855@CIO-KRC-D1MBX04.osuad.osu.edu>
References: <3B91542B0D4F274D871B38AA48E991F953B2B847@CIO-KRC-D1MBX04.osuad.osu.edu> <1860198877.3850789.1531537929986@mail.yahoo.com> <7c142464-a2b2-ede1-e258-388e449e10f6@marxists.org> <3B91542B0D4F274D871B38AA48E991F953B3E4C1@CIO-KRC-D1MBX04.osuad.osu.edu> <3B91542B0D4F274D871B38AA48E991F953B3E5D1@CIO-KRC-D1MBX04.osuad.osu.edu> <377811343.5150586.1531792876331@mail.yahoo.com> <50521.79.152.174.82.1531838032.squirrel@montseny.udg.edu> <3B91542B0D4F274D871B38AA48E991F953B3E855@CIO-KRC-D1MBX04.osuad.osu.edu>
Message-ID: <50788.79.152.174.82.1531897903.squirrel@montseny.udg.edu>

Hi Michael,

Made much sense indeed :) Not so sure about your concern that Crawford ignores the 20th century. The examples concerning stop-and-frisk laws, gender bias in corporate culture, etc. seem to me to do the trick of highlighting how bias has played out and been replicated in machine learning/systems.

Understand your concern about lack of awareness. I myself am coming at all this from a position of ignorance/inexperience that I find quite worrying (why is it that 'we' don't all know all these things?!!!). That's probably why my response to Crawford's talk is more enthusiastic. The more talks/explanations that help highlight these issues, the better, and the more they can be incorporated into discussions with different communities.

Just more meanderings... :)

Julie

> [snip]
Dra. Julie Waddington
Departament de Didàctiques Específiques
Facultat d'Educació i Psicologia
Universitat de Girona

From greg.a.thompson@gmail.com Wed Jul 18 18:09:01 2018
From: greg.a.thompson@gmail.com (Greg Thompson)
Date: Thu, 19 Jul 2018 01:09:01 +0000
Subject: [Xmca-l] Re: If economics is immune from ethics, why should exploitation be a topic of discussion in economics?
In-Reply-To:
References:
Message-ID:

Thanks Andy, that's very interesting/informative. Would you say that this is true for his 1844 economic and philosophical manuscripts as well? I'm thinking of the notion of "species being" as an ethical concept.

This is all well over my head, but I thought I'd try the question.
-greg

On Wed, Jul 18, 2018 at 5:17 AM, Andy Blunden wrote:
> [snip]
>
> Here is a page with lots of resources on this question:
> https://www.marxists.org/subject/ethics/index.htm
>
> Andy
> ------------------------------
> Andy Blunden
> http://www.ethicalpolitics.org/ablunden/index.htm
> On 18/07/2018 2:54 PM, Harshad Dave wrote:
>
> Why do we discuss exploitation?
> As per Marx's views, ethics has no influence on economic processes. Does
> exploitation have no link with ethical feelings? The sense of exploitation
> is absolutely linked with our ethical feelings. If economics is immune from
> the influence of ethics, and the sense of *exploitation* is founded on our
> ethical evaluation, then discussion of *exploitation* should not find a
> place in the topics of economics/political economics.
> Harshad Dave
> hhdave15@gmail.com

--
Gregory A. Thompson, Ph.D.
Assistant Professor
Department of Anthropology
880 Spencer W. Kimball Tower
Brigham Young University
Provo, UT 84602
WEBSITE: greg.a.thompson.byu.edu
http://byu.academia.edu/GregoryThompson

From greg.a.thompson@gmail.com Wed Jul 18 18:12:23 2018
From: greg.a.thompson@gmail.com (Greg Thompson)
Date: Thu, 19 Jul 2018 01:12:23 +0000
Subject: [Xmca-l] Re: If economics is immune from ethics, why should exploitation be a topic of discussion in economics?
In-Reply-To:
References:
Message-ID:

Sorry, I misread your post Andy. Don't think my question really makes
sense in light of your meaning. (I assume that you'd agree with the
sentiment of my question...).
-greg

On Thu, Jul 19, 2018 at 1:09 AM, Greg Thompson wrote:

> Thanks Andy, that's very interesting/informative. Would you say that this
> is true for his 1844 economic and philosophical manuscripts as well? I'm
> thinking of the notion of "species being" as an ethical concept.
>
> This is all well over my head, but I thought I'd try the question.
> -greg
>
> On Wed, Jul 18, 2018 at 5:17 AM, Andy Blunden wrote:
>
>> [snip]

From andyb@marxists.org Wed Jul 18 18:57:30 2018
From: andyb@marxists.org (Andy Blunden)
Date: Thu, 19 Jul 2018 11:57:30 +1000
Subject: [Xmca-l] Re: If economics is immune from ethics, why should exploitation be a topic of discussion in economics?
In-Reply-To:
References:
Message-ID: <5a7ac4de-a4ec-a59b-7c87-7ae551563517@marxists.org>

Yes. The 1844 Manuscripts contain more obviously ethical language and
ideas than *Capital* does at first sight, but we still have the same
contradiction that wherever Marx addresses Ethics he dismisses it. In the
later works he seems to be advocating a "scientific objectivism" which is
not so much the case with 1844. I neglected to mention in responding to
Harshad that Marx also rejected with justified contempt "emotivist"
approaches to Ethics, i.e., the reduction of Ethics to feelings and
preferences, which became very fashionable in the decades after his death.
As you could see from that link I posted, the Social Democracy made a lot
of efforts to fill this gap, but this was all swept away with the Russian
Revolution and the Third International. I think it is only via Hegel that a
Marxist Ethics can be recovered, but it is challenging.

Andy
------------------------------------------------------------
Andy Blunden
http://www.ethicalpolitics.org/ablunden/index.htm
On 19/07/2018 11:12 AM, Greg Thompson wrote:
> Sorry, I misread your post Andy. Don't think my question really makes
> sense in light of your meaning. (I assume that you'd agree with the
> sentiment of my question...).
> -greg
>
> [snip]

From haydizulfei@rocketmail.com Thu Jul 19 03:01:50 2018
From: haydizulfei@rocketmail.com (Haydi Zulfei)
Date: Thu, 19 Jul 2018 10:01:50 +0000 (UTC)
Subject: [Xmca-l] Re: If economics is immune from ethics, why should exploitation be a topic of discussion in economics?
In-Reply-To: <5a7ac4de-a4ec-a59b-7c87-7ae551563517@marxists.org>
References: <5a7ac4de-a4ec-a59b-7c87-7ae551563517@marxists.org>
Message-ID: <1505116228.10345988.1531994510038@mail.yahoo.com>

Everything begins with whether we accept "Being before Thinking". "The
German Ideology", Plekhanov's considerations, and "The Holy Family" could
come to one's help here. Being is not to be taken as something abstract,
irrespective of flesh and blood and the "contamination of any true thought
with the Substantial". Being means: we as earthly, bloody, fleshy, bony
human beings ARE, tightly and indispensably confined and constrained by the
conditions and circumstances of our material daily lives, with the outside
world (which pre-existed us and gave rise to our very existence when
conditions were ripe) confronting us. Now Marx labeled the whole of our
inorganic, immaterial life arising from the specific social relations based
on people's conditions of life "ideology". But he has no problem with this
kind of ideology. He ridicules the type of ideology Max Stirner, Bruno
Bauer, Feuerbach, etc. preach, which is nothing but inviting the German
people to follow thoughts, morals, laws, philosophy, religion, etc. which
have their source not in the actual, real conditions of their lives but in
disputes over whether a horse's teeth are to be identified by opening its
mouth and verifying the objective fact, or by speculating for the truth of
the fact in a den by pure thought. One cannot study any history without
considering the material conditions and relations which have given
existence to those histories. As to the first kind of ideology (pinned to
the conditions of life), Marx remarks that these ideological trends, and
the corresponding social relations, sustain and prevail up to the point
where they would hinder and block the continued advancement of the material
forces of actual life.

Now Harshad and the supporters stick to the concept of "exploitation" to
justify their commitment to the immorality of Marx's worldview, forgetting
that Marx is consistent with his ideas. First, no capitalist to this day
has given up his greed for interest, surplus, and utmost luxury and
mercilessness for the sake of ethics. On the contrary, the whole world is
now burning in flames which corporations have created. Trump is himself a
slave of his greed and co-corporations. Huge amounts of ethics will not
prevent him from inflicting all kinds of maladies on the oppressed people
for the sake of "America Alone" - that is, not America but his and other
corporations' benefits. The most supreme, miraculous Prophetic Ethics would
not shake him for a moment to think of real peace and compassion. Within
corporations no Ethics reign. Money, the embodiment of surplus, saves no
place even for friends (two world wars to this day).
Second, Ethics is a science, a concept, a category, AN IDEA OR THE IDEA.
That is, we cannot say ANY idea creates the objective world, but rather
that what passes on Earth, in our conditions of life, gives rise to this or
that idea of the human being. Men also create their conditions of life
through their ideas but not BY their ideas (Theses on Feuerbach). Ideas
AFFECT circumstances INDIRECTLY, but we should add, only up to the
threshold of non-necessities. Freedom is knowing the necessary. Weaving
looms would not bring back shuttles. The Net will not tolerate walking
CHAPARS, walking postmen. Marx gives data for "exploitation". He gives data
for the claim that as long as the appropriation of surplus value works, the
labourer remains captive to what he produces and there is no way out; his
captivity arises from the objective material conditions of his social life,
not from speculations or phantoms. This differs from "do this", "do not do
that" without any base in reality, or through a permanent uptake of
"supposed good and evil". He cannot flee like the plebeians to the cities
to find a random occupation in a guild and use it to make a future. He
seems to be an individual but is not. He is a social miser. He does not
have a say in civil society, which is so much praised by Hegel. The fact
that Marx's so-called Ethics!! is embodied in measurable exploitation makes
a world of difference, to the point where Ethics becomes disparate and
non-existent. It is like Feuerbach, who revolts against religion but in the
end again makes a RELIGION of whatever, like LOVE, etc. Surplus value has
made the current world. Thoughts and ideologies of the governing classes
penetrate every big and small mind; that is, Capital exploits the oppressed
both materially and spiritually, and the remedy is to know the real
mechanism and to devise real defense systems in the dimensions of the whole
world to combat all filth and arrogance.

Hegel saw everything in order, therefore he created his "Philosophy of
Right". His public are not the real, individual people who make crucial
social decisions of their own to the detriment of the established, existing
social order, which highly consists of the Monarch and the Corporations.
What is real is rational. In those days, this meant no more than that what
exists is rational. His pupils gradually revolted against him. One might
say he saw everything in flux. But the peril lies in the fact that his
RELATIVITY and belief in continual change led, unfortunately and
ultimately, to the ABSOLUTE. He reached the end of the world. His great
discoveries in philosophy are not to be denied. And he also came to the
real society, but again saw the real society as an alienated creature of
THE IDEA promoted to the RANK OF THE ABSOLUTE. His passivity dictated that
he accept finding the cure for all maladies in returning to THE IDEA. That
was his Unique, who brought out all salvation. And no surprise at the
glorification on the part of the followers on matters of Ethics and
Preaching. One cannot be a Marxist and a Hegelian at the same time, or
conceal one's high sincerity to Hegel to the detriment of Marx while
preserving the right to the orthodoxies for Marx. Great apologies if I'm
not able to continue the discussion if demanded. No pretext!

Sincerely,
Haydi

On Thursday, July 19, 2018, 6:30:10 AM GMT+4:30, Andy Blunden wrote:

> Yes. The 1844 Manuscripts contain more obviously ethical language and
> ideas than *Capital* does at first sight, but we still have the same
> contradiction that wherever Marx addresses Ethics he dismisses it.
> [snip]

From billkerr@gmail.com Thu Jul 19 04:45:28 2018
From: billkerr@gmail.com (Bill Kerr)
Date: Thu, 19 Jul 2018 19:45:28 +0800
Subject: [Xmca-l] Re: If economics is immune from ethics, why should exploitation be a topic of discussion in economics?
In-Reply-To: <5a7ac4de-a4ec-a59b-7c87-7ae551563517@marxists.org>
References: <5a7ac4de-a4ec-a59b-7c87-7ae551563517@marxists.org>
Message-ID:

It's a while since I looked at this but Vanessa Wills has her PhD thesis
"Marx and Morality" online:
http://d-scholarship.pitt.edu/10867/1/VWills_ETD_2011.pdf

On Thu, Jul 19, 2018 at 9:57 AM, Andy Blunden wrote:

> [snip]

From rakahu@utu.fi Thu Jul 19 07:50:39 2018
From: rakahu@utu.fi (Rauno Huttunen)
Date: Thu, 19 Jul 2018 14:50:39 +0000
Subject: [Xmca-l] Re: If economics is immune from ethics, why should exploitation be a topic of discussion in economics?
In-Reply-To:
References: <5a7ac4de-a4ec-a59b-7c87-7ae551563517@marxists.org>
Message-ID: <36b2fc772bc44bcd92b4edbeb559827a@EX13-07.utu.fi>

Hello,

I have reflected on these issues too. Those few ethical lines in Das
Kapital - their ethical frame of reference - remind me very much of Adam
Smith's theory of moral sentiments. Indeed those lines seem to invoke
Smith's ethical sentiments of the "impartial spectator".

Greetings from very hot Finland, 32 Celsius

Rauno Huttunen

From: Bill Kerr [mailto:billkerr@gmail.com]
Sent: Thursday, 19 July 2018 14:45
To: eXtended Mind, Culture, Activity
Subject: [Xmca-l] Re: If economics is immune from ethics, why should exploitation be a topic of discussion in economics?

It's a while since I looked at this but Vanessa Wills has her PhD thesis
"Marx and Morality" online:
http://d-scholarship.pitt.edu/10867/1/VWills_ETD_2011.pdf

On Thu, Jul 19, 2018 at 9:57 AM, Andy Blunden wrote:

> [snip]

From ulvi.icil@gmail.com Thu Jul 19 07:56:44 2018
From: ulvi.icil@gmail.com (Ulvi İçil)
Date: Thu, 19 Jul 2018 17:56:44 +0300
Subject: [Xmca-l] Re: If economics is immune from ethics, why should exploitation be a topic of discussion in economics?
In-Reply-To:
References:
Message-ID:

Andy, what about Lenin in this issue?

Ulvi

On Wed, 18 Jul 2018 at 08:19, Andy Blunden wrote:

> [snip]
From andyb@marxists.org Thu Jul 19 08:04:41 2018
From: andyb@marxists.org (Andy Blunden)
Date: Fri, 20 Jul 2018 01:04:41 +1000
Subject: [Xmca-l] Re: If economics is immune from ethics, why should exploitation be a topic of discussion in economics?
In-Reply-To:
References:
Message-ID: <0a81876a-919f-1993-fe79-87cbb33ddcad@marxists.org>

Here's Lenin's Ethics:
https://www.marxists.org/archive/lenin/works/1920/oct/02.htm

Andy
------------------------------------------------------------
Andy Blunden
http://www.ethicalpolitics.org/ablunden/index.htm
On 20/07/2018 12:56 AM, Ulvi İçil wrote:
> Andy, what about Lenin in this issue?
>
> Ulvi
>
> On Wed, 18 Jul 2018 at 08:19, Andy Blunden wrote:
>
>> [snip]

From ulvi.icil@gmail.com Thu Jul 19 08:15:36 2018
From: ulvi.icil@gmail.com (Ulvi İçil)
Date: Thu, 19 Jul 2018 18:15:36 +0300
Subject: [Xmca-l] Re: If economics is immune from ethics, why should exploitation be a topic of discussion in economics?
In-Reply-To: <0a81876a-919f-1993-fe79-87cbb33ddcad@marxists.org>
References: <0a81876a-919f-1993-fe79-87cbb33ddcad@marxists.org>
Message-ID:

Thank you Andy. I know this wonderful speech.

On Thu, 19 Jul 2018 at 18:07, Andy Blunden wrote:

> Here's Lenin's Ethics:
> https://www.marxists.org/archive/lenin/works/1920/oct/02.htm
>
> Andy
>
> [snip]

From rslguzzo@gmail.com Fri Jul 20 04:42:24 2018
From: rslguzzo@gmail.com (Raquel Guzzo)
Date: Fri, 20 Jul 2018 08:42:24 -0300
Subject: [Xmca-l] Re: If economics is immune from ethics, why should exploitation be a topic of discussion in economics?
In-Reply-To: <0a81876a-919f-1993-fe79-87cbb33ddcad@marxists.org>
References: <0a81876a-919f-1993-fe79-87cbb33ddcad@marxists.org>
Message-ID:

thank you Andy for this important point

R

On 7/19/18 12:04, Andy Blunden wrote:
> Here's Lenin's Ethics:
> https://www.marxists.org/archive/lenin/works/1920/oct/02.htm
>
> Andy
>
> [snip]

--
Dra. Raquel S. L. Guzzo
Pos-Graduação em Psicologia
Centro de Ciências da Vida
Pontifícia Universidade Católica de Campinas
rguzzo@puc-campinas.edu.br
rguzzo@pq.cnpq.br
rslguzzo@gmail.com
gep-inpsi.org
https://orcid.org/0000-0002-7029-2913
From djwdoc@yahoo.com Fri Jul 20 14:57:19 2018
From: djwdoc@yahoo.com (Douglas Williams)
Date: Fri, 20 Jul 2018 21:57:19 +0000 (UTC)
Subject: [Xmca-l] Re: Interesting article on robots and social learning
In-Reply-To: <3B91542B0D4F274D871B38AA48E991F953B3E855@CIO-KRC-D1MBX04.osuad.osu.edu>
References: <3B91542B0D4F274D871B38AA48E991F953B2B847@CIO-KRC-D1MBX04.osuad.osu.edu> <1860198877.3850789.1531537929986@mail.yahoo.com> <7c142464-a2b2-ede1-e258-388e449e10f6@marxists.org> <3B91542B0D4F274D871B38AA48E991F953B3E4C1@CIO-KRC-D1MBX04.osuad.osu.edu> <3B91542B0D4F274D871B38AA48E991F953B3E5D1@CIO-KRC-D1MBX04.osuad.osu.edu> <377811343.5150586.1531792876331@mail.yahoo.com> <50521.79.152.174.82.1531838032.squirrel@montseny.udg.edu> <3B91542B0D4F274D871B38AA48E991F953B3E855@CIO-KRC-D1MBX04.osuad.osu.edu>
Message-ID: <822383391.37663.1532123839573@mail.yahoo.com>

Hi, Michael--

I think your response is correct (or, at least it's the same one I had, and
I like to think I'm correct). That's partly why I wanted to bring this to
people's attention here. What I also think, and have thought for some time,
is that HCI/AI is a field that could use a lot more theory and practice
development applying CHAT to these problems. So far as I can see--and keep
in mind research is my hobby these days, rather than my vocation--there has
been a small, steady stream of such work, most notably by Bonnie Nardi and
Victor Kaptelinin, and Daisy Mwanza, and some other "3rd generation"
activity theory people, but it seems a more specialized niche than it
should be, and less influential than it ought to be. It always surprises me
when I talk to some people in my world about theories of learning and
action and agency and construal of intent, and I don't hear more about
CHAT, Action Research, and Cognitive Linguistics. But I do see this as an
example of different activity systems approaching a shared object with
different rules, communities of practice and education, artifacts, modes of
thought, economics, politics--very different, but yet, I think, addressing
some of the same objects. I think they have need of each other. I've been
part of your activity system (.edu), and now I'm part of another activity
system (.com), and from time to time, I throw stones in both of your parts
of the pool to see if the ripples will meet. I wish they would, more often.
But the little pebbles I throw are so small, and the pool is so large, and
I know there is a long history of staying in one's corner of the pool
(which is always full of interesting activities of its own, and dare I say
it, a little xenophobic about other parts of the pool), so it is hard. In
both cases, I'm more in the position of an implementer, a bricoleur with
the things I'm authorized to use, or that no one stops me from using: I
describe and implement technology, more than design it; I read and apply
ideas, more than research and develop them; that's the role I have. In my
current activity world, I position myself as a potential stakeholder in the
product, and as a product consumer, I have certain use case priorities that
I request should be considered in design.

In your activity world, I'd suggest that there is a substantial research
deficit in an area that is probably worth many, many dissertations, where
there is probably a possibility to attract grantmakers and internships and
placement of one's students--but only if there is more interest in breaking
the distance between theory and practice, and appeal across
disciplines--maybe more interdisciplinary institutes could evolve out of
that practice, which could draw from several different academic areas; out
of such things, after all, cognitive science programs have formed. If the
research you do seems relevant to the potential grantmakers for what is a
substantial and growing area of practice, then these grantmakers will come,
particularly if some of you are able to cross over and present papers at
things like the Neural Information Processing Systems 2018 convention. If
you identify problems that, until you formulated the theory and
remediation, they intuited they had, but could not fully articulate, I
think the interest would be there. Note that the Kate Crawford presentation
on bias was a keynote speech, and bias, as well as addressing other
externalities of process-oriented development, is a huge and growing area
of interest.

I'm too old and too ill-placed to participate much on either side of
bringing these activity systems more closely in alignment with each other.
But I do see that I could be a beneficiary in this way: We all have an
interest in helping to ensure that the artificial intelligence systems of
the future, the initial implementations of which are in production now,
develop in ways that are more human-centered than
transaction-process-centered, more focused on inclusion and affordance, and
that they ultimately advance human freedom, and human agency, rather than
restrict humans to live within a world of technology whose bars, because
never fully articulated, may not be fully visible. But they will be there,
nonetheless. It's a little too late for 2018, but I'll put this in for
reference, just as another pebble...

NIPS 2018 Call for Papers - NIPS Foundation

Regards,
Doug

On Tuesday, July 17, 2018, 12:45:45 PM PDT, Glassman, Michael wrote:

Hi David and Julie and Greg and whoever else is interested,

Finally got a chance to take a look at the Kate Crawford talk and I sort of
feel it represents both what is hopeful and what is not about machine
learning, and maybe answers Greg's question a bit about the role of CHAT
(and other more participatory, process-oriented social science theories) in
machine learning. First, what is hopeful: I think it is great that this is
a topic that people seem really worried about. What I am a bit concerned
about, though, is two things. One is the general lack of awareness of how
these issues have played out in other areas. The second is what I see as
the continued commitment to centralization in machine learning - the idea
that really smart people from a few research shops are going to figure out
how to fix this.

So my first concern with Dr. Crawford's talk: it was like the 20th century
didn't exist at all in the development of thinking about the roles that
bias and classification play in our lives and how that is being replicated
by machine learning in possibly damaging ways. When Dr. Crawford started
talking about classification, for instance, I was hoping (against hope)
that she would talk about Mead and the social psychology of classification
that emerged out of the social psychology program at the University of
Chicago, and/or the beginnings of action research, or one of the other
theories that see classification as a purposeful and destructive process. I
was also hoping she might talk about a more modern theory like
intersectionality. Instead she simply talked about pretty ancient ideas
that more or less danced around the issue. I wonder if the reason for this
is that a lot of people in machine learning tend to think the problem can
be solved through coding (a bit more on that in a bit) rather than taking
programming into the community and making a real effort to create a
symbiotic relationship between machine and human activity (perhaps this is
where CHAT comes in).

Near the beginning Dr. Crawford talks about "socio-technical," a term that
has been used so broadly that it seems to have lost most meaning. But the
term socio-technical actually did or does have a meaning. It was coined by
the action theorist Eric Trist, who suggested that communities themselves
understand best how to use technologies to serve their functions. You bring
in the technology with an understanding of how it works, but then you rely
on the community to implement and change it to meet its needs. In some ways
I feel that is what was at least partially done in the Fifth Dimension
project and other CHAT projects. Maybe to keep machine learning from being
destructive to communities we need to find similar uses for it in the
community (I spent part of the summer talking to a bunch of Chinese
students immersed in AI for education - one of the reasons this is at the
forefront of my mind - and we discussed this quite a bit). They aren't as
committed to the whole community of computing geniuses thing as we are in
this culture, at least not those students.

The important issue here is decentralization of problem solving, something
that Berners-Lee has been talking about a lot:

https://www.vanityfair.com/news/2018/07/the-man-who-created-the-world-wide-web-has-some-regrets

And it may be just as important for AI as it is for the Internet. It would
mean that a lot of the work on what machine learning would look like would
be done on the fly, in the community. And again I think some of the work
done in CHAT may work for this.

Okay, just meanderings of my mind I guess. I hope some of it made sense.

Michael

-----Original Message-----
From: xmca-l-bounces@mailman.ucsd.edu On Behalf Of JULIE WADDINGTON
Sent: Tuesday, July 17, 2018 10:34 AM
To: eXtended Mind, Culture, Activity
Subject: [Xmca-l] Re: Interesting article on robots and social learning

Doug,

Thank you for sharing the video with Kate Crawford's keynote speech. Only
managed to watch half so far, but from what I've gleaned up to now, it
makes a strong argument for the need for SOCIO-TECHNICAL analysis, which
fits in with the concerns/questions being raised by everyone.

Talking of bias in AI and its huge ramifications (racism, sexism,
homophobia, etc.), Crawford warns that: "When we consider bias just as a
technical issue, then we're already missing the (bigger?) picture. The
default of all data gathered reflects the deepest structural biases of
society".

Sounds obvious to state that social bias always precedes biases in AI, but
the examples given and discussion of them provide much food for thought.

Thanks again for sharing,

Julie
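Crawford's point that the bias is already in the gathered data, before any
model is trained, can be made concrete in a few lines. A minimal sketch of
such a pre-training audit in Python, assuming pandas; the column names and
values are invented for illustration:

import pandas as pd

# Toy records standing in for any gathered dataset; "group" and "label"
# are hypothetical columns, e.g. a demographic marker and a recorded outcome.
df = pd.DataFrame({
    "group": ["a", "a", "a", "b", "b", "b", "b", "b"],
    "label": [0, 0, 1, 1, 1, 1, 0, 1],
})

# Outcome base rate per group, computed before any model is fit.
print(df.groupby("group")["label"].mean())  # a: ~0.33, b: 0.80

# Any model trained to predict "label" will inherit this skew:
# the disparity originates in the data collection, not in the algorithm.

Nothing in the numbers says whether the skew is legitimate or an artifact
of how the data was gathered; deciding that is exactly the socio-technical
question, not a coding one.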
> Hi, Michael--I think it could be, as there is certainly an interest
> in dealing with bias, especially once you move away from the
> relatively easily detectable ones in chatbots. Frankly, I was
> thinking in part to check in with you guys to see what you thought, as
> the questions Kate Crawford poses here in the Neural Information
> Processing Systems conference keynote last year are precisely the ones
> of perspective and mind that I associate with CHAT. Perhaps the most
> useful thing I can do is to put this in front of you all for
> consideration:
>
> The Trouble with Bias - NIPS 2017 Keynote - Kate Crawford #NIPS2017
> Kate Crawford is a leading researcher, academic and author who has
> spent the last decade studying the social imp...
>
> Regards,
> Doug
>
> On Sunday, July 15, 2018 05:26:23 PM PDT, Glassman, Michael wrote:
>
> I wonder if where CHAT might be most interesting in addressing AI is
> on topics of bias and oppression. I believe that there is a real
> danger that AI can be used as a tool for oppression, especially given
> some of its early uses. One of the things people discussing the
> possibilities of AI don't discuss nearly enough is that it picks up
> and integrates biases from the information it receives. Sometimes
> this can be interesting, as with the program Libratus that beat
> world-class poker players at Texas Hold 'em. One of the less discussed
> aspects is that one of the reasons it was capable of doing this is
> that it picks up on the playing biases of the players it is competing
> with and integrates them into its decision-making process. This I
> think is one of the reasons that it has to play only one player at a
> time to be successful.
>
> The danger is when it integrates these biases into a larger
> decision-making process. There is an AI program called Northpointe,
> used by the justice department, that uses a combination of big data
> and deep learning to make decisions about whether people convicted of
> crimes will wind up back in jail. This should have implications for
> sentencing. The program, surprise, tends to be much harsher with
> Black individuals than white individuals. Even if you keep ethnicity
> out of the equation, it has enough other information to create a
> natural bias. There are also some of the more advanced translation
> programs, which tend to incorporate the biases of the languages (e.g.,
> misogynistic ones) into the translations without those getting the
> translations realizing it. AI, especially machine learning, is in
> many ways a prisoner to the information it receives. Who decides
> what information it receives? Much like the intelligence tests of an
> earlier age, people will treat AI decision making as neutral or
> objective when it actually mirrors back (almost perfectly) those who
> are feeding it information.
>
> Like I said, I don't see this point raised nearly enough. Perhaps
> CHAT is one of the fields in a position to constantly point this out,
> to explore the ways that AI is culturally biased, and how those that
> dominate information flow can easily use it as a tool for oppression.
>
> Michael
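To make the proxy-variable point above concrete, here is a minimal sketch in Python. It is synthetic data and a generic logistic regression, not Northpointe's actual system (which is proprietary); every variable name and every number in it is an illustrative assumption. It shows how a model that never sees a protected attribute can still reproduce a disparity through correlated features such as neighborhood and historically biased arrest records:

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)                    # protected attribute; never given to the model
neighborhood = ((group + rng.normal(0, 0.3, n)) > 0.5).astype(int)  # proxy correlated with group
true_reoffend = rng.random(n) < 0.30             # same base rate in both groups by construction
policing = 0.5 + 0.4 * neighborhood              # heavier policing where the proxy is 1
rearrested = (true_reoffend & (rng.random(n) < policing)).astype(int)  # biased outcome label
priors = rng.poisson(0.8 + 0.8 * neighborhood)   # prior arrests, inflated the same way

X = np.column_stack([neighborhood, priors])      # note: ethnicity is NOT a feature
model = LogisticRegression().fit(X, rearrested)
risk = model.predict_proba(X)[:, 1]

for g in (0, 1):
    print(f"group {g}: mean predicted risk = {risk[group == g].mean():.3f}")
# Same true reoffense rate in both groups, yet group 1 scores higher:
# the model reads the historical bias back out of its proxy features.

The design point is the one Michael makes: the bias lives in the labels and the proxies, so removing the protected column does nothing to remove it.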
> From: xmca-l-bounces@mailman.ucsd.edu On Behalf Of Greg Thompson
> Sent: Sunday, July 15, 2018 12:12 PM
> To: eXtended Mind, Culture, Activity
> Subject: [Xmca-l] Re: Interesting article on robots and social learning
>
> And I'm still curious if any others out there might have anything to
> contribute to Doug's query regarding what CHAT theory (particularly
> developmental theories) might have to offer thinking about AI?
>
> It seems an interesting question to think through even if you aren't
> on board with the larger AI project...
>
> -greg
>
> On Sun, Jul 15, 2018 at 10:55 AM, Andy Blunden wrote:
>
> I think we go back to Martin's earlier ironic comment here, Michael.
>
> Andy
>
> Andy Blunden
> http://www.ethicalpolitics.org/ablunden/index.htm
>
> On 15/07/2018 9:44 AM, Glassman, Michael wrote:
>
> The Turing test, at least the test he wrote about in his article, is
> actually a bit more complicated than this, and especially poignant
> today. Turing's test of whether computers are acting as human was
> based on an old English game show called The Lying Game (I suppose one
> of the reasons for the title of the movie on Turing, though of course
> it had multiple meanings; but for some reason they never mentioned the
> origin of the phrase in the movie). Anyway, in the lying game the
> contestant had to listen to two individuals, one of whom was telling
> the truth about the situation and one of whom was lying. The way
> Turing describes it, it sounds quite brutal. The contestant had to
> figure out who the liar was (there was a similar, much milder version
> years later in the US). Anyway, Turing's proposal, if I remember
> correctly, was that a computer could be considered to be thinking like
> a human if the computer the contestant was listening to was lying and
> he or she couldn't tell. In essence, the computer would successfully
> lie. Everybody thinks Turing believed that computers would eventually
> think like humans, but my reading of the article was that he had no
> idea; as the computer stood at the time, there was no chance.
>
> The reason this is so poignant is the Mueller indictments that came
> down yesterday. For those outside the U.S. or not following the news,
> the indictments were against Russian military officers leading a
> scheme to convince individuals of lies about various actors in the
> 2016 election (along with timed releases of information and break-ins
> to voting systems). But it is the propagation of lies by robots, and
> people believing them, that interests me. I feel like we aren't
> putting enough thought into that. Many of the people receiving the
> information could not tell it was not from humans and believed it,
> even though in many cases it was generated by robots--passing, it
> seems to me, Turing's test. How and why did this happen? Of course
> Turing died before the Internet, so he couldn't have known about it.
> But I wonder if part of the reason the robots were successful is that
> they have the ability to mine, collect and aggregate people's biases
> and then reflect them back to us. We tend to engage with, and believe,
> things in the context of our own biases. They say in salesmanship that
> the trick is figuring out what people want to hear and then couching
> whatever you want to say in that. Trump is a master of reading what a
> group of people want to hear at the moment, their biases, and then
> mirroring it back to them.
> If we went back to the Chinese room and the person inside was able to
> read our biases from our messages, would they then be human?
>
> We live in a strange age.
>
> From: xmca-l-bounces@mailman.ucsd.edu On Behalf Of Andy Blunden
> Sent: Saturday, July 14, 2018 8:58 AM
> To: xmca-l@mailman.ucsd.edu
> Subject: [Xmca-l] Re: Interesting article on robots and social learning
>
> I understand that the Turing Test is one which AI people can use to
> measure the success of their AI - if you can't tell the difference
> between a computer and a human interaction, then the computer has
> passed the Turing test. I tend to rely on a kind of anti-Turing Test;
> that is, if you can tell the difference between the computer and the
> human interaction, then you have passed the anti-Turing test, that is,
> you know something about humans.
>
> Andy
>
> Andy Blunden
> http://www.ethicalpolitics.org/ablunden/index.htm
>
> On 14/07/2018 1:12 PM, Douglas Williams wrote:
>
> Hi--
>
> I think I'll come out of lurking for this one. Actually, what you're
> talking about with this pain algorithm system sounds like a modeling
> system that someone might need to develop what Alan Turing described
> as a P-type computing device. A P-type computer would receive its
> programming from inputs of pleasure and pain. It was probably derived
> from reading some of the behaviorist models of mind at the time.
> Turing thought that he was probably pretty close to being able to
> develop such a computing device, which, because its input was similar,
> could model human thought. The Eliza Rogerian analysis computer
> program was another early idea, in which the goal was to model the
> patterns of human interaction, and gradually approach closer to human
> thought and interaction that way. And by the 2000s, the idea of the
> "singularity" was afloat, in which one could model human minds so well
> as to enable a human to be uploaded into a computer, and live forever
> as software (Kurzweil, 2005). But given that we barely had a
> sufficient model of mind to say boo with at the time (what is
> consciousness? where does intention come from? what is the balance of
> nature/nurture in motivation? in speech utterances? and so on)--and
> you're right, AI doesn't have much of a theory of emotion, either--the
> goal of computer software modeling human thought seemed very far away
> to me.
>
> At someone's request, I wrote a rather whimsical paper called "What is
> Artificial Intelligence?" back in 2006 about such things. My argument
> was that statistical modeling of human interaction and capturing
> thought was not so easy after all, precisely because of the parts of
> mind we don't think of, and the social interactions that, at the time,
> were not a primary focus. I mused about that in the context of my
> trying to write a computer program by applying Chomsky's syntactic
> structures to interpret the intention of a few simple
> questions--without, alas, in my case, a corpus-supported Markov chain
> logic to do it. Generative grammar would take care of it, right?
> Wrong.
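For readers who have never seen how little machinery the Eliza idea actually involved, here is a toy sketch in Python, in the spirit of Weizenbaum's 1966 program but not his actual script; the three rules and the reflection table are invented for illustration:

import re

RULES = [
    (re.compile(r"\bI need (.+)", re.I), "Why do you need {0}?"),
    (re.compile(r"\bI am (.+)", re.I),   "How long have you been {0}?"),
    (re.compile(r"\bmy (\w+)", re.I),    "Tell me more about your {0}."),
]
REFLECT = {"i": "you", "my": "your", "me": "you", "am": "are"}

def reflect(text):
    # Swap first- and second-person tokens so the echo sounds like a reply.
    return " ".join(REFLECT.get(w.lower(), w) for w in text.split())

def respond(utterance):
    for pattern, template in RULES:
        m = pattern.search(utterance)
        if m:
            return template.format(reflect(m.group(1)))
    return "Please go on."  # default non-committal Rogerian prompt

print(respond("I am worried about my thesis"))  # How long have you been worried about your thesis?
print(respond("I need a holiday"))              # Why do you need a holiday?

The striking thing, and Doug's point below, is that pattern-matching of this kind can feel conversational while containing no model of intention at all.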
> So as someone who had made a little primitive, incompetent attempt at
> speech modeling myself, and in the light of my later-acquired
> knowledge of CHAT, Burke, Bakhtin, Mead, and various other people in
> different fields, and of the tendency of people to interact with the
> world through cognitive biases, complexes, and embodied perceptions
> that were not readily available to artificial systems, I didn't think
> the singularity was so near.
>
> The terrible thing about computer programs is that they do just what
> you tell them to do, and no more. They have no drive to improve,
> except as programmed. When they do improve, their creativity is
> limited. And the approach now is still substantially
> pattern-recognition based. The current paradigm is something called
> Convolutional Neural Network / Long Short-Term Memory networks
> (CNN/LSTM) for speech recognition, in which the convolutional neural
> networks reduce the variants of speech input into manageable patterns,
> and the LSTMs handle temporal processing (the temporal patterns of the
> real-world phenomena to which the AI system is responding). But while
> such systems combined with natural language processing can
> increasingly mimic human response, and "learn" on their own, and while
> they are approaching the "weak" form of artificial general
> intelligence (AGI)--the intelligence needed for a machine to perform
> any intellectual task that a human being can--they are an awfully long
> way from "strong" AGI, that is, something approaching human
> consciousness. I think that's because they are a long way from
> capturing the kind of social embeddedness of almost all animal
> behavior, and the sense in which human cognition is embedded in the
> messy things, like emotion. A computer algorithm can recognize the
> patterns of emotion, but that's it. An AGI system that can experience
> emotions, or have motivation, is quite another thing entirely.
>
> I can tell you that AI confidence is still there. In raising questions
> about cultural and physical embodiment in artificial intelligence
> interactions with someone in the field recently, he dismissed the idea
> as not that relevant. His thought was that "what I find essential is
> that we acknowledge that there's no obvious evidence supporting that
> the current paradigm of CNN/LSTM under various reinforcement
> algorithms isn't enough for AGI and in particular for broad
> animal-like intelligence like that of ravens and dogs."
>
> But ravens and dogs are embedded in social interaction, in
> intentionality, in consciousness--qualitatively different from ours,
> maybe, but there. Dogs don't always do what you ask them to. When they
> do things, they do them from their own intentionality, which may be to
> please you, or may be to do something you never asked the dog to do,
> which is either inherent in its nature, or an expression of social
> interactions with you or others, many of which you and they may not be
> consciously aware of. The deep structure of metaphor, the
> spatiotemporal relations of language that Langacker describes as being
> necessary for construal, the worlds of narrativized experience, are
> mostly outside the reckoning, so far as I know (though I'm not an
> expert--I could be at least partly wrong), of the current CNN/LSTM
> paradigm.
>
> My old interlocutor in thinking about my language program, Noam
> Chomsky, has been a pretty sharp critic of the pattern-recognition
> approach to artificial intelligence.
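Before Chomsky's critique, below, a minimal sketch of the CNN/LSTM shape Doug describes, in PyTorch: a convolutional front end that compresses the spectrogram, an LSTM over the resulting frame sequence, and per-frame token scores. All layer sizes here are illustrative assumptions, not any production recognizer:

import torch
import torch.nn as nn

class CnnLstmRecognizer(nn.Module):
    def __init__(self, n_mels=80, hidden=256, n_tokens=29):  # 29 ~ blank + a-z + space + apostrophe
        super().__init__()
        # CNN front end: reduces local spectro-temporal variation to manageable features
        self.conv = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
        )
        feat = 32 * (n_mels // 4)
        # LSTM: models the temporal structure of the utterance
        self.lstm = nn.LSTM(feat, hidden, num_layers=2, bidirectional=True, batch_first=True)
        self.out = nn.Linear(2 * hidden, n_tokens)  # per-frame token scores (e.g., for CTC training)

    def forward(self, mel):                   # mel: (batch, time, n_mels)
        x = self.conv(mel.unsqueeze(1))       # -> (batch, 32, time/4, n_mels/4)
        x = x.permute(0, 2, 1, 3).flatten(2)  # -> (batch, time/4, feat)
        x, _ = self.lstm(x)
        return self.out(x)                    # -> (batch, time/4, n_tokens)

model = CnnLstmRecognizer()
scores = model(torch.randn(4, 160, 80))       # four fake 1.6-second spectrograms
print(scores.shape)                           # torch.Size([4, 40, 29])

Note what the sketch contains: pattern compression and temporal correlation, nothing else--which is exactly the ground of Doug's "that's it" about recognizing, rather than experiencing, emotion.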
> Here's Chomsky's take on the idea:
> http://languagelog.ldc.upenn.edu/myl/PinkerChomskyMIT.html
>
> And here's Peter Norvig's response; he's a director of research at
> Google, where Kurzweil is, and where, I assume, they are as close to
> the strong version of artificial general intelligence as anyone out
> there...
> http://norvig.com/chomsky.html
>
> Frankly, I would be quite interested in what you think of these
> things. I'm merely an Isaiah Berlin fox, chasing to and fro after all
> the pretty ideas out there. But you, many of you, are, I suspect, the
> untapped hedgehogs whose ideas on these things would see more readily
> what I dimly grasp must be required, not just for achieving a strong
> AGI, but for achieving something that we would see as an ethical,
> reasonable artificial mind that expands human experience, rather than
> becomes a prison that reduces human interactions to its own level.
>
> My own thinking is that lately, Cognitive Metaphor Theory (CMT), which
> I knew more of in its earlier (now "standard model") days, is getting
> even more interesting than it was. I'd done a transfer term to UC
> Berkeley to study with George Lakoff, but we didn't hit it off well--
> perhaps because I kept asking him questions about social embeddedness
> and similarities to Vygotsky's theory of complex thought, and was more
> interested in linking out from his approach than in folding in. It
> seems that the idea I was rather woolily suggesting to Lakoff back
> then has caught on: namely, that utterances could be explored for
> cultural variation and historical embeddedness, a form of social
> context to the narratives and metaphors and blended spaces that
> underlie speech utterances and thought; that there is a degree of
> social embodiment as well as physiological embodiment through which
> language operates. I thought then, and it looks like some other people
> now are thinking, that someone seeking to understand utterances (as a
> strong AGI system would need to do) really would need to engage in
> internalizing and ventriloquising a form of Geertz's thick description
> of interactions. In such forms, words do not mean what they say, and
> can carry affect in ways that are a bit more complex than I think
> temporal processing currently addresses.
>
> I think these are the kind of things that artificial intelligence
> would need truly to advance, and that Bakhtin and Vygotsky and
> Leont'ev and, in the visual world, Eisenstein were addressing all
> along...
>
> And, of course, you guys.
>
> Regards,
>
> Douglas Williams
>
> On Tuesday, July 3, 2018, 10:35:45 AM PDT, David H Kirshner wrote:
>
> The other side of the coin is that ineffable human experience is
> becoming more effable. Computers can now look at a human brain scan
> and determine the degree of subjectively experienced pain:
>
> In 2013, Tor Wager, a neuroscientist at the University of Colorado,
> Boulder, took the logical next step by creating an algorithm that
> could recognize pain's distinctive patterns; today, it can pick out
> brains in pain with more than ninety-five-per-cent accuracy. When the
> algorithm is asked to sort activation maps by apparent intensity, its
> ranking matches participants' subjective pain ratings. By analyzing
> neural activity, it can tell not just whether someone is in pain but
> also how intense the experience is.
>
> So, perhaps the computer can't "feel our pain," but it can sure
> "sense our pain!"
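The decoding step described in that excerpt can be sketched in a few lines. This is synthetic data and a plain ridge regression, not Wager's actual neurologic-signature pipeline; the voxel counts, the noise level, and the fictitious "pain pattern" are all assumptions for illustration:

import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(1)
n_scans, n_voxels = 200, 500
signature = rng.normal(0, 1, n_voxels)      # fictitious spatial "pain pattern"
pain = rng.uniform(0, 10, n_scans)          # subjective ratings on a 0-10 scale
maps = pain[:, None] * signature + rng.normal(0, 5, (n_scans, n_voxels))

model = Ridge().fit(maps[:100], pain[:100]) # train on half of the scans
pred = model.predict(maps[100:])
print(round(np.corrcoef(pred, pain[100:])[0, 1], 2))  # high agreement with held-out ratings

The point of the toy is Kirshner's: once intensity leaves a reliable spatial trace, "sensing our pain" is ordinary supervised regression, with no experience of pain anywhere in it.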
> Here's the full article:
> https://www.newyorker.com/magazine/2018/07/02/the-neuroscience-of-pain
>
> David
>
> From: xmca-l-bounces@mailman.ucsd.edu On Behalf Of Glassman, Michael
> Sent: Tuesday, July 3, 2018 8:16 AM
> To: eXtended Mind, Culture, Activity
> Subject: [Xmca-l] Re: Interesting article on robots and social learning
>
> It seems like we are still having the same argument as when robots
> first came on the scene. In response to John McCarthy, who was
> claiming that eventually robots could have belief systems and
> motivations similar to humans through AI, John Searle wrote the
> Chinese room. There have been a lot of responses to the Chinese room
> over the years, and a number of digital philosophers claim it is no
> longer salient, but I don't think anybody has ever effectively
> answered his central question.
>
> Just a quick recap. You come to a closed door and know there is a
> person on the other side. To communicate, you decide to teach the
> person on the other side Chinese. You do this by continuously
> exchanging rule systems under the door. After a while you are able to
> have a conversation with the individual in perfect Chinese. But does
> that person actually know Chinese, just from the rule systems? I think
> Searle's major point is: are you really learning if you don't know why
> you're learning, or are you just repeating? Learning is embedded in
> the human condition, and the reason it works so well and is adaptable
> is that we understand it when we use what we learn in the world in
> response to others. To put it in terms of the post: does a
> bomb-defusing robot really learn how to defuse a bomb if it does not
> know why it is doing it? It might cut the right wires at the right
> time, but it doesn't understand why, and therefore is not doing the
> task--just a series of steps it has been able to absorb. Is that the
> opposite of human learning?
>
> What the researcher did really isn't that special at this point.
> Well, I definitely couldn't do it, and it is amazing, but it is in
> essence a miniature version of Libratus (which beat experts at Texas
> Hold 'em) and AlphaGo (which beat the second-best Go player in the
> world). My guess is it is the same use of deep learning, in which the
> program integrates new information into what it is already capable of.
> If machines can learn from interacting with other humans, then they
> can learn from interacting with other machines. It is the same
> principle (though much, much simpler in this case). The question is
> what it means. Are we defining learning down because of the zeitgeist?
> Greg started his post saying a socio-cultural theorist might be
> interested in this research. I wonder if they might be more likely to
> be the ones putting on the brakes, asking questions about it.
>
> Michael
>
> From: xmca-l-bounces@mailman.ucsd.edu On Behalf Of Andy Blunden
> Sent: Tuesday, July 03, 2018 7:04 AM
> To: xmca-l@mailman.ucsd.edu
> Subject: [Xmca-l] Re: Interesting article on robots and social learning
>
> Does a robot have "motivation"?
>
> andy
>
> Andy Blunden
> http://www.ethicalpolitics.org/ablunden/index.htm
>
> On 3/07/2018 5:28 PM, Rod Parker-Rees wrote:
>
> Hi Greg,
>
> What is most interesting to me about the understanding of learning
> which informs most AI projects is that it seems to assume that affect
> is irrelevant. The role of caring, liking, worrying etc.
> in social learning seems to be almost universally overlooked, because
> information is seen as something that can be "got" and "given" more
> than something that is distributed in relationships.
>
> Does anyone know about any AI projects which consider how machines
> might feel about what they learn?
>
> All the best,
>
> Rod
>
> From: xmca-l-bounces@mailman.ucsd.edu On Behalf Of Greg Thompson
> Sent: 03 July 2018 02:50
> To: eXtended Mind, Culture, Activity
> Subject: [Xmca-l] Interesting article on robots and social learning
>
> I'm ambivalent about this project but I suspect that some young CHAT
> scholar out there could have a lot to contribute to a project like
> this one:
> https://www.sapiens.org/column/machinations/artificial-intelligence-culture/
>
> -Greg
>
> --
> Gregory A. Thompson, Ph.D.
> Assistant Professor
> Department of Anthropology
> 880 Spencer W. Kimball Tower
> Brigham Young University
> Provo, UT 84602
> WEBSITE: greg.a.thompson.byu.edu
> http://byu.academia.edu/GregoryThompson

Dra. Julie Waddington
Departament de Didàctiques Específiques
Facultat d'Educació i Psicologia
Universitat de Girona

From annalisa@unm.edu Fri Jul 20 21:16:20 2018
From: annalisa@unm.edu (Annalisa Aguilar)
Date: Sat, 21 Jul 2018 04:16:20 +0000
Subject: [Xmca-l] Anniversary for Sarkharov's essay
Message-ID:

Hello Xmcars and venerable others,

I saw this in the NYT, thought it would be the stuff of good discussion in the hear and know:
https://www.nytimes.com/2018/07/20/opinion/andrei-sakharov-essay-soviet-union.html

Additionally, here is the PDF of the original essay, "Thoughts on Progress, Peaceful Co-existence and Intellectual Freedom," appearing in the NYT on July 22, 1968. You will have to enlarge it to about 250% to be able to read it.

By chance, who here on the list remembers the splash this essay made? Anyone?

I especially enjoy this sentence: "Freedom of thought is the only guarantee of the feasibility of a scientific democratic approach to politics, economy, and culture."

Kind regards, as always,

Annalisa
From annalisa@unm.edu Fri Jul 20 21:21:53 2018
From: annalisa@unm.edu (Annalisa Aguilar)
Date: Sat, 21 Jul 2018 04:21:53 +0000
Subject: [Xmca-l] Re: Anniversary for Sakharov's essay
In-Reply-To:
References:
Message-ID:

Um, I meant "Anniversary for Sakharov's Essay" Doh!

________________________________
From: xmca-l-bounces@mailman.ucsd.edu on behalf of Annalisa Aguilar
Sent: Friday, July 20, 2018 10:16:20 PM
To: eXtended Mind, Culture, Activity
Subject: [Xmca-l] Anniversary for Sarkharov's essay

From dkellogg60@gmail.com Sat Jul 21 01:27:39 2018
From: dkellogg60@gmail.com (David Kellogg)
Date: Sat, 21 Jul 2018 17:27:39 +0900
Subject: [Xmca-l] Re: Anniversary for Sakharov's essay
Message-ID:

Yes, I remember. But I remember a good bit more than that.

In 1958, the year before I was born, my father helped to confirm the existence of two large belts of ionized matter around the earth which do not cover the poles. These were named van Allen belts, after James van Allen, of the University of Iowa, who helped supervise the discovery and provided the theoretical basis for thinking they were there. My father used otherwise useless data from America's first two satellites, hastily put up in response to Sputnik, to confirm their existence.

In 1962, my father and his colleague Ed Nye calculated that the amount of energy in the belts was roughly equal to the amount of energy released by the hydrogen bombs then being developed in the USA and the USSR. (Sakharov was project scientist in the USSR and directly responsible for the largest thermonuclear device ever exploded, the "Moab", or "mother of all bombs"; Teller was his equivalent in the USA.) They speculated that it might be possible to destroy the van Allen belts with a hydrogen bomb exploded in space, and casually mentioned the possibility to a science reporter for the New York Times. Dad realized that we didn't know what the consequences of destroying the van Allen belts would be, and when the NYT called up for a follow-up he denied ever suggesting the possibility. Both men were then frog-marched to the Pentagon and told never to mention the possibility again--because the experiment HAD ALREADY BEEN DONE, as part of the "Starfish Prime" experiments over Hawaii.

Sakharov says, in his essay, that like most intellectuals he went through three phases in his journey beyond the valley of disillusionment with the Soviet system. The first was belief in his country and readiness to do everything he could to help socialism vanquish fascism.
The second was "symmetry"--that is, the belief that governments everywhere were engaged in bad things, and that for every foolish bit of hubris like "Moab" there was a corresponding reckless act like "Starfish Prime". The third was that, according to Sakharov, there was neither country nor symmetry--the USSR, he says, is like a cancer cell, and the USA like a healthy one.

I heard a commentator on the BBC yesterday wondering foolishly why Sergei Prokofiev would give up a promising career as a modernist in New York and return, in 1938, to write patriotic drivel like War and Peace and Alexander Nevsky. The answer is simple. Sakharov got it right the first time. If there is no symmetry, it's only in this: over there people asked questions, and over here people don't. And the cancer is not socialism, but scientism: Sakharov was part of the disease, not part of the cure.

David Kellogg
Sangmyung University

New in *Early Years*, co-authored with Fang Li: When three fives are thirty-five: Vygotsky in a Hallidayan idiom -- and maths in the grandmother tongue

Some free e-prints available at:
https://www.tandfonline.com/eprint/7I8zYW3qkEqNBA66XAwS/full

On Sat, Jul 21, 2018 at 1:16 PM, Annalisa Aguilar wrote:

From robsub@ariadne.org.uk Sat Jul 21 02:45:53 2018
From: robsub@ariadne.org.uk (robsub@ariadne.org.uk)
Date: Sat, 21 Jul 2018 10:45:53 +0100
Subject: [Xmca-l] Re: Anniversary for Sarkharov's essay
In-Reply-To:
References:
Message-ID: <78078a88-bb63-a2db-d0c6-4f8d20c45777@ariadne.org.uk>

There is a version of "Thoughts on Progress, Peaceful Co-existence and Intellectual Freedom" on the Sakharov Centre website: http://www.sakharov-center.ru/asfconf2009/english/node/20. I have no idea if it is the unvarnished original or an edited version.

Rob P

On 21/07/2018 05:16, Annalisa Aguilar wrote:
From annalisa@unm.edu Sat Jul 21 12:32:47 2018
From: annalisa@unm.edu (Annalisa Aguilar)
Date: Sat, 21 Jul 2018 19:32:47 +0000
Subject: [Xmca-l] Re: Anniversary for Sakharov's essay
In-Reply-To:
References:
Message-ID:

OK. Good contributions. I guess.

Scientism might well be the dismissive override when faced with considering other worldviews; however, asking a question is the start of any scientific endeavor. Even artists depend upon scientific method. So perhaps it isn't that people ask no questions vs. asking them, but rather asking too little vs. asking too many.

I can't help but feel that the spirit of the thread at its very inception has been kidnapped, and this is usually considered bad netiquette (which is sort of selfish). I feel it has included namedropping, and had I wanted to know this information I would have asked those kinds of questions. I suppose I have been off this list too long to recall the hazards of making a post, but I have been reminded!

What I find to be truly asymmetrical is to gain an answer to a question that was never asked. Or asking a question and getting anything but the answer for the reason it was asked in the first place. It's a kind of activity where I do not detect the traces of intellectual freedom, but something else. Of course there are those who believe that freedom is only for some people and not for others. I don't get the impression Sakharov felt that way. I also don't feel that he was trying to teach us a lesson, but merely reflecting his thoughts about the very things he cared about. If we are indeed people who believe in the importance of intellectual freedom, then we must walk the talk.

To Rob, thanks a lot for the link to the version on the Sakharov Centre website. I noticed that there were subheadings added in the NYT version that seem placed therein to aid readability; however, with regard to the heading "Inequality of American Negroes" (appearing in the NYT article), there isn't much said either in the NYT or the Sakharov Centre version beyond: "At this time, the white citizens of the United States are unwilling to accept even minimum sacrifices to eliminate the unequal economic and cultural position of the country's black citizens, who make up 10 percent of the population."

Because there isn't much more than one sentence, I don't think it merits an entire subheading, which struck me as odd. It certainly was a fair statement to make given it was 1968, but Sakharov could have said more about it and I wondered why he didn't. Yes, the slavery question in US history was (and still is) an open and untended sore, but it almost feels like a really tired complaint when foreigners keep stating the obvious. I say this because there are many oppressed peoples in other parts of the world, and throughout history, and so for me the question isn't about the unjust after-effects of 19th-century American slavery, but about how to rectify social injustice against oppressed peoples in ways that are pragmatic, fair, and most of all successful in creating open and democratic societies. It's a question that nags us to silence.
I saw an article here this week:
https://www.theguardian.com/world/2018/jul/19/us-modern-slavery-report-global-slavery-index
indicating that there are as many as 400K people in the US who exist in "modern slavery," which chiefly concerns forced marriage and forced labor, and which, by the way, largely affects girls and women. I was reminded of Chrissie Hynde when she wailed at a Lilith concert I attended that "woman is the [n-word] of the world." This US number measures to one-hundredth of the slavery index worldwide, where a large portion of this population is in Asia. There is a direct connection to technology assemblage as one reason for this very large number in Asia. It appears that what we have accomplished in the US in our goal to "eliminate" slavery is to export the behavior of forced labor abroad in order to lower technology production costs; but it's not as if the "low cost" is being passed to the consumer (thanks Apple!) for products like computers and smartphones--those remain expensive fashion goods. For items appearing in dollar stores, that may well be the case. After all this, we have to leave the poor people something to buy, such as one-dollar buckets and brooms.

In the essay, Sakharov offers his readers a solution: a 15-year taxation of 20% of the GDP of developed countries, which seems a bit grandiose, but then what happens to the money? (Piketty also thought something along these lines, but his solution was more nominal, like a 2% income tax, if I am remembering that correctly.) At least Sakharov is offering an idea rather than just complaining or describing the ills of society. It still seems naive to think these things can be applied from the top down. Perhaps that stance is a mental habit after living in the Soviet Union. Capitalism concentrates wealth, whether by ethical means or not, but then there is the redistribution of that wealth, and whether that is by ethical means or not. How to apply the ethical to something so unethical?

I am still reading the essay and digesting it, for it is quite long, but I do think I agree whole-heartedly with Sakharov on one thing: intellectual freedom is the only path to any sort of salvation. If this is true, then this means *learning how* to be intellectually free. It seems that means the following, though you might extend the list: how to debate the merits, how to respect others not like you, how to have the courage to mention the elephant in the room, wherever it happens to be standing, how to face disagreement without being defensive and petty.

How to develop what is ethical from all that is unethical? I would offer transparency, but this seems to produce shamelessness and a brute exhibition of power, a mentality of "crucifixion as advertising," so I'm not sure what the answer is.

Last night I watched the Great British Baking Show, a bourgeois vice of mine. The show's theme was about making tarts. To make a treacle tart, a contestant was using fortune cookies as an ingredient. One of the hosts opened a cookie and read the message inside. It said, "To speak is silver; to listen is golden."

The gold standard is frequently missing, isn't it.

Kind regards,

Annalisa
From dkellogg60@gmail.com Sat Jul 21 14:34:41 2018
From: dkellogg60@gmail.com (David Kellogg)
Date: Sun, 22 Jul 2018 06:34:41 +0900
Subject: [Xmca-l] Ethics as Once and Future Discipline
Message-ID:

In HDHMF, Vygotsky makes the point that a good deal of our "character education" proceeds outside-in. That is, we focus on the behavior (especially the sexual behavior) of children (especially adolescents) and then we speculate about the effect this might have on their thinking, and this putative effect, more of a hope or a pious wish than a scientific fact, is called "ethics" (or "morals"). But Vygotsky says that unless the child is genuinely in control of his or her own behavior, so-called "ethical" acts are not ethical at all. There are many reasons for not bullying or beating or betraying fellow humans that have very little to do with ethics; there is nothing moral about living in fear of punishment.

Treated historically, ethics is, as Mike referred to cultural psychology, a once and future discipline. In the early eighteenth century, it actually DID form part of political economy--Adam Smith taught both subjects, and he saw the latter as an offshoot of the former ("we trust our livelihood not to the generosity of the baker but to his self-interest").
URL: http://mailman.ucsd.edu/pipermail/xmca-l/attachments/20180722/0deba072/attachment.html From smago@uga.edu Sat Jul 21 14:45:25 2018 From: smago@uga.edu (Peter Smagorinsky) Date: Sat, 21 Jul 2018 21:45:25 +0000 Subject: [Xmca-l] Re: Ethics as Once and Future Discipline In-Reply-To: References: Message-ID: In case anyone?s interested, I studied federally funding character ed programs awhile back; this is the article version, and a book version came out a couple of years later. Smagorinsky, P., & Taxel, J. (2004). The discourse of character education: Ideology and politics in the proposal and award of federal grants. Journal of Research in Character Education, 2(2), 113-140. Available at http://www.petersmagorinsky.net/About/PDF/JRCE/JRCE2004.pdf Abstract: This study analyzes the ways in which character education has been articulated in the current character education movement. The study consists of a discourse analysis of proposals funded by the United States Department of Education?s Office of Educational Research and Improvement. This analysis identifies the discourses employed to outline states? conceptions of character and character education as revealed through the proposals. The presentation consists of two profiles from sets of states that exhibit distinct conceptions of character and character education. One profile is created from two adjacent states in the American Deep South. We argue that this conception represents the dominant perspective promoted in the United States, one based on an authoritarian conception of character in which young people are indoctrinated into the value system of presumably virtuous adults through didactic instruction. The other profile comes from two adjacent states in the American Upper Midwest. This approach springs from a well-established yet currently marginal discourse about character, one that emphasizes attention to the whole environment in which character is developed and enacted and in which reflection on morality, rather than didactic instruction in a particular notion of character, is the primary instructional approach. The analysis of the discourse of character education is concerned with identifying the ideologies behind different beliefs about character and character education. From: xmca-l-bounces@mailman.ucsd.edu On Behalf Of David Kellogg Sent: Saturday, July 21, 2018 5:35 PM To: eXtended Mind, Culture, Activity Subject: [Xmca-l] Ethics as Once and Future Discipline In HDHMF, Vygotsky makes the point that a good deal of our "character education" proceeds outside-in. That is, we focus on the behavior (especially the sexual behavior) of children (especially adolescents) and then we speculate about the effect this might have on their thinking, and this putative effect, more of a hope or a pious wish than a scientific fact, is called "ethics" (or "morals"). But Vygotsky says that unless the child is genuinely in control of his or her own behavior, so-called "ethical" acts are not ethical at all. There are many reasons for not bullying or beating or betraying fellow humans that have very little to do with ethics; there is nothing moral about living in fear of punishment. Treated historically, ethics is, as Mike referred to cultural psychology, a once and future discipline. In the early eighteenth century, it actually DID form part of political economy--Adam Smith taught both subjects, and he saw the latter as an offshoot of the former ("we trust our livelihood not to the generosity of the baker but to his self-interest"). 
From annalisa@unm.edu Sat Jul 21 17:58:01 2018
From: annalisa@unm.edu (Annalisa Aguilar)
Date: Sun, 22 Jul 2018 00:58:01 +0000
Subject: [Xmca-l] Re: Anniversary for Sakharov's Essay
In-Reply-To:
References:
Message-ID:

Peter and venerable others,

The name of your study reminds me of George Lakoff's "Moral Politics: How Liberals and Conservatives Think." I was poking around the University of Chicago Press website and see that it is now in its third edition (1996, 2002, and 2016). Lakoff has also written an essay on Trump, which is probably included in the latest edition:
http://press.uchicago.edu/books/excerpt/2016/lakoff_trump.html

As a student of Vedanta, I have the view that unethical behavior becomes highly possible when I see myself as separate from the environment and others around me. Separateness creates fear, and fear invokes self-protection, whereby my action is against the other, who is not as human, nor as valuable, as me and mine. That dearness of myself is not about ego or narcissism; it is a reality in all humans, no matter the culture. What is different is how that "me-ness" is expressed. Is it as individuals? as tribes? as an element of the universe? as an entity one with nature? as an entity against nature? etc. That's likely what culture is about: how that "me-ness" is expressed, and how it comes to develop to be expressed as we find it.
If I see myself reflected in others and also in the environment around me, then there is something (myself) to keep me connected, because I can hold less fear; and if I have less fear, then even with the fear I do hold, my behavior of self-protection includes the environment and others. They are no longer different from me, because they are me. Thus, I have the ability to keep hold upon my thinking mind rather than sinking into the fight-or-flight that comes from an intense fear whereby self-preservation becomes a mechanical reflex of survival, a human response that even the most disciplined person may not be able to rein in. Perhaps Socrates was that kind of person.

It has often bothered me that in times of war, when humans are submerged into that colosseum of violence, the worst of human behavior becomes irrational, sadistic, and gratuitous, and it's fairly consistent that this happens. But then it is not the norm to live in a theater of war; it is the exception. The rationalization of war is to protect something, on a spectrum between an interest on the one end and the homeland on the other; it's always (and should be) treated as a means of last resort, yet in modern times that mandate seems to side with interests more than anything else, and to hell with diplomacy. I think that is why people (in general) are so distressed and pissed off about the state of affairs we find ourselves in today. These acts of protection have to do with where the self is placed and where the self is not (considered to exist).

"Sir, No Sir!," a film documentary about the disobedience of soldiers in the field, tells us that there are soldiers who refuse to fight on the ground, especially when they believe that the laying down of their lives is done frivolously by their commanders. There are multiple stories of this in modern times. It was something that gave me a lot of hope that the ordinary person is not "just" a pawn in a larger scheme.

The word "dharma," as I understand it, and different from the Buddhist definition, is difficult to translate into English (and others may have a different view than mine), but the assertion as I was taught is that dharma IS the order of the creation, like an invisible force of balance. As an aside, I use the word "creation" not in the Judeo-Christian sense of a god in the clouds throwing the earth into being like a donut into hot oil, but in the sense of a universe that infinitely creates itself, forever unfolding. It's something like gravity, pervasive throughout all material subtle and gross, like a "law." Hence dharma explains the dynamics of karma and why what goes around comes around, etc. Because we are situated in this universe, we are a part of it. We are all stardust, right? Of course the law of karma is a belief system, because it can't be independently verified, but it is reasonable; it isn't irrational. How can scientific method, which lies within the creation, be used to understand the creation?

I was in the lumberyard yesterday to have some wood cut to my specifications for a furniture project of mine. The clerk cutting my wood signaled to me to stand back, and then he said, "Watch your eyes!" and after the saw stopped, I said, "Please sir, tell me, how do I watch my eyes? Do I pull one eyeball out to look at the other one?" He laughed.

Anyway, if we are in harmony with the order, then all goes well. We all know this somewhere in the fiber of our being.
If we don't, we have the tendency to consider this ignorance as an aberration of some sort, the stuff of which the criminal is made. If we are not in harmony, well then, the participants of an act pay the price at some point, in one way or another. Ethics, in this sense, is being in harmony with the order, which is ahimsa, or "non-injury." Non-injury is not the absence of harm; it is acting with minimal harm, and to do that requires a person who is a master of one's own mind and body, one who creates minimal disturbance and lives peacefully. I believe that this is what Spinoza was trying to sort out: what a universal order looks like. He didn't exactly pull out his eyeball, but he certainly ground up a lot of glass.

As I see it, we have too many people today in positions of power who don't possess this understanding of dharma (which is not a religious idea, but a word-meaning that references something beyond human existence). Instead, these actors on the world stage believe they can be strongmen and assault the liberty of others with impunity. It just can't last; it is not sustainable. Any addict eventually succumbs to one's addiction, especially an addiction to power. History is littered with these kinds of foolish people. The ordinary person from afar sees that this kind of "power" is nothing but an illusion. The material of the master and of the slave is the same material as that of two people who see themselves as equal to one another. The difference is a state of mind. If state power were true power it would last for an eternity, but it doesn't. It changes, it develops, and that is why we should keep the candle of hope aflame.

To what others have said about ethics recently on the list: I can understand Marx's desire to remove questions of ethics from a study of economics as a path to a clear view unsullied by religious dogma. But then he'd have to put the ethics (not the dogma) back. Actually, all human acts, economic or otherwise, are for self-preservation alone. Darwin was clear about this. The mistake here was for Marx to think that ethics is the sole realm of the religious mind, which in time seems to become ossified dogma in its religious contexts. Perhaps Spinoza has something to say here, and that may be why Vygotsky was taken by Spinoza, as having a germ in his thinking that could develop into a real answer.

I can't make an informed comment upon Andy's view that the way to return ethics to Marxist thought is through Hegel, though it is provocative. It's sort of a u-turn in the flow of historical discourse, as if Marx took the wrong freeway off-ramp and drove off the map, and now we are left with no landmarks for navigation.

However, if one can momentarily accept dharma as being the order of the universe, in which we must play well with others and within the environment in which we live, then the answer is already available in the here and now, and it is not something for humans to invent; it is there, latent, for us to discover, if only because dharma was here before us and will remain after we are long gone. Gravity we cannot deny: even if we assert its relativity, we can't deny its existence. Dharma is the same; the reason we don't like its presence is that it puts us square with ourselves, and the work which we must do to exist in harmony takes a great deal of effort, not just political work, but inner work. Dharma makes *sense*.
If we hold the power to annihilate ourselves with nuclear devastation, then we get what we deserve, because that order built into the universe cannot be altered, and the boomerang we have thrown outward, away from ourselves, will hit only us in the face, at full force. The earth will just continue without us. So it means that in order to survive we really do have to do some deep inner work, which boils down to seeing ourselves in the other. It means we must look at the leaders who act against dharma as if they were rehabilitative versions of ourselves, rather than as "not-me others."

Our search for harmony can only be achieved by ensuring intellectual freedom; then we are free to ask the important questions and to listen to the possible answers. How else can higher functions develop? It's just common sense of our material world. More common and pervasive than we perceive, think, or feel.

Kind regards,

Annalisa

From haydizulfei@rocketmail.com Sun Jul 22 00:38:36 2018
From: haydizulfei@rocketmail.com (Haydi Zulfei)
Date: Sun, 22 Jul 2018 07:38:36 +0000 (UTC)
Subject: [Xmca-l] Re: Anniversary for Sakharov's Essay
In-Reply-To:
References:
Message-ID: <1674712817.12436334.1532245116440@mail.yahoo.com>

Thank Goodness!! Vanessa Christina Wills has read Marx deeply and presented a work of MUCH WORTH, humanistically, coordinately and harmoniously avoiding any arbitrary, egotistic and destructive deviations. A very enjoyable, informative read! Thanks, Bill Kerr! And thanks, trailers to the Title!

In my list of specific study on the subject I forgot to name Engels' "Ludwig Feuerbach and the End of Classical German Philosophy".

Gratefully,
Haydi

On Sunday, July 22, 2018, 5:30:03 AM GMT+4:30, Annalisa Aguilar wrote:
From smago@uga.edu Sun Jul 22 03:27:52 2018
From: smago@uga.edu (Peter Smagorinsky)
Date: Sun, 22 Jul 2018 10:27:52 +0000
Subject: [Xmca-l] Re: Anniversary for Sakharov's Essay
Message-ID:

In case anyone's collecting titles on this subject (that is, how liberals and conservatives think), I found this book compelling, although not flawless: http://righteousmind.com/ The Righteous Mind: Why Good People Are Divided by Politics and Religion is a 2012 social psychology book by Jonathan Haidt, in which the author describes human morality as it relates to politics and religion.
URL: http://mailman.ucsd.edu/pipermail/xmca-l/attachments/20180722/84e0b7f1/attachment.html From annalisa@unm.edu Sun Jul 22 07:33:37 2018 From: annalisa@unm.edu (Annalisa Aguilar) Date: Sun, 22 Jul 2018 14:33:37 +0000 Subject: [Xmca-l] Re: Anniversary for Sakharov's Essay In-Reply-To: References: , , Message-ID: Out of curiosity, I looked up the definitions of the words "morals" and "ethics" and apparently ethics is the system-word to organize morals, the particular-word. It was useful to me, because I had always considered morals as a religiously informed and somewhat arbitrary. Ethics to me always had a more-grounded meaning and I thought more scientific in terms of application. This is just sharing my own projections upon the words themselves. Now, what if there were a system of laws that have nothing to do with being human, but with being itself? Something pervasive to us and yet beyond us, at the same time? What if any human construct of morality/ethics (religious, humanistic, bohemian, etc) were in some fashion made of this material system of cause and effect, in a analogous manner that gold can be shaped as a watch, as a coin, a ring, a tooth filling, or an electronic conductor on the motherboard of a computer? We would say then, if we did not know that these objects have anything in common (that they are made of gold), that these objects are *essentially* different and have nothing to do with each other, because they have different applications and purposes, and consequently, to extend the metaphor, there would be an appearance of instances of morality and ethics being arbitrary and separate, and their values being solely conditioned by culture, history, and so forth, and not determined by something more essential or basic. There might be overlap (some objects relate to one another because they are jewelry), but that also has an appearance of happenstance, arising from historical coupling and human habits of appropriation and borrowing. And yet, if we were to take these two very different explanations of how a system of ethics/morals is produced or manifests, our perception of them would be identical in the way that to observe a clay pot, looking at the pot and looking at the clay, we are looking at the same objects (pot and clay, plate and clay, vase and clay) in the same lociis (what is the plural of loci? My Latin grammar fails me). In a religious system of morals, there is an explanation offered that to follow the system has a goal (that is assumed that everyone shares) and this end goal to be closer to god-ness, whether that means as a reward for winning a deity's favor with our good behavior, or as a way of appeasing those in our tribe that we are successfully socialized to perform our duty as a participant with minimal conflict or punishment, exile or banishment. If we take god-ness out of the equation, and we possess no motivation except to get along with others in order to maintain fitness and survivability, how does it look any different? It's still cause and effect. Kind regards, Annalisa -------------- next part -------------- An HTML attachment was scrubbed... 
From andyb@marxists.org Sun Jul 22 07:42:40 2018
From: andyb@marxists.org (Andy Blunden)
Date: Mon, 23 Jul 2018 00:42:40 +1000
Subject: [Xmca-l] Re: Anniversary for Sakharov's Essay
Message-ID: <0a896261-94cc-b2ad-21b4-6927e0ec4d74@marxists.org>

There are different definitions, Annalisa, according to which current of thinking is relevant. "Ethics" is derived from the Greek roots of the same words from which "moral" is derived from Latin roots. So in philosophy the two words were interchangeable until Hegel; Kant's Moral Philosophy was his Ethical theory. Hegel gave the two words distinct meanings. In short (!), morals are rules one makes for oneself, and ethics are rules created by society. Nothing to do with sex or religion, of course. So far as I know, after Hegel all philosophy in that tradition incorporates Hegel's distinction. I couldn't answer for how the terms are used in analytical philosophy, and in the common language the two words have taken on different connotations.

Andy

Andy Blunden
http://www.ethicalpolitics.org/ablunden/index.htm

From annalisa@unm.edu Sun Jul 22 08:12:23 2018
From: annalisa@unm.edu (Annalisa Aguilar)
Date: Sun, 22 Jul 2018 15:12:23 +0000
Subject: [Xmca-l] Re: Anniversary for Sakharov's Essay
Message-ID:

Thanks Andy!

OK, I accept that there is a historical conditioning to these words "moral" and "ethics," arising from their language roots, Latin and Greek respectively, and also that their meanings (connotations) shift over time. Sure, that's fine.

But what are the meanings to which the words point? And what do those meanings have in common? What is their substrate?

What you have shared doesn't alter my assertion (with assistance provided by a dictionary) that "moral" is particular and "ethics" is systemic.

Regardless, what are these words pointing to? Were you to pull out the conditioning by human society and by history, what would be left? It would be cause and effect. No different than if I throw a ball up into the air: the ball will eventually come down to earth. If I plant a seed and provide it the right conditions of good soil, sunshine, and water, the sprout emerges.

If I do X then Y will happen. If you do X then Y will happen too.

And this necessarily sets up an argument, particularly if you say: if you do X then Y will happen, but if I do X then Z will happen.

The distraction is what letters you set into the places where X, Y, and Z stand in the above sentences. The truth of these statements (what they have essentially in common) is cause and effect.

This is so pervasive that we take it for granted. It is like watching our own eyes.

We understand this connection between cause and effect intrinsically. At the same time, this understanding is not because we are human, even if the way we make sense of it is through mind and language, which are very much (human) tools at our disposal.

Kind regards,
Annalisa

From andyb@marxists.org Sun Jul 22 08:16:11 2018
From: andyb@marxists.org (Andy Blunden)
Date: Mon, 23 Jul 2018 01:16:11 +1000
Subject: [Xmca-l] Re: Anniversary for Sakharov's Essay
Message-ID:

I am happy to say, Annalisa, that if you "pulled out conditioning by human society," nothing would be left of ethics and morals at all. In a sense, ethics and morals are what is not cause and effect.

Andy

Andy Blunden
http://www.ethicalpolitics.org/ablunden/index.htm

From annalisa@unm.edu Sun Jul 22 08:32:10 2018
From: annalisa@unm.edu (Annalisa Aguilar)
Date: Sun, 22 Jul 2018 15:32:10 +0000
Subject: [Xmca-l] Re: Anniversary for Sakharov's Essay
Message-ID:

Actually, Andy, we are a little bit in agreement, maybe even more than a little.

There is the word and the meaning. Word is different from meaning. Meaning is not word.

To point to your sentence: "In a sense, ethics and moral are what is not cause and effect." "Sense" points to what is human. Sense distinguishes in a field of perception. Cause and effect are independent of sense. The words "ethics" and "morals," and the meanings to which they point, are how we "make sense" of cause and effect, that is, how we make cause and effect human.

The pot is clay, but clay is not the pot (because clay can be not-pot, i.e., a plate); and if the pot shatters, the pot is no more, but the clay remains.

Kind regards,
Annalisa
From haydizulfei@rocketmail.com Mon Jul 23 00:38:35 2018
From: haydizulfei@rocketmail.com (Haydi Zulfei)
Date: Mon, 23 Jul 2018 07:38:35 +0000 (UTC)
Subject: [Xmca-l] Re: Anniversary for Sakharov's Essay
Message-ID: <196756186.13013028.1532331516270@mail.yahoo.com>

Who practically defends relatedness, one's being related to Nature and the Social Community, to the other, to avoid selfishness, egoism and narcissistic madness; say, related to Otherness? In this message and in this borrowing I have no specific creature addressed, just intending to elucidate some obscurities.

Quotes by Vanessa:

"[Under capitalism] the productive forces appear as a world for themselves, quite independent of and divorced from the individuals, alongside the individuals; the reason for this is that the individuals, whose forces they are, exist split up and in opposition to one another, whilst, on the other hand, these forces are only real forces in the intercourse and association of these individuals. Thus, on the one hand, we have a totality of productive forces, which have, as it were, taken on a material form and are for the individuals themselves no longer the forces of the individuals but of private property, and hence of the individuals only insofar as they are owners of private property. [...] On the other hand, standing against these productive forces, we have the majority of the individuals from whom these forces have been wrested away, and who, robbed thus of all real life-content, have become abstract individuals, who are, however, by this very fact put into a position to enter into relation with one another as individuals. [...] Things have now come to such a pass that the individuals must appropriate the existing totality of productive forces, not only to achieve self-activity, but, also, merely to safeguard their very existence. (The German Ideology, MECW 5: 86-7)"

Criticizing Democritus as against Epicurus:

"Speaking exactly and in the prosaic sense, the members of civil society are not atoms. The specific property of the atom is that it has no properties and is therefore not connected with beings outside it by any relationship determined by its own natural necessity. The atom has no needs, it is self-sufficient; the world outside it is an absolute vacuum, i.e., it is contentless, senseless, meaningless, just because the atom has all fullness in itself. The egoistic individual in civil society may in his non-sensuous imagination and lifeless abstraction inflate himself into an atom, i.e., into an unrelated, self-sufficient, wantless, absolutely full, blessed being. Unblessed sensuous reality does not bother about his imagination; each of his senses compels him to believe in the existence of the world and of individuals outside him, and even his profane stomach reminds him every day that the world outside him is not empty, but is what really fills. Every activity and property of his being, every one of his vital urges, becomes a need, a necessity, which his self-seeking transforms into seeking for other things and human beings outside him. [...] It is therefore not the state that holds the atoms of civil society together, but the fact that they are atoms only in imagination, in the heaven of their fancy, but in reality beings tremendously different from atoms, in other words, not divine egoists, but egoistic human beings. (The Holy Family, MECW 4: 120-1)"

And many others. And, if time allows, knowing who is the one Marx criticizes and what the former is up to? Attached.

Gratefully,
Haydi

[Attachment: The Ego and His Own - Max Stirner.pdf, http://mailman.ucsd.edu/pipermail/xmca-l/attachments/20180723/638e8b79/attachment-0001.pdf]

From michakonto@googlemail.com Mon Jul 23 04:30:07 2018
From: michakonto@googlemail.com (michael)
Date: Mon, 23 Jul 2018 14:30:07 +0300
Subject: [Xmca-l] Post-Doc Research Fellow in Childhood and Youth, University of Leeds, UK
Message-ID:

Dear colleagues,

I am soon going to work as a Professor in Global Childhood and Youth Studies at the University of Leeds, UK. In this context, I am searching for a Post-Doc Research Fellow to (a) develop and support research funding applications in collaboration with colleagues within the Centre for Childhood, Education and Social Justice, and (b) support existing research as well as conduct new primary research, which may involve relevant fields such as education, psychology, sociology and anthropology of childhood, as well as inter- and trans-disciplinary collaboration.

This is an excellent opportunity for an early career researcher who is committed to research excellence as well as to achieving broader societal impact with respect to contemporary challenges children and young people are facing locally and/or globally. We would welcome applications from researchers in the broader field of socio-cultural-historical psychology. This position requires full-time presence at the University of Leeds. This is a fixed-term position, which in principle we aim to extend in due time, depending on the profile and contributions of the potential applicant.

The closing date is the 17th of Aug 2018. To view the advert: http://jobs.leeds.ac.uk/ESLED1056

Could you please draw this to the attention of your networks and any suitable potential applicants?

Many thanks,
Michalis Kontopodis
Global :: Youth :: Crises
http://mkontopodis.wordpress.com
Email: michaliskonto@googlemail.com
Twitter: @m_kontopodis

From apbrcortez@yahoo.com.br Mon Jul 23 05:05:21 2018
From: apbrcortez@yahoo.com.br (Ana Paula B. R. Cortez)
Date: Mon, 23 Jul 2018 12:05:21 +0000 (UTC)
Subject: [Xmca-l] Any PhD opening in the Houston area
Message-ID: <1398561751.936349.1532347521954@mail.yahoo.com>

Dear all,

I'd like to know if any of you knows of a Ph.D. opportunity in Houston, TX. I'd really appreciate it if you could send me information about that.

Warm regards,
Ana Paula Cortez

From anamshane@gmail.com Tue Jul 24 14:59:48 2018
From: anamshane@gmail.com (Ana Marjanovic-Shane)
Date: Tue, 24 Jul 2018 21:59:48 +0000
Subject: [Xmca-l] A new and enlarged edition of "Vygotsky and Creativity" has been published
Message-ID:

Dear friends and colleagues,

We are happy to let you all know that, after a long wait and many unexpected delays, the second edition has finally been published of "Vygotsky and Creativity: A Cultural-historical Approach to Play, Meaning Making, and the Arts," by M. Cathrene Connery, Vera John-Steiner and Ana Marjanovic-Shane!

Check out the flyer I am sending!

The new edition has four more chapters, including now the following authors: Vera John-Steiner, M. Cathrene Connery, Ana Marjanovic-Shane, Anna Stetsenko, Lois Holzman, Biljana C. Fredriksen, Patricia St. John, Artin Göncü, Barry Oreck, Jessica Nicoll, Peter Smagorinsky, Seana Moran, Beth Ferholt, Carrie Lobman, Michelle Zoss, and Larry and Francine Smolucha. See the attached Table of Contents!

Take care,
Ana and Cathrene

--
Ana Marjanovic-Shane
Phone: 267-334-2905
Email: anamshane@gmail.com

[Attachments: Vygotsky and Creativity II, Table of Contents.pdf, http://mailman.ucsd.edu/pipermail/xmca-l/attachments/20180724/2d35cfab/attachment-0002.pdf; Vygotsky and Creativity 2nd Edition, 2018 - Flyer.pdf, http://mailman.ucsd.edu/pipermail/xmca-l/attachments/20180724/2d35cfab/attachment-0003.pdf]

From Dana.Walker@unco.edu Tue Jul 24 15:52:14 2018
From: Dana.Walker@unco.edu (Walker, Dana)
Date: Tue, 24 Jul 2018 22:52:14 +0000
Subject: [Xmca-l] Re: A new and enlarged edition of "Vygotsky and Creativity" has been published
Message-ID: <82E5CFCF-4BFE-4081-958F-CDE7CB2E3E78@unco.edu>

Congratulations Ana and Cathrene!

Dana Walker
University of Northern Colorado
Dana.walker@unco.edu
From mcole@ucsd.edu Tue Jul 24 21:01:22 2018
From: mcole@ucsd.edu (mike cole)
Date: Tue, 24 Jul 2018 21:01:22 -0700
Subject: [Xmca-l] Re: A new and enlarged edition of "Vygotsky and Creativity" has been published
Message-ID:

Totally cool!

Please have someone send a copy to Beth Ferholt for review in MCA.

mike

From anamshane@gmail.com Tue Jul 24 21:11:12 2018
From: anamshane@gmail.com (Ana Marjanovic-Shane)
Date: Wed, 25 Jul 2018 04:11:12 +0000
Subject: [Xmca-l] Re: A new and enlarged edition of "Vygotsky and Creativity" has been published
Message-ID:

Dear Mike,

Thanks! Beth already has a copy, as one of the authors in the book. Maybe someone else, who is not a co-author, should do a review for MCA? What do you think?

Ana

From mcole@ucsd.edu Tue Jul 24 21:33:33 2018
From: mcole@ucsd.edu (mike cole)
Date: Tue, 24 Jul 2018 21:33:33 -0700
Subject: [Xmca-l] Re: A new and enlarged edition of "Vygotsky and Creativity" has been published
Message-ID:

Beth is the book review editor, Ana. She assigns reviewers.

mike

From kplakits@gmail.com Tue Jul 24 23:24:32 2018
From: kplakits@gmail.com (Katerina Plakitsi)
Date: Wed, 25 Jul 2018 09:24:32 +0300
Subject: [Xmca-l] Re: A new and enlarged edition of "Vygotsky and Creativity" has been published
Message-ID:

Congrats!!! This has to be announced on the ISCAR website as well!!!

Katerina Plakitsi
ISCAR President
Professor of Science Education
Head of the Dept. of Early Childhood Education
School of Education, University of Ioannina, Greece
tel. +302651005771, fax +302651005842, mobile +306972898463, Skype: katerina.plakitsi3
https://www.iscar.org/
http://users.uoi.gr/kplakits
www.epoque-project.eu
http://bdfprojects.wixsite.com/mindset
http://www.lib.uoi.gr/serp
https://www.youtube.com/watch?v=isZAbefnRmo&t=7s

From julian.williams@manchester.ac.uk Wed Jul 25 05:07:51 2018
From: julian.williams@manchester.ac.uk (Julian Williams)
Date: Wed, 25 Jul 2018 12:07:51 +0000
Subject: [Xmca-l] Re: Swedish activist Elin Ersson wins the day
Message-ID:

I think you and xmca may like this:

https://www.theguardian.com/world/2018/jul/25/swedish-student-plane-protest-stops-mans-deportation-afghanistan

Julian
URL: http://mailman.ucsd.edu/pipermail/xmca-l/attachments/20180725/57f24e81/attachment.html From andyb@marxists.org Wed Jul 25 05:10:25 2018 From: andyb@marxists.org (Andy Blunden) Date: Wed, 25 Jul 2018 22:10:25 +1000 Subject: [Xmca-l] Re: Swedish activist Elon Ersson wins the day In-Reply-To: References: Message-ID: <1f25b17a-9f22-c24b-7098-dfc8d8658ecc@marxists.org> Yes, you can see the stress on his young women's face and she stands strong under enormous pressure and she wins. Wonderful! andy ------------------------------------------------------------ Andy Blunden http://www.ethicalpolitics.org/ablunden/index.htm On 25/07/2018 10:07 PM, Julian Williams wrote: > > I think you and xmca may like this: > > > > https://www.theguardian.com/world/2018/jul/25/swedish-student-plane-protest-stops-mans-deportation-afghanistan > > > > ? > > > > Julian > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ucsd.edu/pipermail/xmca-l/attachments/20180725/9d040dd3/attachment.html From julian.williams@manchester.ac.uk Wed Jul 25 05:18:48 2018 From: julian.williams@manchester.ac.uk (Julian Williams) Date: Wed, 25 Jul 2018 12:18:48 +0000 Subject: [Xmca-l] Re: Swedish activist Elon Ersson wins the day In-Reply-To: <1f25b17a-9f22-c24b-7098-dfc8d8658ecc@marxists.org> References: <1f25b17a-9f22-c24b-7098-dfc8d8658ecc@marxists.org> Message-ID: Andy She wins and yet she doesn?t ? the guy she went to ?rescue? was deported on another flight, but she got the support of people on the plane (some even joined her protest) and is being applauded by millions worldwide now: this is a growing aspect of resistance activism, losing and winning. And the battle against deportations, and indeed fascism, in Sweden and elsewhere continues?. Julian From: on behalf of Andy Blunden Reply-To: "eXtended Mind, Culture, Activity" Date: Wednesday, 25 July 2018 at 13:11 To: "xmca-l@mailman.ucsd.edu" Subject: [Xmca-l] Re: Swedish activist Elon Ersson wins the day Yes, you can see the stress on his young women's face and she stands strong under enormous pressure and she wins. Wonderful! andy ________________________________ Andy Blunden http://www.ethicalpolitics.org/ablunden/index.htm On 25/07/2018 10:07 PM, Julian Williams wrote: I think you and xmca may like this: https://www.theguardian.com/world/2018/jul/25/swedish-student-plane-protest-stops-mans-deportation-afghanistan ? Julian -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ucsd.edu/pipermail/xmca-l/attachments/20180725/3a4bb113/attachment.html From andyb@marxists.org Wed Jul 25 05:21:37 2018 From: andyb@marxists.org (Andy Blunden) Date: Wed, 25 Jul 2018 22:21:37 +1000 Subject: [Xmca-l] Re: Swedish activist Elon Ersson wins the day In-Reply-To: References: <1f25b17a-9f22-c24b-7098-dfc8d8658ecc@marxists.org> Message-ID: <23cdea4a-7907-f3d5-3661-1a502fb4d569@marxists.org> She achieved her goal. Her object will take longer to realise. Important to recognise the difference. Andy ------------------------------------------------------------ Andy Blunden http://www.ethicalpolitics.org/ablunden/index.htm On 25/07/2018 10:18 PM, Julian Williams wrote: > > Andy > > > > She wins and yet she doesn?t ? the guy she went to > ?rescue? was deported on another flight, but she got the > support of people on the plane (some even joined her > protest) and is being applauded by millions worldwide now: > this is a growing aspect of resistance activism, losing > and winning. 
> > > > And the battle against deportations, and indeed fascism, > in Sweden and elsewhere continues?. > > > > Julian > > > > *From: * on behalf of > Andy Blunden > *Reply-To: *"eXtended Mind, Culture, Activity" > > *Date: *Wednesday, 25 July 2018 at 13:11 > *To: *"xmca-l@mailman.ucsd.edu" > *Subject: *[Xmca-l] Re: Swedish activist Elon Ersson wins > the day > > > > Yes, you can see the stress on his young women's face and > she stands strong under enormous pressure and she wins. > Wonderful! > > andy > > ------------------------------------------------------------ > > Andy Blunden > http://www.ethicalpolitics.org/ablunden/index.htm > > On 25/07/2018 10:07 PM, Julian Williams wrote: > > I think you and xmca may like this: > > > > https://www.theguardian.com/world/2018/jul/25/swedish-student-plane-protest-stops-mans-deportation-afghanistan > > > > ? > > > > Julian > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ucsd.edu/pipermail/xmca-l/attachments/20180725/43916c54/attachment.html From kplakits@gmail.com Wed Jul 25 08:03:43 2018 From: kplakits@gmail.com (Katerina Plakitsi) Date: Wed, 25 Jul 2018 18:03:43 +0300 Subject: [Xmca-l] Re: A new and enlarged edition of "Vygotsky and Creativity" has been published In-Reply-To: <4E2B59C9-03B0-471D-A43C-78744FC5C7D3@yahoo.com> References: <4E2B59C9-03B0-471D-A43C-78744FC5C7D3@yahoo.com> Message-ID: You're welcome! ???? ???, 25 ???? 2018 ???? 16:22 ? ??????? Cathrene Connery < cathrene.connery@yahoo.com> ??????: > Dear Katerina: > Many thanks for your kind support. > Best wishes, > > Dr. M. Cathrene Connery > > > "Be the change you wish to see in the world." ~Gandhi~ > > "If you think you are too small to make a difference, try sleeping with a > mosquito."~ H.H.D.L.~ > > On Jul 25, 2018, at 2:24 AM, Katerina Plakitsi wrote: > > Congrats!!! > This has to be announced on the ISCAR website as well!!! > Katerina Plakitsi > ISCAR President > > ???? ???, 25 ???? 2018 ???? 01:02 ? ??????? Ana Marjanovic-Shane < > anamshane@gmail.com> ??????: > >> Dear friends and colleagues, >> >> >> >> We happy to let you all know that, after a long wait and many unexpected >> delays, the second edition was finally published, of ?Vygotsky and >> Creativity: A Cultural-historical Approach to Play, Meaning Making, and the >> Arts? ? by M. Cathrene Connery, Vera John-Steiner and Ana >> Marjanovic-Shane!! >> >> >> >> Check out the flyer I am sending! >> >> >> >> The new edition has four more chapters ? including now the following >> authors: >> >> >> >> Vera John-Steiner, M. Cathrene Connery, Ana Marjanovic-Shane, Anna >> Stetsenko, Lois Holzman, Biljana C. Fredriksen, Patricia St. John, Artin >> G?ncu?, Barry Oreck, Jessica Nicoll, Peter Smagorinsky, Seana Moran, Beth >> Ferholt, Carrie Lobman, Michelle Zoss, and Larry and Francine Smolucha. See >> the attached Table of Contents! >> >> >> >> Take care, >> >> >> >> Ana and Cathrene >> >> >> >> >> >> -- >> >> *Ana Marjanovic-Shane* >> >> Phone: 267-334-2905 >> >> Email: anamshane@gmail.com >> >> >> > -- > > > Katerina Plakitsi > *ISCAR President* > *Professor of Science Education* > *Head of the Dept. of E**arly Childhood Education* > *School of Education * > *University of Ioannina, Greece* > *tel. +302651005771* > *fax. 
From mcole@ucsd.edu Wed Jul 25 08:20:16 2018
From: mcole@ucsd.edu (mike cole)
Date: Wed, 25 Jul 2018 08:20:16 -0700
Subject: [Xmca-l] Re: A new and enlarged edition of "Vygotsky and Creativity" has been published
In-Reply-To: <6E32CCD4-0B53-49E9-A866-F89EED6A8939@yahoo.com>
References: <82E5CFCF-4BFE-4081-958F-CDE7CB2E3E78@unco.edu> <6E32CCD4-0B53-49E9-A866-F89EED6A8939@yahoo.com>
Message-ID: 

ALL--

If you would like to propose a book for review in MCA, you should contact Beth Ferholt, the book review editor of MCA. Her email is bferholt at g mail dot com

We have missed a lot of good books owing to your collective modesty and reticence!

mike

On Wed, Jul 25, 2018 at 6:25 AM, Cathrene Connery <cathrene.connery@yahoo.com> wrote:

> Dear Ana and Mike:
> Many thanks for your enthusiasm and support. Is there a formal process by which Beth can be notified regarding the potential for a book review?
> Best wishes,
> Cathrene
From kplakits@gmail.com Wed Jul 25 08:25:10 2018
From: kplakits@gmail.com (Katerina Plakitsi)
Date: Wed, 25 Jul 2018 18:25:10 +0300
Subject: [Xmca-l] Re: A new and enlarged edition of "Vygotsky and Creativity" has been published
In-Reply-To: 
References: <82E5CFCF-4BFE-4081-958F-CDE7CB2E3E78@unco.edu> <6E32CCD4-0B53-49E9-A866-F89EED6A8939@yahoo.com>
Message-ID: 

Thanks!

--
Katerina Plakitsi
ISCAR President
University of Ioannina, Greece
From anamshane@gmail.com Wed Jul 25 08:27:27 2018
From: anamshane@gmail.com (Ana Marjanovic-Shane)
Date: Wed, 25 Jul 2018 11:27:27 -0400
Subject: [Xmca-l] Re: A new and enlarged edition of "Vygotsky and Creativity" has been published
In-Reply-To: 
References: <82E5CFCF-4BFE-4081-958F-CDE7CB2E3E78@unco.edu> <6E32CCD4-0B53-49E9-A866-F89EED6A8939@yahoo.com>
Message-ID: 

Thanks, Mike!

I'll make sure Beth gets a copy of the book for review (and does not have to give up her own).

Ana

--
Ana Marjanovic-Shane, Ph.D.
Dialogic Pedagogy Journal, deputy Editor-in-Chief (dpj.pitt.edu)
Independent Scholar, Professor of Education
e-mail: anamshane@gmail.com
Phone: +1 267-334-2905
From andyb@marxists.org Wed Jul 25 18:54:01 2018
From: andyb@marxists.org (Andy Blunden)
Date: Thu, 26 Jul 2018 11:54:01 +1000
Subject: [Xmca-l] Re: Swedish activist Elon Ersson wins the day
In-Reply-To: <23cdea4a-7907-f3d5-3661-1a502fb4d569@marxists.org>
References: <1f25b17a-9f22-c24b-7098-dfc8d8658ecc@marxists.org> <23cdea4a-7907-f3d5-3661-1a502fb4d569@marxists.org>
Message-ID: 

... to continue this dialogue on winning and losing: a now-departed friend of mine, a writer, once commented to me, after we had watched an inspiring play performed by Melbourne Workers' Theatre together, that for the working class *every* struggle, every story of victory, ends in defeat, simply because the object of the workers' movement lies, if at all, in the future; the road to socialism is a series of small victories followed by defeats. Until .... So Elon is acting in a fine tradition.

The distinction between goal and object (by whatever names) was relevant for the recent xmca discussion around the Brazilian social movements, which kept popping up with different goals but, one suspects, shared a common object.

Andy

PS. For the distinction between goal and object, I rely on A. N. Leontyev's succinct definition of action: "Processes, the object and motive of which do not coincide with one another, we shall call 'actions'." But the choice of words for object, goal, aim, motive, etc. is problematic. I have chosen "object" for what Hegel calls "Intention" and Leontyev calls "motivation", and "goal" for what ANL calls "object" in the above quote.

------------------------------------------------------------
Andy Blunden
http://www.ethicalpolitics.org/ablunden/index.htm
From huw.softdesigns@gmail.com Thu Jul 26 02:13:16 2018
From: huw.softdesigns@gmail.com (Huw Lloyd)
Date: Thu, 26 Jul 2018 10:13:16 +0100
Subject: [Xmca-l] Object of activity (was: Swedish activist Elon Ersson wins the day)
Message-ID: 

Since my original endeavours I have switched to referring to the task goal or goal of activity, in conformance with Bedny et al.'s terminology. Personally this does not change my systemic formulations, but it does seem to point to holes in others', whilst reducing ambiguity.

"An object of activity that can be material or mental (symbols, images, etc.) is something that can be modified by a subject according to the activity goal (Bedny and Karwowski, 2007; Leont'ev, 1981; Rubinshtein, 1957; Zinchenko, 1995)." Bedny (2015, p. 91)

This is from the chapter "Basic Concepts and Terminology", which offers further elaboration (ref below).

Best,
Huw

Bedny, G. Z. (2015). Application of Systemic-Structural Activity Theory to Design and Training. Boca Raton: CRC Press.
From simangele.mayisela@wits.ac.za Thu Jul 26 02:54:49 2018
From: simangele.mayisela@wits.ac.za (Simangele Mayisela)
Date: Thu, 26 Jul 2018 09:54:49 +0000
Subject: [Xmca-l] Re: A new and enlarged edition of "Vygotsky and Creativity" has been published
In-Reply-To: 
References: 
Message-ID: <136A8BCDB24BB844A570A40E6ADF5DA8E77531F2@Ekho.ds.WITS.AC.ZA>

Thank you and congrats! I can't wait to lay my hands on it.

Regards,
S'ma

Dr. Simangele Mayisela (PhD)
Senior Lecturer / Educational Psychologist
Department of Psychology
School of Human and Community Development
University of the Witwatersrand
Tel: +27 11 717 4529

"All truths are easy to understand once they are discovered; the point is to discover them" - Galileo Galilei
From andyb@marxists.org Thu Jul 26 03:28:09 2018
From: andyb@marxists.org (Andy Blunden)
Date: Thu, 26 Jul 2018 20:28:09 +1000
Subject: [Xmca-l] Re: Object of activity (was: Swedish activist Elon Ersson wins the day)
In-Reply-To: 
References: 
Message-ID: <0818a714-ff40-d3bc-4b8d-76c63806708f@marxists.org>

So "object" in your sense is the same sense in which Engestrom uses "object." This is something quite different from "goal" in Leontyev's sense, which is what the subject intends to transform the object into. Except that the concept of "goal" (intention) does not exist in Engestrom's system, only "outcome", which is clearly not the same thing as "goal", because things don't always go as intended. But from what I gather of "according to the activity goal", the "activity goal" is what Leontyev called the "motivation" - the reason for doing something. What you (and Engestrom) are calling "object" is like what Marx refers to as Arbeitsgegenstand - the "object of labour" (the "something" in your quote) whose form is changed. I think that's the Russian predmet. Fair enough.

So you are contrasting "task goal" and "goal of activity". Fair enough, but isn't it confusing to use "goal" for both? That means you can never use the word "goal" without qualifying it as the "task goal" or the "goal of activity".

Andy

------------------------------------------------------------
Andy Blunden
http://www.ethicalpolitics.org/ablunden/index.htm
> > "An object of activity that can be material or mental > (symbols, images, etc.) is something that can be modified > by a subject according to the activity goal (Bedny and > Karwowski, 2007; Leont?ev, 1981; Rubinshtein, 1957; > Zinchenko, 1995)." Bedny (2015, p. 91) > > This is from the chapter "Basic Concepts and Terminology" > which offers further elaboration (ref below). > > Best, > Huw > > Bedny, G. Z. (2015) /Application of Systemic-Structural > Activity Theory to Design and Training/. Boca Raton: CRC Press > > On 26 July 2018 at 02:54, Andy Blunden > wrote: > > ... to continue this dialogue on winning and losing, a > now-departed friend who was a writer once commented to > me after we had together watched an inspiring play > performed by Melbourne Workers' Theatre, that for the > working class *every* struggle, every story of > victory, ends in defeat, simply because the object of > the workers' movement lies if at all in the future; > the road to socialism is a series of small victories > followed by defeats. Until .... So Elon is acting in a > fine tradition. > > The distinction between goal and object (by whatever > names) was relevant for the recent xmca discussion > around the Brazilian social movements, which kept > popping up with different goals, but, one suspects, > shared a common object. > > Andy > > PS. For the distinction between goal and object, I > rely on A N Leonytev's succinct definition of action: > "Processes, the object and motive of which do not > coincide with one another, we shall call ?actions?." > but choice of words for object, goal, aim, motive, > etc., is problematic. I have chosen "object" for what > Hegel calls "Intention" and Leontyev calls > "motivation" and "goal" for what ANL calls "object" in > the above quote. > > ------------------------------------------------------------ > Andy Blunden > http://www.ethicalpolitics.org/ablunden/index.htm > > On 25/07/2018 10:21 PM, Andy Blunden wrote: >> >> She achieved her goal. Her object will take longer to >> realise. Important to recognise the difference. >> >> Andy >> >> ------------------------------------------------------------ >> Andy Blunden >> http://www.ethicalpolitics.org/ablunden/index.htm >> >> On 25/07/2018 10:18 PM, Julian Williams wrote: >>> >>> Andy >>> >>> >>> >>> She wins and yet she doesn?t ? the guy she went to >>> ?rescue? was deported on another flight, but she got >>> the support of people on the plane (some even joined >>> her protest) and is being applauded by millions >>> worldwide now: this is a growing aspect of >>> resistance activism, losing and winning. >>> >>> >>> >>> And the battle against deportations, and indeed >>> fascism, in Sweden and elsewhere continues?. >>> >>> >>> >>> Julian >>> >>> >>> >>> *From: * >>> on behalf >>> of Andy Blunden >>> >>> *Reply-To: *"eXtended Mind, Culture, Activity" >>> >>> >>> *Date: *Wednesday, 25 July 2018 at 13:11 >>> *To: *"xmca-l@mailman.ucsd.edu" >>> >>> >>> >>> *Subject: *[Xmca-l] Re: Swedish activist Elon Ersson >>> wins the day >>> >>> >>> >>> Yes, you can see the stress on his young women's >>> face and she stands strong under enormous pressure >>> and she wins. Wonderful! 
From huw.softdesigns@gmail.com Thu Jul 26 06:46:26 2018
From: huw.softdesigns@gmail.com (Huw Lloyd)
Date: Thu, 26 Jul 2018 14:46:26 +0100
Subject: [Xmca-l] Re: Object of activity (was: Swedish activist Elon Ersson wins the day)
In-Reply-To: <0818a714-ff40-d3bc-4b8d-76c63806708f@marxists.org>
References: <0818a714-ff40-d3bc-4b8d-76c63806708f@marxists.org>
Message-ID: 

In this terminology the object is simply the artefact pertaining to the activity. I doubt very much whether there is alignment with Engestrom beyond, potentially, some basic referents.

As I said, the terms do not change my own system of relations. I simply bow to a custom articulated by a Russian speaker with a long history in the tradition of activity theory.

On the matter of multiple goals, this is not ambiguous, to the degree that it reflects the nesting that takes place in such activity, i.e. the plurality is authentic.

If you wish to engage any thinking in the matter, I suggest you'd be better off starting from Gregory Bedny's chapter. I'll email Gregory to see if he is willing to share the chapter.

Best,
Huw
From andyb@marxists.org Thu Jul 26 06:53:30 2018
From: andyb@marxists.org (Andy Blunden)
Date: Thu, 26 Jul 2018 23:53:30 +1000
Subject: [Xmca-l] Re: Object of activity (was: Swedish activist Elon Ersson wins the day)
In-Reply-To: 
References: <0818a714-ff40-d3bc-4b8d-76c63806708f@marxists.org>
Message-ID: <6be8a6be-907e-6373-3c7e-c62e7944e454@marxists.org>

Sure, the terminology is so variable that it is the meaning, not the word, which must be paid attention to.

But it is not about *multiple* goals, or *plurality*. The crucial distinction, the distinction which is constitutive of consciousness, is between the "task goal" and the reason for the task. That's a definite "two-ness", though this does not rule out "plurality".

a

------------------------------------------------------------
Andy Blunden
http://www.ethicalpolitics.org/ablunden/index.htm
From julian.williams@manchester.ac.uk Thu Jul 26 08:54:53 2018
From: julian.williams@manchester.ac.uk (Julian Williams)
Date: Thu, 26 Jul 2018 15:54:53 +0000
Subject: [Xmca-l] Re: Object of activity (was: Swedish activist Elon Ersson wins the day)
In-Reply-To: <6be8a6be-907e-6373-3c7e-c62e7944e454@marxists.org>
References: <0818a714-ff40-d3bc-4b8d-76c63806708f@marxists.org> <6be8a6be-907e-6373-3c7e-c62e7944e454@marxists.org>
Message-ID: <5ADE98C5-136A-49A2-9EB1-BD39F0CD8ABA@manchester.ac.uk>

Andy/Huw and all

Elin (sorry, I put Elon by mistake in my original) went to the airport to rescue a young refugee from deportation. I guess this was a coordinated effort by her refugee campaign group, who would have helped her plan, buy the ticket, etc. (the Americans would have dragged them off and sent the lot to Guantanamo Bay on conspiracy charges...). But in fact she found a different, older refugee was being deported (the young man they expected to be there having been deported on a different route). So she did not fulfil her initial intended conscious goal, but something happened during the activity: she still got a refugee off the plane and appeared to win the day. The initial goal was not achieved, but a new goal developed during the activity that made complete sense within the activity context. The activity was not just about the young man (or the older man), obviously.

Ultimately, anyway, this older refugee will probably (like the workers' struggles Andy mentions) also be deported at a later date. A loss, then, because even the amended goal (to rescue him and save his life) will be undone; but it would still be right to say that the action/activity was successful, because the campaign continues more strongly, and many more people know what is going on "in our name" than did before.

The idea of goals and motives developing in activity is an important one (in any terminology), and I think Leontiev affords that by making the distinction (and offering a potential contradiction) between individual conscious acts (related to "goals") and the object-motive of collective activity (which rarely aligns with the conscious goals of the many acting subjects jointly engaged). A student may study the text because it is required for the exam (e.g. a history text), but become interested in it for its own sake, developing a new social motive of the wider history-object (to make sure history doesn't repeat itself!).

What is not clear in Leontiev, I think, is that actions sometimes have conscious goals/motives at several levels: I think Elin knew what she was doing in the Particular case, but also acted consciously with a Universal principle in mind - this might help explain how she was so easily able to change the particular goal in line with the more general principle. And winning over a bunch of passengers on that particular flight was an important moment - the football team that stood up too, supportively refusing to sit down, is a symbol for footballers everywhere - while technology linked that to a worldwide movement.

On the Object: I find in English-language texts (which are all I can read) that the conception of Object is very slippery, yes: a lot has been written about this on xmca in the past. But I quite like this slipperiness, because it better suits a dialectic, where the "thing" being worked on changes/develops over time and space, and over the consciousnesses of the many subjects working on it.

But if someone could help nail all this down conceptually, I think it would clarify a lot for us.

Julian
From peg.griffin@att.net Thu Jul 26 10:08:07 2018
From: peg.griffin@att.net (peg.griffin)
Date: Thu, 26 Jul 2018 13:08:07 -0400
Subject: [Xmca-l] Re: Object of activity (was: Swedish activist Elon Ersson wins the day)
In-Reply-To: <5ADE98C5-136A-49A2-9EB1-BD39F0CD8ABA@manchester.ac.uk>
Message-ID: <201807261708.w6QH8jUH016075@mailman.ucsd.edu>

Thanks for this thread, with its good reading and thinking jogs. Maybe the following can be of interest:

I find that in many contemporary US resistance groups, there's planning almost to the point of choreography, with several alternative responses to possible conditions and acts by others on the scene. And there are lots of rehearsals and art builds as the plan begins and metamorphoses - parts designed for memorable attention and parts for hiding in plain sight before & after the denouement. While the front-line members of the group hold sway in the planning, allies & other supporters participate firmly in the making and doing. So, yes, maybe a universal principle, but also a hell of a lot of coordinated, long-term, hard work.

I hope some young CHAT folks are doing well-documented participant observation and interviews, contributing to the ever-evolving but not really improvisational resistance in different locales, its sociocultural history, and its potential contribution to theory building and testing.

BTW, the large group from J21, arrested en masse on inauguration day in DC after various militarized police herded and corralled them, have just about all now had their cases dropped. And did you know that the ACLU helped found an active group called People Power?

Peg

PS: If any of you are Code Pink people, kudos & grins to you! Nice to see so many pink and older folks in the often anti-fa wing.
The idea of goals and motives developing in activity is an important one (in any terminology) and I think Leontiev affords that by making the distinction (and offering a potential contradiction) between individual conscious acts (related to "goals") and the object-motive of collective activity (which rarely aligns with the conscious goals of many of the acting subjects jointly engaged). A student may study the text because it is required for the exam (e.g. a history text), but become interested in it for its own sake, developing a new social motive of the wider history-object (to make sure history doesn't repeat itself!)

What is not clear in Leontiev, I think, is that actions sometimes have conscious goals/motives at several levels: I think Elin knew what she was doing in the Particular case, but also acted consciously with a Universal principle in mind - this might help explain how she was so easily able to change the particular goal in line with the more general principle. And winning over a bunch of passengers on that particular flight was an important moment - the football team that also stood up, maybe supportively refusing to sit down, is a symbol for footballers everywhere - while technology linked that to a worldwide movement.

On the Object: I find in English-language texts (which is all I can read) that the conception of Object is very slippery, yes: a lot has been written about this on xmca in the past. But I quite like this slipperiness, because it better suits a dialectic, where the "thing" being worked on changes/develops over time and space, and over the consciousnesses of the many subjects working on it.

But if someone could help nail all this down conceptually, I think it would help clarify things for a lot of us.

Julian

From: on behalf of Andy Blunden
Reply-To: "eXtended Mind, Culture, Activity"
Date: Thursday, 26 July 2018 at 14:55
To: "xmca-l@mailman.ucsd.edu"
Subject: [Xmca-l] Re: Object of activity (was: Swedish activist Elon Ersson wins the day)

Sure, the terminology is so variable that it is the meaning, not the word, which must be paid attention to. But it is not about *multiple* goals, or *plurality*. The crucial distinction, the distinction which is constitutive of consciousness, is between the "task goal" and the reason for the task. That's a definite "two-ness." Though, this does not rule out "plurality."

a
Andy Blunden
http://www.ethicalpolitics.org/ablunden/index.htm

On 26/07/2018 11:46 PM, Huw Lloyd wrote:

In this terminology the object is simply the artefact pertaining to the activity. I doubt very much whether there is alignment with Engestrom other than potentially some basic referents.

As I said, the terms do not change my own system of relations. I simply bow to a custom articulated by a Russian speaker with a long history in the tradition of activity theory.

On the matter of multiple goals, this is not ambiguous to the degree that it reflects the nesting that takes place in such activity, i.e. the plurality is authentic.

If you wish to engage in any thinking on the matter, I suggest you'd be better off starting from Gregory Bedny's chapter. I'll email Gregory to see if he is willing to share the chapter.

Best, Huw

On 26 July 2018 at 11:28, Andy Blunden wrote:

So "object" in your sense, the same sense in which Engestrom uses "object." This is something quite different from "goal" in Leontyev's sense, which is what the subject intends to transform the object into.
Except that that concept of "goal" (intention) does not exist in Engestrom's system, only "outcome", which is clearly not the same thing as "goal" because things don't always go as intended. But from what I gather of "according to the activity goal", the "activity goal" is what Leontyev called the "motivation" - the reason for doing something. What you (and Engestrom) are calling "object" is like what Marx refers to as Arbeitsgegenstand - or "object of labour" (the "something" in your quote) whose form is changed. I think that's the Russian predmet. Fair enough.

So you are contrasting "task goal" and "goal of activity". Fair enough, but isn't it confusing to use "goal" for both? That means you can never use the word "goal" without qualifying it as the "task goal" or the "goal of activity".

Andy
Andy Blunden
http://www.ethicalpolitics.org/ablunden/index.htm

On 26/07/2018 7:13 PM, Huw Lloyd wrote:

Since my original endeavours I have switched to referring to the task goal or goal of activity, in conformance with Bedny et al's terminology. Personally this does not change my systemic formulations, but it does seem to point to holes in others', whilst reducing ambiguity.

"An object of activity that can be material or mental (symbols, images, etc.) is something that can be modified by a subject according to the activity goal (Bedny and Karwowski, 2007; Leont'ev, 1981; Rubinshtein, 1957; Zinchenko, 1995)." Bedny (2015, p. 91)

This is from the chapter "Basic Concepts and Terminology" which offers further elaboration (ref below).

Best, Huw

Bedny, G. Z. (2015) Application of Systemic-Structural Activity Theory to Design and Training. Boca Raton: CRC Press
-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://mailman.ucsd.edu/pipermail/xmca-l/attachments/20180726/cd2544f2/attachment.html

From huw.softdesigns@gmail.com Thu Jul 26 14:14:49 2018
From: huw.softdesigns@gmail.com (Huw Lloyd)
Date: Thu, 26 Jul 2018 22:14:49 +0100
Subject: [Xmca-l] Re: Object of activity (was: Swedish activist Elon Ersson wins the day)
In-Reply-To: <5ADE98C5-136A-49A2-9EB1-BD39F0CD8ABA@manchester.ac.uk>
References: <0818a714-ff40-d3bc-4b8d-76c63806708f@marxists.org> <6be8a6be-907e-6373-3c7e-c62e7944e454@marxists.org> <5ADE98C5-136A-49A2-9EB1-BD39F0CD8ABA@manchester.ac.uk>
Message-ID:

Some of the content of the following paper (also attached) looks similar to the previous chapter I mentioned; I have merely scanned the relevant sections.

https://www.researchgate.net/publication/261878261_Applicat_Basic_Terminolgy

Having a moving target is compatible, Julian. Although one should bear in mind that SSAT is concerned with technicalities addressing measuring systems of performance, ergonomics, etc.

Best, Huw
-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://mailman.ucsd.edu/pipermail/xmca-l/attachments/20180726/28bedea0/attachment-0001.html
-------------- next part --------------
A non-text attachment was scrubbed...
Name: Applicat_Basic_Terminolgy.pdf
Type: application/pdf
Size: 284450 bytes
Desc: not available
Url: http://mailman.ucsd.edu/pipermail/xmca-l/attachments/20180726/28bedea0/attachment-0001.pdf

From andyb@marxists.org Thu Jul 26 18:47:00 2018
From: andyb@marxists.org (Andy Blunden)
Date: Fri, 27 Jul 2018 11:47:00 +1000
Subject: [Xmca-l] Re: Object of activity (was: Swedish activist Elon Ersson wins the day)
In-Reply-To:
References: <0818a714-ff40-d3bc-4b8d-76c63806708f@marxists.org> <6be8a6be-907e-6373-3c7e-c62e7944e454@marxists.org> <5ADE98C5-136A-49A2-9EB1-BD39F0CD8ABA@manchester.ac.uk>
Message-ID: <363777f6-aec2-fed5-ddc3-4bf87bdff9be@marxists.org>

Julian,

In Activity Theory, the idea of the 'germ cell' has partially incorporated the fact that projects 'evolve' as they are realised. It has always amused me that the Russian word /proyekt/ translates as either 'project' or 'design'. The research into "design projects," like Mike's work on 5D, is based on the idea that goals evolve in the course of a subject-object interaction. My main criticism of the Engestrom approach is that it does not theorise /mis/conceptions, only concretisation.

I can see your logic in theorising Elin's project in terms of particular and universal. Certainly, we can make no progress without understanding that the goal is a /concept/, and I think it is a weakness of the Leontyev approach that it insists on the goal being objective, rather than a concept, which inherently contains unforeseen contradictions and misconceptions in its subjective/objective unfolding. However, universal/particular does not quite capture the essential issues here. In Hegel's theorising, the transformation of Purpose into Intention is a matter of /realisation/, which hinges on the idea that there can be no unproblematic boundary placed around the object - the initial act necessarily has unforeseen consequences, which go beyond what the actor can reasonably foresee. As a result, Hegel was not a supporter of civil disobedience. Obviously, I part company with Hegel there!
I agree that the concept of 'object' is inherently slippery. And it is no less slippery in Russian or German than it is in English. My take on this is in https://www.ethicalpolitics.org/ablunden/pdfs/Concept%20of%20Object.pdf . I haven't read the text that Huw provided, but I will. Hegel's take on all this, which I think, for all its faults, is more adequate than any past formulation by CHAT writers, I review here: https://www.ethicalpolitics.org/ablunden/pdfs/Hegel%20on%20action.pdf

The points you raise, Julian, are that Elin acted as part of a movement, not as an individual, and that her goals were those of the movement, not just her own, and that the goal (task) which was realised necessarily differed from the goal which was initially conceptualised - but in either case, the goal can only be understood in the context of the universal /reason for doing it/, the concept which the movement had of its object (motive, activity goal), which is realised only by multiple actions by multiple actors over time, and is in any case never what anyone actually conceived of in the beginning. Hegel calls this Welfare, the final outcome of Intentions (which are not just subjective for Hegel, and nor are they simply objective, but concepts which evolve from subjective to objective as they are realised, and the individual's action interacts with the world and ripples spread across the surface of the pond). Elin of course acted very much with a consciousness that the object (in the sense of 'means') is unbounded - that is, of the impact of her actions on the entire social and political world; she acts not just on an airline crew but on the entire political set-up. This was a crucial issue on Mike's 5D Project - instances of 5D sank or swam as an outcome of changes in the entire socio-political system as it was realised in the particular community.

Andy
------------------------------------------------------------
Andy Blunden
http://www.ethicalpolitics.org/ablunden/index.htm
-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://mailman.ucsd.edu/pipermail/xmca-l/attachments/20180727/c81399ae/attachment.html

From andyb@marxists.org Thu Jul 26 19:45:29 2018
From: andyb@marxists.org (Andy Blunden)
Date: Fri, 27 Jul 2018 12:45:29 +1000
Subject: [Xmca-l] Re: Object of activity (was: Swedish activist Elon Ersson wins the day)
In-Reply-To:
References: <0818a714-ff40-d3bc-4b8d-76c63806708f@marxists.org> <6be8a6be-907e-6373-3c7e-c62e7944e454@marxists.org> <5ADE98C5-136A-49A2-9EB1-BD39F0CD8ABA@manchester.ac.uk>
Message-ID: <3fcc7df4-b865-ff27-c780-b95684286093@marxists.org>

Thank you for providing that text, Huw.

From my point of view, this paper simply goes from confusion to even deeper confusion. It is basically a reconciliation of CHAT with what Vygotsky called "analysis by elements," not units. -- "The main units of activity analysis are mental or cognitive and behavioural actions." Psychology as a branch of engineering.

An interesting illustration of Julian's observation about the "slipperiness" of concepts of object is that Bedny insists on translating /predmet/ as 'subject'. I could not count how many times I have read about translating /predmet/ as 'object' in Leontyevian AT. Just as translators of Marx render /Arbeitsgegenstand/ as "subject of labour" in translating Capital, but render /Gegenstand/ as "object" in translating Theses on Feuerbach. This is no-one's "fault" - it's in the nature of the concepts. :)

I don't wish to go on, Huw. This current of AT is new to me and I do not recall coming across any supporters of it before now.
Andy
------------------------------------------------------------
Andy Blunden
http://www.ethicalpolitics.org/ablunden/index.htm
-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://mailman.ucsd.edu/pipermail/xmca-l/attachments/20180727/d79c9492/attachment.html

From mcole@ucsd.edu Sat Jul 28 08:31:56 2018
From: mcole@ucsd.edu (mike cole)
Date: Sat, 28 Jul 2018 08:31:56 -0700
Subject: [Xmca-l] Fwd: [COGDEVSOC] Seeking candidates for developmental position
In-Reply-To:
References:
Message-ID:

Jobs

---------- Forwarded message ---------
From: Cushman, Fiery Andrews
Date: Sat, Jul 28, 2018 at 5:44 AM
Subject: [COGDEVSOC] Seeking candidates for developmental position
To: cogdevsoc@lists.cogdevsoc.org

Dear Colleague:

The Harvard University Psychology Department seeks to hire Assistant Professors in the areas of *Developmental Psychology* and *Clinical Psychology/Clinical Science*. We hope to recruit psychologists who are conducting excellent research and who can contribute to our teaching needs at the undergraduate and graduate levels. The Department at present consists of a highly integrated group of about 28 individuals in clinical psychology, cognitive psychology, developmental psychology, and social psychology who actively collaborate across labs.

We write to you even though we have placed our ad in all the standard sources. *It has been our experience that even the best candidates do not always apply for the jobs for which they are qualified. We recognize that candidates are often not the best judge of their own accomplishments and that they may hold themselves back. We encourage you to circulate this letter broadly to your community through internal mail or listservs to announce our interest in any candidate who is interested in our Department and in this position.*

*Developmental Psychology position*: The application deadline is September 15, 2018. Full advertisement: http://academicpositions.harvard.edu/postings/8336.

*Clinical Psychology/Clinical Science position*: The application deadline is October 1, 2018. Full advertisement: http://academicpositions.harvard.edu/postings/8335.

*If there are particularly strong candidates who may be reluctant to apply without additional encouragement, please feel free to send us their names so that we may invite them to apply.* Write to: mahzarin_banaji@harvard.edu.

Thank you in advance for your assistance in bringing these ads to the attention of your department, and especially for alerting us to any candidates early in their careers whom you believe we should consider.

Kind regards,

Mahzarin R. Banaji
Richard Clarke Cabot Professor of Social Ethics
Chair, Department of Psychology
Harvard University

_______________________________________________
To post to the CDS listserv, send your message to: cogdevsoc@lists.cogdevsoc.org
(If you belong to the listserv and have not included any large attachments, your message will be posted without moderation--so be careful!)
To subscribe or unsubscribe from the listserv, visit: http://lists.cogdevsoc.org/listinfo.cgi/cogdevsoc-cogdevsoc.org

-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://mailman.ucsd.edu/pipermail/xmca-l/attachments/20180728/a26e333d/attachment.html
-------------- next part --------------
A non-text attachment was scrubbed...
Name: image001.png
Type: image/png
Size: 1313 bytes
Desc: not available
Url: http://mailman.ucsd.edu/pipermail/xmca-l/attachments/20180728/a26e333d/attachment.png