From annalisa@unm.edu Sat Dec 21 11:22:49 2019
From: annalisa@unm.edu (Annalisa Aguilar)
Date: Sat, 21 Dec 2019 19:22:49 +0000
Subject: [Xmca-l] The ethics of artificial intelligence, past present and future
Message-ID:

Hello fellow and distant XMCArs,

So today I saw this in the Intercept and thought I would share it for your awareness, because of recent developments that likely impact you, namely:

* the neoliberalization of higher academic learning
* the compromise of privacy and civil life in the US and other countries
* the (apparently) hidden agenda of technology as it hard-wires biases and control over women, minorities, and other vulnerable people to reproduce past prejudices and power structures.

In my thesis I discuss historical mental models of mind and how they inform technology design. While reading for my thesis I was always bothered by the story of the AI Winter.

Marvin Minsky, an "august" researcher at MIT's labs of that period, discredited Frank Rosenblatt's work on perceptrons (which was reborn in the neural networks of the 1980s to early aughts). That act effectively cut off funding for legitimate research in AI and, through vicious academic politics, stymied anyone doing research that even smelled like perceptrons. Frank Rosenblatt died in 1971, likely feeling disgraced and ruined, never knowing the outcome of his life's work. It is a nightmare no academic would ever want.

Thanks to Hubert Dreyfus, we know this story, which is discussed in What Computers Still Can't Do: https://mitpress.mit.edu/books/what-computers-still-cant-do

Well, it turns out that Minsky has allegedly been tied up with Jeffrey Epstein and his exploitation of young women. This has recently been reported in an article by Rodrigo Ochigame of Brazil, who was a student of Joichi Ito, who ran the MIT Media Lab. We know that Ito's projects were funded by none other than Epstein, and this revelation forced Ito's resignation. Read about it here: https://theintercept.com/2019/12/20/mit-ethical-ai-artificial-intelligence/

I have not finished reading the article, because I had to stop just to pass this on to the list.

One might say that computer technology is by its very nature going to reproduce power structures, but I would rather say that our mental models are not serving us in creating the technological tools we require for an equitable society. How else can we free the tools from the power structures, if the only people who use them are those who perpetuate privilege and cheat, for example by thwarting academic freedom in its process? How can we develop equality in society if the tools we create come from inequitable motivations and interactions? Is it even possible?

As I see it, the ethics at MIT's labs reveal concretely how the Cartesian model of mind normalizes the mind of the privileged, and why only a holistic mental model provides safeguards against the biases that lead to these abuses. Models such as distributed cognition, CHAT, and similar constructs intertwine the threads of thought with the body, culture, history, tool-use, language, and society, because these models capture how environment develops mind, which in turn develops environment, and so on. Mind is not separate; in a certain sense, mind IS material and not disembodied. It is when mind is portrayed otherwise that the means of legitimizing abuse is given the nutrition to germinate unchecked.
I feel an odd confirmation, horrified as I am to learn of this new alleged connection of Minsky to Epstein, of the ways in which, as a society, we fool ourselves with these hyper-rational models that only reproduce abusive power structures.

That is how it is done.

It might also be a reminder to anyone who has been unethical that history has a way of revealing past deeds. Justice does come, albeit slowly.

Kind regards as we near the end of 2019,

Annalisa

From ewall@umich.edu Sat Dec 21 13:37:55 2019
From: ewall@umich.edu (Edward Wall)
Date: Sat, 21 Dec 2019 15:37:55 -0600
Subject: [Xmca-l] Re: The ethics of artificial intelligence, past present and future
In-Reply-To:
References:
Message-ID: <764B6D2E-6C90-4C9B-8E39-926FB4813AE0@umich.edu>

Annalisa

In my read, when Dreyfus wrote the book you reference, he believed that 'mind' was neither 'material' nor 'mental.' On the other hand, I have often wondered if 'minds' aren't 'material.'

Ed Wall

Imagination was given to man to compensate him for what he is not, and a sense of humor was provided to console him for what he is.

> On Dec 21, 2019, at 1:22 PM, Annalisa Aguilar wrote:
>
> [...]
From annalisa@unm.edu Sat Dec 21 21:03:49 2019
From: annalisa@unm.edu (Annalisa Aguilar)
Date: Sun, 22 Dec 2019 05:03:49 +0000
Subject: [Xmca-l] Re: The ethics of artificial intelligence, past present and future
In-Reply-To: <764B6D2E-6C90-4C9B-8E39-926FB4813AE0@umich.edu>
References: , <764B6D2E-6C90-4C9B-8E39-926FB4813AE0@umich.edu>
Message-ID:

Hi Ed,

Regarding Dreyfus, I don't recall him asserting whether mind is material or not, though it's been almost 10 years since I read the book.

I am compelled to say that minds are material in the same way that stories are material.

Consider a few analogies.

A book is material; the words are printed ink on the paper of its pages. But without the book present, the story will not manifest in the mind of a reader (provided the book is written in a language the reader knows). Is the story not material if it is located in this book and not in that one? The story can also exist outside the book, in the memories of a person, but the person too is material.

The light in an electric light bulb is there when electricity passes through the filament, and not when the electricity is absent. We know, thanks to Einstein, that light is energetic material that travels very fast. The filament is gross material; the electricity is subtle, as is the light; but all three are material.

I assert that a mind, too, is subtle energy passing through a brain, which is a conglomerate of neuronal connections of grey matter.

I see the physical and transactional world as material of infinitely graded properties, subtle to gross, in different combinations of active qualities. In the same way that the story resides in the book and the light resides in, or emanates from, the bulb, the subtle permeates the gross.

A more perfect illustration is the red-hot iron ball.
Iron and fire are in the same location; one is gross, the other subtle. But both are material. What can happen, however, is that if we do not know the properties of iron (heavy and round) or of fire (red and hot), we can superimpose one element upon the other (i.e., assert that fire is heavy and round, while iron is hot and red), and this is easy to do because they are present in the same location perceptually; we cannot remove the iron from the fire or vice versa. (Though if you are a blacksmith you can quench the iron in water, extinguishing the fire, I suppose; but you get my point, I hope.)

With this in mind, is it possible to assert that ethics is also a material entity? Whereby ethical conduct is that which possesses the most truth and the most harmony for the largest part of society, while also holding the same for the individual.

Can ethical conduct have universal laws like physics? If so, it might be an attainable goal to create the ethical algorithm. Yet the weirdness enters when we consider whether it is ethical to train a computer to learn and improve an algorithm until it is "perfectly ethical," if what it needs to do to get there is fail several times before it can actually become perfect. How many failures should there be before it is no longer ethical to continue training the computer?

I would say it is not ethical to do that if it means, for example, surveilling a population with face recognition technology until it is able to perfectly distinguish a criminal from his or her doppelganger. There will always be the risk of accusing an innocent person, which is not ethical.

Algorithms usually don't take context into consideration. I recall that Rosenblatt's work on perceptrons was a way to create context, by computers learning about contexts (by sensing). That might actually be safer than constructing algorithms.

You have to wonder what computers would be like now if Rosenblatt had been able to pursue his work unfettered by Minsky and others from MIT back then.

Academic freedom must be protected. On that I hope we can agree!

Kind regards,

Annalisa

________________________________
From: xmca-l-bounces@mailman.ucsd.edu on behalf of Edward Wall
Sent: Saturday, December 21, 2019 2:37 PM
To: eXtended Mind, Culture, Activity
Subject: [Xmca-l] Re: The ethics of artificial intelligence, past present and future

[...]
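An aside on the doppelganger point above: the risk Annalisa describes is the classic base-rate problem. When genuine matches are rare, even a very accurate recognizer flags mostly innocent people. A minimal sketch in Python; every number in it (population size, suspect count, error rates) is a hypothetical assumption for illustration, not a figure from the thread:

# Base-rate sketch: why a highly accurate face-recognition system
# still accuses mostly innocent people when scanning a population.
# All numbers below are hypothetical assumptions for illustration.

population = 1_000_000      # people scanned (assumed)
true_suspects = 100         # actual wanted individuals among them (assumed)
sensitivity = 0.99          # P(system flags | person is a suspect) (assumed)
false_positive_rate = 0.01  # P(system flags | person is innocent) (assumed)

true_hits = true_suspects * sensitivity
false_hits = (population - true_suspects) * false_positive_rate

# Bayes' rule: probability that a flagged person really is a suspect
precision = true_hits / (true_hits + false_hits)

print(f"innocent people flagged: {false_hits:,.0f}")
print(f"chance a flagged person is a real suspect: {precision:.1%}")
# ~9,999 innocents flagged vs ~99 real hits: about 1% of flags are correct.

Under these assumptions the system flags roughly 9,999 innocent people for every 99 genuine matches, so only about one flag in a hundred points at an actual suspect, however honestly the system is operated.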
From andyb@marxists.org Sat Dec 21 22:42:22 2019
From: andyb@marxists.org (Andy Blunden)
Date: Sun, 22 Dec 2019 17:42:22 +1100
Subject: [Xmca-l] Re: The ethics of artificial intelligence, past present and future
In-Reply-To:
References: <764B6D2E-6C90-4C9B-8E39-926FB4813AE0@umich.edu>
Message-ID: <5e5b54ae-d99f-f4cf-946d-d9c3bc227c3e@marxists.org>

What is /not/ material?

------------------------------------------------------------
*Andy Blunden*
Hegel for Social Movements
Home Page

On 22/12/2019 4:03 pm, Annalisa Aguilar wrote:
> [...]
From billkerr@gmail.com Sat Dec 21 22:49:53 2019
From: billkerr@gmail.com (Bill Kerr)
Date: Sun, 22 Dec 2019 16:19:53 +0930
Subject: [Xmca-l] Re: The ethics of artificial intelligence, past present and future
In-Reply-To:
References: <764B6D2E-6C90-4C9B-8E39-926FB4813AE0@umich.edu>
Message-ID:

> You have to wonder what computers would be like now if Rosenblatt had been able to pursue his work unfettered by Minsky and others from MIT back then.
>
> Academic freedom must be protected. On that I hope we can agree!

Perceptrons was written by Minsky and Papert in 1969. Many have argued, and I agree with them, that their other work kicked off an extremely rich field of educational computing (call it the MIT group if you want), which persists in numerous branches today: Scratch 3.0, App Inventor, and Makey Makey all came out of MIT, not to mention the associated theoretical work.

I googled Perceptrons and it confirmed what I thought: that it made legitimate criticisms of that path of research. It was a legitimate dispute between different approaches at that time. I can't evaluate it myself because my maths isn't good enough.
It is true that work on perceptrons dried up for quite a while after that, probably because no one could refute the critique by Minsky and Papert. Irrespective of that, in your words, Annalisa, we should support their academic freedom to argue their case. It is also true that parallel-processing perceptrons / neural networks have achieved remarkable things in recent years. It seems that Minsky and Papert made a legitimate criticism which ended up sidelining what turned out to be another rich research field.

Decades later Minsky, now dead, is accused of having sex with a 17-year-old at Epstein's compound when he was in his 70s. Therefore, what "Minsky and others" did before that is now suspect by association. Is that what you are arguing, Annalisa?

On Sun, Dec 22, 2019 at 2:37 PM Annalisa Aguilar wrote:

> [...]
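For readers without the maths background Bill mentions, the best-known part of Minsky and Papert's critique was a proof that a single-layer perceptron can only compute linearly separable functions, with XOR as the canonical counterexample; the multi-layer networks of the 1980s revival escape that limit. Here is a short illustrative Python sketch of the limitation, using Rosenblatt's perceptron learning rule; it is a toy demonstration, not anyone's historical implementation:

# Sketch of the limitation Minsky and Papert formalized: a single-layer
# perceptron learns linearly separable functions (AND) but not XOR.

def train_perceptron(samples, epochs=100, lr=0.1):
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out
            # Rosenblatt's rule: nudge weights toward the target on each error
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return lambda x1, x2: 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

inputs = [(0, 0), (0, 1), (1, 0), (1, 1)]
AND = [((x1, x2), x1 & x2) for x1, x2 in inputs]
XOR = [((x1, x2), x1 ^ x2) for x1, x2 in inputs]

for name, data in [("AND", AND), ("XOR", XOR)]:
    predict = train_perceptron(data)
    ok = all(predict(x1, x2) == target for (x1, x2), target in data)
    print(name, "learned" if ok else "NOT learned")

The perceptron converges on AND, which is linearly separable, and never on XOR, whatever the learning rate or number of epochs: no single line can separate XOR's two output classes.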
From annalisa@unm.edu Sun Dec 22 10:08:34 2019
From: annalisa@unm.edu (Annalisa Aguilar)
Date: Sun, 22 Dec 2019 18:08:34 +0000
Subject: [Xmca-l] Re: The ethics of artificial intelligence, past present and future
In-Reply-To:
References: <764B6D2E-6C90-4C9B-8E39-926FB4813AE0@umich.edu>
Message-ID:

Hi Bill,

Well, I think it has to do with the ethics, not the scientific arguments. If I read Dreyfus correctly, there was a lot of back-channeling that stained many legitimate researchers in the area of Rosenblatt's work, and this "witch hunting" dried up funding for anything related to it.

What I'm asserting is that there is something nefarious about the Cartesian model of mind, and that it inherently "protects" and legitimizes the downward slope of bad behavior.
Given that Damasio has shown that we feel before we reason (see Descartes' Error), and that we require sensing to make good decisions, if we deny sensory information then we can rationalize whatever we want, essentially suppressing any "inner compass" that affords our own welfare. I may not be remembering this exactly, but the gist is that Damasio's patient, who suffered damage in a part of the brain involved in feeling, was left unable to make good decisions, putting him in harm's way. He was a danger to himself.

If a person has been brought up to deny sensing as important information by which to orient, then is it a surprise that someone like Minsky would behave as he did?

Maybe I am painting with too broad strokes, but I tend to intuit that we have within us mechanisms that allow us to err on the side of "first do no harm," like the way mirror neurons work. I believe that culture can either suppress or enhance this biological construct with which we are born. Though there can be those who are not born that way, I intuit that it's not the norm; otherwise we would see a lot of humans running off cliffs like lemmings.

I'm arguing that a culture's model of mind can determine a lot about that culture's behavior. Like most models, if they are closer to the transactional world, they will be more "true"; if they are less accurate, they will lead many astray. Consider how the model of hysteria came to be used to control women (and still is, if we recollect the 2016 election).

Descartes' model of mind was constructed as an expedient measure to protect scientific research from the draconian persecution of the Church, a real threat to the lives of the European intellectual class. Now the model no longer works, yet we still hold on to it. It's over 400 years old!!! We've since moved on from Newtonian physics, but not from the mind/body split.

I'm asserting that the model of the mind/body split causes harm, and I'm illustrating how that might be so when we consider how it impacts something like the study of artificial intelligence, and the danger it poses to academic freedom. If you are willing to say that a scholar's research is different from his personal life when it comes to ethics, I don't accept it, because a person is a whole person and not divided, regardless of whether such a person subscribes or orients to the mind/body split.

Sure, it's entirely possible I'm off the mark, but I'm suggesting that just as racist models create certain harmful interactions in society, so it is with models of mind. The story in the Intercept confirms my point of view.

Kind regards,

Annalisa

________________________________
From: xmca-l-bounces@mailman.ucsd.edu on behalf of Bill Kerr
Sent: Saturday, December 21, 2019 11:49 PM
To: eXtended Mind, Culture, Activity
Subject: [Xmca-l] Re: The ethics of artificial intelligence, past present and future

> [...]
From annalisa@unm.edu Sun Dec 22 10:09:39 2019
From: annalisa@unm.edu (Annalisa Aguilar)
Date: Sun, 22 Dec 2019 18:09:39 +0000
Subject: [Xmca-l] Re: The ethics of artificial intelligence, past present and future
In-Reply-To: <5e5b54ae-d99f-f4cf-946d-d9c3bc227c3e@marxists.org>
References: <764B6D2E-6C90-4C9B-8E39-926FB4813AE0@umich.edu>, <5e5b54ae-d99f-f4cf-946d-d9c3bc227c3e@marxists.org>
Message-ID:

To answer your question, "What is not material?" I'd say: the thought that cannot be thought.

________________________________
From: xmca-l-bounces@mailman.ucsd.edu on behalf of Andy Blunden
Sent: Saturday, December 21, 2019 11:42 PM
To: xmca-l@mailman.ucsd.edu
Subject: [Xmca-l] Re: The ethics of artificial intelligence, past present and future

What is not material?

[...]
From glassman.13@osu.edu Sun Dec 22 11:06:22 2019
From: glassman.13@osu.edu (Glassman, Michael)
Date: Sun, 22 Dec 2019 19:06:22 +0000
Subject: [Xmca-l] Re: The ethics of artificial intelligence, past present and future
In-Reply-To:
References: <764B6D2E-6C90-4C9B-8E39-926FB4813AE0@umich.edu>
Message-ID:

Hi Annalisa and Ed,

I guess I hesitate to get involved in this discussion, somewhat, because it is based on work I was doing a couple of years ago. If I misremember or get something wrong, please let me know. But on my understanding, the neoliberalization of AI goes much deeper than Rosenblatt and perceptrons; some of the earliest work had little to do with academics and was in some ways a rebellion against them.

I believe the major fissure was between the more idealistic explorers on the West Coast - in particular those who worked in Engelbart's Augmentation Research Center, in the Homebrew Computer Club, and in the early days of the Palo Alto Research Center - and the East Coast establishment I describe below. Some of the people who formed the Homebrew Computer Club were influenced by Illich and his ideas of breaking away from systematic thinking - sort of an "every user a citizen of the world" mentality. I am pretty convinced from reading the history of the time that these early "programmers" went down to Illich's "language school" in Mexico and were influenced by what went on there. Freire was in residence for a good part of the time, and Heinz von Foerster (who I am becoming convinced was the great genius of the time nobody knows about) gave a summer seminar down there. There is no real evidence for this - nobody wrote anything down - but I think there is a good argument to be made. Much of this sprang from the commune and DIY (do-it-yourself) movement. As the impact of the Internet became more apparent there was an attempt to gain control, with AI being seen as a product of the system rather than something that fought against it.
Much of this battle emerged in the pages of Stewart Brand's CoEvolution Quarterly, with people like Illich and Bateson taking the view that our thinking follows the world, and others arguing that you need elites (I know a lot of people don't like this word) to create systems that control thinking for the good of mankind. The magazine Wired was started as a neoliberal answer to CoEvolution Quarterly and of course had more money and more play in the press. To this day I never much trust people who refer to Wired as a source of information and opinion. On the East Coast there was the rise of the MIT Media Lab, led by one of the Podhertz offspring, the champions of neoliberalism. I always wondered where they got their money and power, and finally the Epstein saga has made me realize they would do anything for money. They also started the TED series, where you have "great minds" telling you how they are developing systems that can make our lives better. Like Wired, I don't really trust TED. The Media Lab was/is supported by people who believe in the system.

Anyway, the West Coast adventurers soon had nowhere to go. They believed electronic media was a tool to free humans, but the idea that it was a tool to control humans (DIWYT - do what you're told - I just made that up) had all the money and all the power. They not only had funding from the government - which probably was not that much - they had funding from the rich, those whom Epstein buzzed around, working as a general con man. The way the Media Lab protected Epstein's money, if not Epstein, suggests a very ugly chapter that we may never be able to read (Podhertz was Epstein's chief apologist).

I believe there is a story to be written here, but I despair that nobody will ever be allowed to write it.

Michael

________________________________
From: xmca-l-bounces@mailman.ucsd.edu On Behalf Of Annalisa Aguilar
Sent: Sunday, December 22, 2019 1:09 PM
To: Bill Kerr; eXtended Mind, Culture, Activity
Subject: [Xmca-l] Re: The ethics of artificial intelligence, past present and future

Hi Bill,

Well, I think it has to do with the ethics, not the scientific arguments. If I read Dreyfus correctly, there was a lot of back-channeling that stained many legitimate researchers in the area of Rosenblatt's work, and this "witch hunting" dried up funding for anything related to it.

What I'm asserting is that there is something nefarious about the Cartesian model of mind: it inherently "protects" and legitimizes the downward slope of bad behavior.

Given that Damasio has shown that we feel before we reason (see Descartes' Error), and that we require sensing to make good decisions, if we deny sensory information then we can rationalize whatever we want, essentially suppressing any "inner compass" that affords our own welfare. I may not be remembering this exactly, but the gist is that Damasio's patient, who suffered damage to the part of the brain that senses, was left unable to make good decisions, putting him in harm's way. He was a danger to himself.

If a person has been brought up to deny sensing as important information by which to orient, does it surprise us that someone like Minsky would behave as he did? Maybe I am painting with too broad strokes, but I tend to intuit that we have within us mechanisms that allow us to err on the side of "first do no harm," like the way mirror neurons work.

I believe that culture can either suppress or enhance this biological construct with which we are born.
Though there can be those who are not born that way, I intuit that it's not the norm; otherwise we would see a lot of humans running off cliffs like lemmings.

I'm arguing that a culture's model of mind can determine a lot about that culture's behavior. Like most models, the closer they are to the transactional world, the more "true" they will be; the less accurate they are, the more they will lead people astray.

Consider how the model of hysteria came to be used to control women (and still is, if we recollect the 2016 election).

Descartes' model of mind was constructed as an expedient measure to protect scientific research from the draconian persecution of the Church, a real threat to the lives of the European intellectual class. Now the model no longer works, yet we still hold on to it. It's over 400 years old! We've since moved on from Newtonian physics, but not from the mind/body split.

I'm asserting that the mind/body split causes harm, and I'm illustrating how that might be so when we consider how it impacts something like the study of artificial intelligence, and the danger that poses to academic freedom.

If you are willing to say that a scholar's research is separate from his personal life when it comes to ethics, I don't accept it, because a person is a whole person and not divided, regardless of whether that person subscribes or orients to the mind/body split.

Sure, it's entirely possible I'm off the mark, but I'm suggesting that just as racist models create harmful interactions in society, so it is with models of mind. The story in the Intercept confirms my point of view.

Kind regards,

Annalisa

________________________________
From: xmca-l-bounces@mailman.ucsd.edu on behalf of Bill Kerr
Sent: Saturday, December 21, 2019 11:49 PM
To: eXtended Mind, Culture, Activity
Subject: [Xmca-l] Re: The ethics of artificial intelligence, past present and future

> You have to wonder what computers would be like now if Rosenblatt had been able to pursue his work unfettered by Minsky and others from MIT back then.
>
> Academic freedom must be protected. On that I hope we can agree!

Perceptrons was written by Minsky and Papert in 1969. Many have argued, and I agree with them, that their other work kicked off an extremely rich field of educational computing (call it the MIT group if you want) which persists in numerous branches today: Scratch 3.0, App Inventor, and Makey Makey all came out of MIT, not to mention the associated theoretical work.

I googled Perceptrons and it confirmed what I thought: the book made legitimate criticisms of that path of research. It was a legitimate dispute between different approaches at the time. I can't evaluate it myself because my maths isn't good enough.

It is true that work on perceptrons dried up for quite a while after that, probably because no one could refute the critique by Minsky and Papert. Irrespective, in your words Annalisa, we should support their academic freedom to argue their case.

It is also true that parallel-processing perceptrons / neural networks have achieved remarkable things in recent years. It seems that Minsky / Papert made a legitimate criticism which ended up sidelining what turned out to be another rich research field.

Decades later Minsky, now dead, is accused of having sex with a 17-year-old at Epstein's compound when he was in his 70s. Therefore, what "Minsky and others" did before that is now suspect by association. Is that what you are arguing, Annalisa?
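For readers who want the dispute made concrete, here is a minimal sketch of a Rosenblatt-style perceptron. The code and toy data are illustrative assumptions, not anything from the thread or from the 1969 book. It shows two things at once: the learner adjusts its weights only when it misclassifies, so failures are literally the raw material of training (Annalisa's ethical point above); and a single-layer perceptron masters a linearly separable function like AND but never settles on XOR, which is the kind of representational limit Minsky and Papert proved.

# Single-layer perceptron, in the spirit of Rosenblatt (1958).
# Toy data and all numbers are illustrative assumptions.

def failures_per_epoch(data, epochs=25, lr=0.1):
    """Train on (inputs, target) pairs; return the misclassification
    count for each pass through the data."""
    w, b = [0.0, 0.0], 0.0
    history = []
    for _ in range(epochs):
        failures = 0
        for (x0, x1), target in data:
            pred = 1 if w[0] * x0 + w[1] * x1 + b > 0 else 0
            err = target - pred
            if err:  # the weights move only on a failure
                failures += 1
                w[0] += lr * err * x0
                w[1] += lr * err * x1
                b += lr * err
        history.append(failures)
    return history

AND_DATA = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
XOR_DATA = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

print("AND:", failures_per_epoch(AND_DATA))  # falls to 0 and stays there
print("XOR:", failures_per_epoch(XOR_DATA))  # never reaches 0

The XOR failure count cycles indefinitely because no single line separates its classes; adding hidden layers removes that limitation, which is what the revived neural networks of the 1980s onward did.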
From ewall@umich.edu Sun Dec 22 12:04:01 2019
From: ewall@umich.edu (Edward Wall)
Date: Sun, 22 Dec 2019 14:04:01 -0600
Subject: [Xmca-l] Re: The ethics of artificial intelligence, past present and future
In-Reply-To:
References: <764B6D2E-6C90-4C9B-8E39-926FB4813AE0@umich.edu>
Message-ID: <3D7D1EC9-4FEB-4CAF-9534-E0C679D47DE0@umich.edu>

Annalisa

As I said, I happen to suspect that 'minds' are 'material' (the 's' on minds is not there by chance), from which it may well follow that 'ethics' is 'material'. However, those quotes around 'mind' and 'material' are there to indicate that I am not really sure what any of these terms actually mean within our email exchange, despite Andy's possible assertion that all is material.

As regards Dreyfus and Dreyfus, the Hubert Dreyfus of that time followed an interpretation of Heidegger as regards thinking, and there happens to be a jump in the process which some might call 'imagination.' In later years Dreyfus was called on this interpretation by some of his former graduate students, and I don't know for sure where he finally ended up.

Imagination was given to man to compensate him for what he is not, and a sense of humor was provided to console him for what he is.

As to your examples, I understand your argument, and the fact that you and I can see them makes them 'material' for me. I am not sure what I would say if you were hallucinating them. However, Scrooge in his pre-Christmas moments may well have been right. Anyway, I like your conclusions!

Ed
From ewall@umich.edu Sun Dec 22 12:20:29 2019
From: ewall@umich.edu (Edward Wall)
Date: Sun, 22 Dec 2019 14:20:29 -0600
Subject: [Xmca-l] Re: The ethics of artificial intelligence, past present and future
In-Reply-To:
References: <764B6D2E-6C90-4C9B-8E39-926FB4813AE0@umich.edu>
Message-ID: <308F7B73-E3D2-48E0-B19B-36B13BC63AF9@umich.edu>

Michael

Quite fascinating! I date roughly, 'programming-wise,' from that era. There were, as I think you indicate, a lot of side currents. There were also a number of 'rebellious experimenters' who would find their way into labs and then leave them to take other directions (indications are that this is still happening). A lot of this was made possible - the labs, that is - by 'free' defense department funding. When that dried up, commercial entities would step in. What you say resonates with my memories!

Thanks

Ed

Imagination was given to man to compensate him for what he is not, and a sense of humor was provided to console him for what he is.
From annalisa@unm.edu Sun Dec 22 13:35:20 2019
From: annalisa@unm.edu (Annalisa Aguilar)
Date: Sun, 22 Dec 2019 21:35:20 +0000
Subject: [Xmca-l] Re: The ethics of artificial intelligence, past present and future
In-Reply-To:
References: <764B6D2E-6C90-4C9B-8E39-926FB4813AE0@umich.edu>
Message-ID:

Hi Michael,

I mean to reply, and will do so later on. Thank you for your contribution. I want to discuss more the apparent split between research on the two coasts, because I did come up against that in my reading, and I wondered whether it was in *my imagination* or not. Apparently there was something going on in terms of orientation to what makes research. I'm keen to know more.

More soon...

Kind regards,

Annalisa
in Mexico and influenced by went on there. Freire was in residence for a good part of the time, Heinz von Foerster (who I am becoming convinced was the great genius of the time nobody knows about) gave a summer seminar down there. There is no real evidence for this, nobody wrote anything down, but I think there is a good argument to be made. Much of this sprung from the commune and DIY (do it yourself) movement. As the impact of the Internet became more apparent there was an attempt to gain control, witch AI being seen as something that was a product of the system rather than fought against it. Much of this battle emerged in to pages of Stuart Brands ?Co-evolution Quarterly? with people like Illich and Bateson taking the idea that our thinking follows the world and others trying to argue that you need elites (I know a lot of people don?t like this word) create systems that control thinking for the good of mankind. The magazine ?Wired? was started as a neo-liberal answer to Co-evolution Quarterly and of course had more money and more play in the press. To this day I never much trust people who refer to Wired as a source of information and opinion. On the East Coast there was the rise of the MIT media Lab, led by one of the Podhertz off-spring, the champions of neo-liberalism. I always wondered where they got their money and power and finally the Epstein saga has made me realized they would do anything for money. They also started the TED series, where you have ?great minds? telling you how they are developing systems that can make our lives better. Like Wired I don?t really trust TED. The Media Lab was/is supported by people who believe in the system. Anyway, the West Coast adventurers soon had nowhere to go. They believe electronic media was a tool to free humans, but the idea it was a tool to control humans (DIWYT ? Do what your told ? I just made that up) had all the money and all the power. They not only had funding from the government ? which probably was not that much ? they had funding from the rich, those who Epstein buzzed around, working as general con man. The way the media lab protected Epstein?s money, if not Epstein, suggests a very ugly chapter that we may never be able to read (Podhertz was Epstein?s chief apologist). I believe there is a story to be written here but I despair nobody will ever be allowed to write it. Michael From: xmca-l-bounces@mailman.ucsd.edu On Behalf Of Annalisa Aguilar Sent: Sunday, December 22, 2019 1:09 PM To: Bill Kerr ; eXtended Mind, Culture, Activity Subject: [Xmca-l] Re: The ethics of artificial intelligence, past present and future Hi Bill, Well, I think it has to do with the ethics. Not the scientific arguments. If I read Dreyfus correctly, there was a lot of back-channeling that stained many legitimate researchers in the area of Rosenblatt's work and this "witch hunting" dried up funding for anything related to his work. What I'm asserting is that there is something nefarious about the model of mind that is Cartesian, and that inherently it "protects" and legitimizes the downward slope of bad behavior. Given that Damasio has shown that we feel before we reason (see Descartes' Error), and we require sensing to make good decisions, if we deny sensing information, then we can rationalize whatever we want, essentially suppressing any "inner compass" that affords our own welfare. 
I may not be remembering this exactly, however the gist is that Damasio's patient, who suffered brain damage in the part of the brain that senses, possessed a disability that did not allow him not able to make good decisions, putting him in harm's way. He was a danger to himself. If a person has been brought up to deny sensing as important information by which to orient, then does it surprise that someone like Minsky would behave as he did? Maybe I am painting with too broad strokes, but I tend to intuit that we have within us mechanisms that allow us to err on the side of "first do no harm," like the way mirror neurons work. I believe that culture can either suppress or enhance this biological construct with which we are born. Though there can be those who are not born that way, I intuit that it's not a norm, otherwise we would see a lot of humans running off cliffs like lemmings. I'm arguing that a culture's model of mind can determine a lot about that culture's behavior. Like most models, if they are closer to the transactional world, then the models will be more "true," if they are less accurate, they will lead many astray. Consider how the model of hysteria came to be used to control women (and still is if we recollect the 2016 election) Descartes' model of mind was constructed as an expedient measure to protect scientific research from the draconian persecution of the Church, and a real threat to the lives of the European intellectual class. Now, the model no longer works, yet we still hold on to it. It's over 400 years old!!! We've since moved on from Newtonian physics, but not the mind/body split. I'm asserting that the model of mind/body split causes harm, and I'm illustrating how it might be so when we consider how it impacts something like the study of Artificial Intelligence and how it is that there is a danger in it when it comes to academic freedom. If you are willing to say that a scholar's research is different than his personal life when it comes to ethics, I don't accept it, because a person is a whole person and not divided, regardless if such person subscribes or orients to the mind/body split. Sure, it's entirely possible I'm off the mark, but I'm suggesting that just like racists models create certain interactions in society that are harmful, so it is with models of mind. The story in the Intercept confirms my point of view. Kind regards, Annalisa ________________________________ From: xmca-l-bounces@mailman.ucsd.edu > on behalf of Bill Kerr > Sent: Saturday, December 21, 2019 11:49 PM To: eXtended Mind, Culture, Activity > Subject: [Xmca-l] Re: The ethics of artificial intelligence, past present and future You have to wonder what computers would be like now if Rosenblatt had been able to pursue his work unfettered by Minsky and others from MIT back then. Academic freedom must be protected. On that I hope we can agree! Perceptrons was written by MInsky and Papert in 1969. Many have argued and I agree with them that their other work kicked off an extremely rich field of educational computing (call it the MIT group if you want), which persists in numerous branches today: Scratch3.0, AppInventor, Makey Makey all came out of MIT not to mention the associated theoretical work. I google Perceptrons and it confirmed what I thought from before: that it made legitimate criticisms of that path of research. It was a legitimate dispute between different approaches at that time. I can't evaluate myself because my maths isn't good enough. 
It is true that work on Perceptrons dried up for quite a while after that probably because no one could refute the critique by Minsky and Papert. Irrespective, in your words Annalisa we should support their academic freedom to argue their case. It is also true that parallel processing perceptrons / neural networks have achieved remarkable things in recent years. It seems that MInsky / Papert made a legitimate criticism which ended up sidelining what turned out to be another rich research field. Decades later Minsky, now dead, is accused of having sex with a 17yo at Epstein's compound when he was in his 70s. Therefore, what "Minsky and others" did before that is now suspect by association. Is that what you are arguing Annalisa? On Sun, Dec 22, 2019 at 2:37 PM Annalisa Aguilar > wrote: Hi Ed, Regarding Dreyfus, I don't recall him asserting the matter of mind or not, though it's been almost 10 years since I read the book. I am compelled to say that minds are material in the same way that stories are material. Consider a few analogies. The book is material, the words are printed ink on the paper of the pages, but without the book present the story will not manifest in the mind of a reader (as long as the book is written in the same language as the reader). Is the story not material if it is located in this book and not in that one? Also the story can exist outside the book, in the memories of a person, but the person is also material. The light in an electrical light bulb is there when the electricity passes through the filament, and not when the electricity is not there. We know thanks to Einstein that light is energetic material that travels really fast. The filament is gross material, the electricity is subtle as is the light, but the three are material. I assert that a mind too is subtle energy passing through a brain, which is a conglomerate of neuronal connections of grey matter. I see the physical and transactional world as material of infinitely different graded properties, subtle to gross, in different combinations of active qualities. In the same way the story resides in the book and the light resides, or emanates, from the light bulb the subtle permeates the gross. A more perfect illustration is the red hot iron ball. Iron and fire are in the same location, one is gross the other subtle. But both are material. What can happen however is if we do not know the properties of iron (heavy and round) or fire (red and hot) we can superimpose one element upon the other (i.e., assert that fire is heavy and round, while iron is hot and red) and this is easy to do because they are present in the same location perceptually; we cannot remove the iron from the fire or vice versa. (though it is possible if you are a blacksmith you can purge the iron in water, extinguishing the flame, I suppose, but you get my point, I hope.) With this in mind, is it possible to also assert that ethics is also a material entity? Whereby ethical conduct is that which possesses the most truth for the most harmony for the largest part of society while also holding the same for the individual. Can ethical conduct have universal laws like physics? If so, it might be an attainable goal to create the ethical algorithm. Yet, the weirdness enters when considering whether it is ethical to train computer to learn and improve an algorithm until it is "perfectly ethical", if what it needs to do to get there is to fail several times before it can actually become perfect. 
How many failures should there be before it's not ethical to continue training the computer?

I would say it's not ethical to do that if it means, for example, surveilling a population with face recognition technology until it is able to perfectly distinguish a criminal from his or her doppelganger. There will always be the risk of accusing an innocent person, which is not ethical.

Algorithms usually don't take context into consideration. I recall that Rosenblatt's work on perceptrons was a way to create context, by computers learning about contexts (by sensing). That might actually be safer than constructing algorithms.

You have to wonder what computers would be like now if Rosenblatt had been able to pursue his work unfettered by Minsky and others from MIT back then.

Academic freedom must be protected. On that I hope we can agree!

Kind regards,

Annalisa
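The risk Annalisa points to can be made concrete with base-rate arithmetic. A minimal sketch in Python, with every number assumed purely for illustration (they come from no cited study): even a matcher that is wrong only one time in a thousand, scanning a large population in which genuine targets are rare, produces far more false accusations than true identifications.

# Base-rate arithmetic for a hypothetical face-recognition dragnet.
# Every number below is an assumption chosen for illustration.

population = 1_000_000        # people scanned
true_targets = 100            # genuinely wanted individuals among them
sensitivity = 0.99            # P(flagged | target)
false_positive_rate = 0.001   # P(flagged | innocent)

true_pos = true_targets * sensitivity
false_pos = (population - true_targets) * false_positive_rate

precision = true_pos / (true_pos + false_pos)
print(f"innocent people flagged: {false_pos:,.0f}")
print(f"chance a flagged person is really a target: {precision:.1%}")
# About 1,000 innocents flagged against ~99 real targets: fewer than
# one flag in ten points at the right person, however accurate the
# matcher sounds in isolation.

Longer training can improve the sensitivity and false-positive rate at the margins, but so long as targets are rare, the flags stay dominated by innocents, which is the ethical point about failures during and after training.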
From haydizulfei@rocketmail.com Mon Dec 23 07:03:45 2019
From: haydizulfei@rocketmail.com (Haydi Zulfei)
Date: Mon, 23 Dec 2019 15:03:45 +0000 (UTC)
Subject: [Xmca-l] Re: The ethics of artificial intelligence, past present and future
In-Reply-To: References: <764B6D2E-6C90-4C9B-8E39-926FB4813AE0@umich.edu> Message-ID: <174716838.4801687.1577113425906@mail.yahoo.com>

Dear all,

Excuse me if I had to send my message in the attached form.

Best wishes,
Haydi

On Monday, December 23, 2019, 01:08:12 AM GMT+3:30, Annalisa Aguilar wrote:

Hi Michael,

I mean to reply but will do so later on. Thank you for your contribution.

I want to discuss more the apparent split between research on the two coasts, because I did come up against that in my reading, and I wondered whether it was in *my imagination* or not. Apparently there was something going on in terms of orientation to what makes research. I'm keen to know more.

More soon...

Kind regards,

Annalisa

From: xmca-l-bounces@mailman.ucsd.edu on behalf of Glassman, Michael
Sent: Sunday, December 22, 2019 12:06 PM
To: eXtended Mind, Culture, Activity
Subject: [Xmca-l] Re: The ethics of artificial intelligence, past present and future

Hi Annalisa and Ed,

I guess I hesitate to get involved in this discussion somewhat because it is based on work I was doing a couple of years ago. If I misremember or get something wrong, please let me know.
But based on my own understanding, the neoliberalization of AI goes much deeper than Rosenblatt and perceptrons, with some of the earliest work not having much to do with academics, and in some ways being a rebellion against academia.

I believe the major fissure was between the more idealistic explorers on the West Coast, in particular those who worked in Engelbart's Augmentation Research Center, the Homebrew Computer Club, and those who graduated to the early days of the Palo Alto Research Center. Some of the people who formed the Homebrew Computer Club were influenced by Illich and his ideas of breaking away from systematic thinking. Sort of an "every user a citizen of the world" type of mentality. I am pretty convinced from reading the history of the time that these early "programmers" went down to Illich's "language school" in Mexico and were influenced by what went on there. Freire was in residence for a good part of the time, and Heinz von Foerster (who I am becoming convinced was the great genius of the time nobody knows about) gave a summer seminar down there. There is no real evidence for this, nobody wrote anything down, but I think there is a good argument to be made. Much of this sprang from the commune and DIY (do it yourself) movement.

As the impact of the Internet became more apparent there was an attempt to gain control, with AI being seen as something that was a product of the system rather than something that fought against it. Much of this battle emerged in the pages of Stewart Brand's "Co-evolution Quarterly," with people like Illich and Bateson taking the idea that our thinking follows the world, and others arguing that you need elites (I know a lot of people don't like this word) to create systems that control thinking for the good of mankind. The magazine "Wired" was started as a neo-liberal answer to Co-evolution Quarterly and of course had more money and more play in the press. To this day I never much trust people who refer to Wired as a source of information and opinion.

On the East Coast there was the rise of the MIT Media Lab, led by one of the Podhertz offspring, the champions of neo-liberalism. I always wondered where they got their money and power, and finally the Epstein saga has made me realize they would do anything for money. They also started the TED series, where you have "great minds" telling you how they are developing systems that can make our lives better. Like Wired, I don't really trust TED. The Media Lab was/is supported by people who believe in the system.

Anyway, the West Coast adventurers soon had nowhere to go. They believed electronic media was a tool to free humans, but the idea that it was a tool to control humans (DIWYT: do what you're told; I just made that up) had all the money and all the power. They not only had funding from the government (which probably was not that much), they had funding from the rich, those whom Epstein buzzed around, working as a general con man. The way the Media Lab protected Epstein's money, if not Epstein, suggests a very ugly chapter that we may never be able to read (Podhertz was Epstein's chief apologist).

I believe there is a story to be written here, but I despair that nobody will ever be allowed to write it.

Michael
From sebastien.lerique@normalesup.org Mon Dec 23 12:05:19 2019
From: sebastien.lerique@normalesup.org (Sébastien Lerique)
Date: Mon, 23 Dec 2019 21:05:19 +0100
Subject: [Xmca-l] Re: The ethics of artificial intelligence, past present and future
In-Reply-To: References: Message-ID: <87h81qdczk.fsf@normalesup.org>

Dear all,

I'm jumping on the occasion created by Annalisa's email to announce a workshop I am organising which might interest some in this community: it is the "Embodied interactions, Languaging and the Dynamic Medium Workshop" (ELDM2020), an event gathering interests and works in embodiment, languaging, diversity computing and humane technologies, on **18th February in Lyon, France**. As is confirmed by the messages here, recent developments in these communities are ripe for focused conversations, and this workshop will be a coming-together for cross-pollination and explorations of possible common futures.

Invited speakers:
- Elena Clare Cuffari (Worcester State University)
- Mark Dingemanse (Radboud University)
- Omar Rizwan (Dynamicland.org)
- Jelle van Dijk (University of Twente)

There is an open call for proposals until 6th January 2020, and registration opens on 1st January. All the details are available on the main website: https://wehlutyk.gitlab.io/eldm2020/ . I will be delighted to answer any questions that might arise!

Best wishes,
Sébastien Lerique

PS: My sincere apologies for somewhat hijacking the thread with this announcement. I have in fact already sent this message three times to the list, and it seems to have been spam-filtered, as the announcements never went through.
From annalisa@unm.edu Tue Dec 24 10:47:40 2019
From: annalisa@unm.edu (Annalisa Aguilar)
Date: Tue, 24 Dec 2019 18:47:40 +0000
Subject: [Xmca-l] Re: The ethics of artificial intelligence, past present and future
In-Reply-To: <87h81qdczk.fsf@normalesup.org> References: <87h81qdczk.fsf@normalesup.org> Message-ID:

Hi Sébastien,

This sounds like a marvelous conference. I wish I could attend.

What does "languaging" mean?

I would certainly be interested in receiving copies of the papers presented, if not a list of titles so I might find them.

BTW, I have no issue with your announcement post on this thread.
Kind regards,

Annalisa
From rbeach@umn.edu Tue Dec 24 11:56:19 2019
From: rbeach@umn.edu (Richard Beach)
Date: Tue, 24 Dec 2019 13:56:19 -0600
Subject: [Xmca-l] Re: The ethics of artificial intelligence, past present and future
In-Reply-To: References: <87h81qdczk.fsf@normalesup.org> Message-ID: <5BD4FD22-3B13-43AB-88F9-D86B4DD5838B@umn.edu>

Annalisa asked, "What does 'languaging' mean?"

For more on "languaging" theory, a primary resource is Per Linell's 2009 book, Rethinking Language, Mind, and World Dialogically: Interactional and Contextual Theories of Human Sense-making. Charlotte, NC: Information Age Publishing.

For research on the application of languaging theory to teaching literacy, see Beach & Bloome (Eds.) (2019), Languaging Relations for Transforming the Literacy and Language Arts Classrooms (Routledge), and the resource website for Beach & Beauchemin (2019), Teaching Language as Action in the ELA Classroom (Routledge).
Richard Beach, Professor Emeritus of English Education, University of Minnesota
rbeach@umn.edu
Websites: Digital writing, Media literacy, Teaching literature, Identity-focused ELA Teaching, Common Core State Standards, Apps for literacy learning, Teaching about climate change, Teaching language as action
From annalisa@unm.edu Tue Dec 24 18:48:20 2019
From: annalisa@unm.edu (Annalisa Aguilar)
Date: Wed, 25 Dec 2019 02:48:20 +0000
Subject: [Xmca-l] Re: The ethics of artificial intelligence, past present and future
In-Reply-To: <5BD4FD22-3B13-43AB-88F9-D86B4DD5838B@umn.edu> References: <87h81qdczk.fsf@normalesup.org>, <5BD4FD22-3B13-43AB-88F9-D86B4DD5838B@umn.edu> Message-ID:

When I searched "languaging" I found this:

"A term coined by Swain (1985) relating to the cognitive process of negotiating and producing meaningful, comprehensible output as part of language learning."

Oddly, I've never heard this word before, though I've heard "speaking." I would like to know what the nuanced difference of this word-use is from "speaking."

Or is it the act of translating from one language to another, as in the bridging one does as one becomes fluent in another language? Is that it?

Or is it what one does when writing a poem, trying to make a rhyme or fit a phrase into the meter of the line?

Are we languaging now?

Kind regards,

Annalisa
From sebastien.lerique@normalesup.org Wed Dec 25 14:21:39 2019
From: sebastien.lerique@normalesup.org (Sébastien Lerique)
Date: Wed, 25 Dec 2019 23:21:39 +0100
Subject: [Xmca-l] Re: The ethics of artificial intelligence, past present and future
In-Reply-To: References: <87h81qdczk.fsf@normalesup.org> <5BD4FD22-3B13-43AB-88F9-D86B4DD5838B@umn.edu> Message-ID: <87a77gcah8.fsf@normalesup.org>

Delighted to read the great reactions!

I'm not yet familiar enough with what I imagine are a variety of uses of the term languaging (one of the reasons for which I wanted to organise this workshop), but my basic understanding, which seems to follow what Richard cited, is that it is a construal of language as mainly an activity (versus an abstract structure or system) that cannot be isolated from the concrete contexts in which it develops.

Another relevant reference is Di Paolo, Cuffari and De Jaegher (2018), "Linguistic Bodies" (or the shorter, introductory version: Cuffari, Di Paolo, & De Jaegher, 2015, "From Participatory Sense-Making to Language: There and Back Again"), who flesh out a proposal for a theory of languaging grounded in the enactive approach to cognition.

> I would certainly be interested in receiving copies of the papers presented, if not a list of titles so I might find them.

If all goes well I'll be recording all the talks too, and will send the link on this list once it's all up.

> BTW, I have no issue with your announcement post on this thread.

Thank you!

Best,
Sébastien

PS: I had first written on this list in August 2018 to gather thoughts about Dynamicland, and was rather overwhelmed by the answers. Email bankruptcy got the better part of my energy to answer, however, so I would like to apologise here for not having followed up at the time.
> I will be delighted to answer any questions that might arise!
>
> Best wishes,
> Sébastien Lerique
>
> PS: My sincere apologies for somewhat hijacking the thread with this announcement. I have in fact already sent this message three times to the list, and they seem to be spam-filtered, as the announcements never went through.
From hshonerd@gmail.com Wed Dec 25 14:35:44 2019
From: hshonerd@gmail.com (HENRY SHONERD)
Date: Wed, 25 Dec 2019 15:35:44 -0700
Subject: [Xmca-l] Re: The ethics of artificial intelligence, past present and future
In-Reply-To: <87a77gcah8.fsf@normalesup.org>
References: <87h81qdczk.fsf@normalesup.org> <5BD4FD22-3B13-43AB-88F9-D86B4DD5838B@umn.edu> <87a77gcah8.fsf@normalesup.org>
Message-ID: <289D758E-6F59-4E24-A2C8-C86DFAE293AC@gmail.com>

Happy holidays to all,

I've been following this thread with great interest. Thanks to Annalisa for kicking it off, after a long hiatus on the chat. An additional take on "languaging" is modality, including manual signing. This profiles how language is embodied and points to its gestural roots in its evolution.

Henry
From rbeach@umn.edu Wed Dec 25 18:26:46 2019
From: rbeach@umn.edu (Richard Beach)
Date: Wed, 25 Dec 2019 20:26:46 -0600
Subject: [Xmca-l] Re: The ethics of artificial intelligence, past present and future
In-Reply-To: <289D758E-6F59-4E24-A2C8-C86DFAE293AC@gmail.com>
References: <87h81qdczk.fsf@normalesup.org> <5BD4FD22-3B13-43AB-88F9-D86B4DD5838B@umn.edu> <87a77gcah8.fsf@normalesup.org> <289D758E-6F59-4E24-A2C8-C86DFAE293AC@gmail.com>
Message-ID: <7D6E4254-240C-45CC-8DFB-79419C00B8FD@umn.edu>

Drawing on Bakhtin, Kenneth Burke (1969), Alton Becker (1991), and others, as well as an "enactivist" perspective on language as a "medium" (Cowley, 2011; Gee, 2011), a languaging perspective switches the primary unit of analysis from the autonomous individual adhering to linguistic norms to a focus on languaging "co-actions" for enacting "in-between" meanings in relations with others (Bertau, 2014; Kim & Bloome, 2016; Linell, 2009; Linell & Markova, 2014). As noted by Henry, these languaging actions also include emotions and embodied actions (Bottineau, 2010; Jensen, 2014), leading to the enactment of "We-relationships" based on "psychological openness and the interpersonal smoothness to bond with people" (Cornejo, 2014, p. 247). Languaging action is "a movement between these selves, as forms in specific performances: the sensorial, experienced, perceived forms of the verbal performances in time and space" (Bertau, 2014, p. 530).

Students benefit from reflecting on the degree to which their languaging actions serve to enhance their "personhood" within relations. An analysis of Italian college students coping with difficulties employed narratives to reflect on their languaging relations with teachers and peers, contributing to growth over time (Esposito & Freda, 2016). This languaging perspective is related to the ethical dimensions under discussion on this listserv, given that languaging actions serve to enact "I-thou" relations (Buber, 1971) constituting supportive, trusting relations with others (Bloome & Beauchemin, 2016; Markova & Linell, 2014).
A languaging perspective also overlaps with, and draws on, translanguaging theory and research (Fu et al., 2019; García & Wei, 2014) that challenges notions of languages as distinct entities.

See the attached file for references/further readings, as well as the website for my Teaching Language as Action in the ELA Classroom.

Richard Beach, Professor Emeritus of English Education, University of Minnesota
rbeach@umn.edu
Websites: Digital writing, Media literacy, Teaching literature, Identity-focused ELA Teaching, Common Core State Standards, Apps for literacy learning, Teaching about climate change, Teaching language as action
-------------- next part --------------
A non-text attachment was scrubbed...
Name: Readings on languaging.docx
Type: application/vnd.openxmlformats-officedocument.wordprocessingml.document
Size: 20567 bytes
Desc: not available
Url : http://mailman.ucsd.edu/pipermail/xmca-l/attachments/20191225/663cefee/attachment.bin

From annalisa@unm.edu Thu Dec 26 12:45:05 2019
From: annalisa@unm.edu (Annalisa Aguilar)
Date: Thu, 26 Dec 2019 20:45:05 +0000
Subject: [Xmca-l] Re: The ethics of artificial intelligence, past present and future
In-Reply-To: <289D758E-6F59-4E24-A2C8-C86DFAE293AC@gmail.com>
References: <87h81qdczk.fsf@normalesup.org> <5BD4FD22-3B13-43AB-88F9-D86B4DD5838B@umn.edu> <87a77gcah8.fsf@normalesup.org> <289D758E-6F59-4E24-A2C8-C86DFAE293AC@gmail.com>
Message-ID:

Thank you, Henry, and venerable others,

I hope your holiday break is restful and refreshing.

And to Richard and Sébastien, thank you for listing references. That's marvelous. I will also be happy to learn about the conference and the resulting links forthwith!

Sébastien, I would be curious whether there will be discussion at your conference about how the word "languaging" came to be, and whether that might be documented in some fashion. It might be a good intro for others learning about the research for the first time.

If it has been in use since the 80s, I regret that your post is the first time I've heard it. I can't believe I wouldn't have heard of it, unless this is something newly arising? But perhaps I was just not looking in the right places, or I'm forgetting having seen it and am reading it in a new context on the list.
Regardless, it's a wonderful development, because this research seems to flow (historically) from developments concerning embodied thinking and other revelations in cognition, such as CHAT and distributed cognition.

Although my initial post concerns the ethics of AI, it's remarkable that the thread has turned to languaging, which is embodied, something that AI is not.

There will be more discussion about technology ethics and the social harms that arise from the neglect of very large companies such as Facebook, Google, and the like. Here is one such report from the Guardian:
https://www.theguardian.com/technology/2019/dec/26/too-big-to-fail-techs-decade-of-scale-and-impunity

I hope we can continue to discuss AI and ethics. I'm interested in what people have to contribute on that account. There is an inclination to say that technology is inevitable and that we can't uninvent AI (something I am not proposing). But now that AI is here, how do we prevent becoming inured to a lack of ethics in research and in the application of AI?

Kind regards,

A n n a l i s a
From hshonerd@gmail.com Thu Dec 26 17:08:29 2019
From: hshonerd@gmail.com (HENRY SHONERD)
Date: Thu, 26 Dec 2019 18:08:29 -0700
Subject: [Xmca-l] Re: The ethics of artificial intelligence, past present and future
In-Reply-To:
References: <87h81qdczk.fsf@normalesup.org> <5BD4FD22-3B13-43AB-88F9-D86B4DD5838B@umn.edu> <87a77gcah8.fsf@normalesup.org> <289D758E-6F59-4E24-A2C8-C86DFAE293AC@gmail.com>
Message-ID: <1934C7C5-28B2-4729-8061-2B232DFC6C64@gmail.com>

Annalisa, Ed, Andy, Sébastien, and Richard (in that order, if I remember how this thread has developed),

Clearly, to me, this thread is a beautiful example of how languaging can work, though I find it seldom works this well, especially in academia. Languaging is doing things with words, as Austin would have it; dialoguing, as Bakhtin would have it. I would encourage others to sample from Richard Beach's link, the website for his Teaching Language as Action in the ELA Classroom, to see how praxis can work in learning/teaching a second language.

I take to heart Annalisa's misgivings about AI. But consider this: AI has been with us at least as long as the printing press, at least in terms of "disruption". Some are afraid of a "singularity" when AI becomes general, is smarter than humans at everything, and then takes over. We couldn't have predicted the religious wars that arose out of the Gutenberg Bible and Martin Luther's 95 Theses, posted all over Europe. The inherent evil of the printing press is immanent in the same way that its virtues are. Technology is an existential threat and an existential promise. It's what we make of it. But I agree with Annalisa's basic premise: Descartes had it wrong, and unless we get that right, we are probably doomed.

On that cheery note, over and out. :)

Henry
From annalisa@unm.edu Thu Dec 26 19:33:21 2019
From: annalisa@unm.edu (Annalisa Aguilar)
Date: Fri, 27 Dec 2019 03:33:21 +0000
Subject: [Xmca-l] Re: The ethics of artificial intelligence, past present and future
In-Reply-To: <1934C7C5-28B2-4729-8061-2B232DFC6C64@gmail.com>
References: <87h81qdczk.fsf@normalesup.org> <5BD4FD22-3B13-43AB-88F9-D86B4DD5838B@umn.edu> <87a77gcah8.fsf@normalesup.org> <289D758E-6F59-4E24-A2C8-C86DFAE293AC@gmail.com> <1934C7C5-28B2-4729-8061-2B232DFC6C64@gmail.com>
Message-ID:

Thank you, Henry! I was not presenting a math proof, delivering a sermon, or presenting a case in a court of law. Just making some observations.

I don't think AI will become smarter than us. Is a calculator smarter than us because it does calculations faster? Technology is a tool, not a human. In a sense, the term Artificial Intelligence is a misnomer. It should be called something else, because intelligence is a property of living things, not machines (AFAIAC). Perhaps in time we will just call them computers again (which they did on Star Trek), like we always have. I don't consider my phone smart. It's just a phone with a camera and a computer in it, etc.

Are cars better than us because they can drive faster than we can walk? What about planes? No matter what transpires with computer technology, it will always remain a tool for a human in some capacity.

Let's consider the capitalist model where robots take over a large swath of the jobs humans do.
All that will happen is that no one will be able to afford the goods made by the robots. In a sense, the issue I have with the AI Winter is identical to the issue I have with capitalism: it comes down to the ethics involved, or the lack of them.

I also do not believe in "the singularity," because AI will never become general, only particular to discrete tasks. Will that be useful? Certainly, but only as a "smart tool," nothing more. Unless we believe that AI and robots can shortcut the long biological history of human evolution, I do not believe it will happen anytime soon.

I'm not really sure why we fear the substitution of people with computers. We still have to have people running them, just like with machine tools. Yes, there will be disruption, and that's why we should be mindful about ethics, because many people will be harmed if we don't consider what sort of society we want to have, instead of having one foisted upon us.

I don't think the printing press was considered evil. It's an object with no sense of right or wrong. It is true what you say, that they could not anticipate the Holy Wars, but that was not BECAUSE of the printing press; it was because of unethical positions taken by the Catholic Church and its corrupted power over society. The printing press just made it easier to communicate those transgressions.

Also, technology is not inevitable. It's so annoying to me when people say it is, as if we are passive in the face of technological developments. Just remember 8-track tapes, Betamax, and the Ford Pinto. There are ways that technology becomes irrelevant or is challenged. This is why a democracy is so important. Technology that can securely report votes should be what concerns us, as far as sophisticated technology we could use. Do we really need a colony on Mars?? Or do we believe that markets are the deciders of where we will go and who we will be?

Kind regards,

Annalisa

________________________________
From: xmca-l-bounces@mailman.ucsd.edu on behalf of HENRY SHONERD
Sent: Thursday, December 26, 2019 6:08 PM
To: eXtended Mind, Culture, Activity
Subject: [Xmca-l] Re: The ethics of artificial intelligence, past present and future

Annalisa, Ed, Andy, Sébastien, and Richard (in that order, if I remember how this thread has developed),

Clearly, to me, this thread is a beautiful example of how languaging can work, though I find it seldom works this well, especially in academia. Languaging is doing things with words, as Austin would have it; dialoguing, as Bakhtin would have it. I would encourage others to sample from Richard Beach's link, the website for his Teaching Language as Action in the ELA Classroom, to see how praxis can work in learning/teaching a second language.

I take to heart Annalisa's misgivings about AI. But consider this: AI has been with us at least as long as the printing press, at least in terms of "disruption." Some are afraid of a "singularity" when AI becomes general, is smarter than humans at everything, and then takes over. We couldn't have predicted the religious wars that arose out of the Gutenberg Bible and Martin Luther's 95 Theses, posted all over Europe. The inherent evil of the printing press is immanent in the same way that its virtues are. Technology is an existential threat and an existential promise. It's what we make of it.
But I agree with Annalisa's basic premise: Descartes had it wrong, and unless we get that right, we are probably doomed. On that cheery note... over and out. :)

Henry

On Dec 26, 2019, at 1:45 PM, Annalisa Aguilar wrote:

Thank you Henry and venerable others,

I hope your holiday break is restful and refreshing. And to Richard and Sébastien, thank you for listing references. That's marvelous. I will also be happy to learn about the conference and the links that come out of it!

Sébastien, I would be curious whether there will be a discussion at your conference about how the word "languaging" came to be, and whether that might be documented in some fashion. It might be a good intro for others learning about the research for the first time. If the word has been in use since the 80s, I regret that your post is the first time I've heard it. I can't believe I wouldn't have heard of it, unless this is something newly arising. But perhaps I was just not looking in the right places, or I'm forgetting having seen it and am reading it in the new context of the list.

Regardless, it's a wonderful development, because this research seems to flow (historically) from developments concerning embodied thinking and other revelations in cognition, such as CHAT and distributed cognition.

Although my initial post concerned the ethics of AI, it's remarkable that the thread has turned to languaging, which is embodied, something that AI is not.

There will be more discussion about technology ethics and the social harms that arise from the neglect of very large companies such as Facebook, Google, and the like. Here is one such report from the Guardian: https://www.theguardian.com/technology/2019/dec/26/too-big-to-fail-techs-decade-of-scale-and-impunity

I hope we can continue to discuss AI and ethics. I'm interested in what people have to contribute on that account. There is an inclination to say that technology is inevitable and that we can't uninvent AI (not that I am proposing we do). But now that AI is here, how do we keep from becoming inured to a lack of ethics in AI research and its applications?

Kind regards,

A n n a l i s a

________________________________
From: xmca-l-bounces@mailman.ucsd.edu on behalf of HENRY SHONERD
Sent: Wednesday, December 25, 2019 3:35 PM
To: eXtended Mind, Culture, Activity
Subject: [Xmca-l] Re: The ethics of artificial intelligence, past present and future

Happy holidays to all,

I've been following this thread with great interest. Thanks to Annalisa for kicking it off after a long hiatus on the chat. An additional take on "languaging" is modality, including manual signing. This profiles how language is embodied and points to the gestural roots of its evolution.

Henry

> On Dec 25, 2019, at 3:21 PM, Sébastien Lerique wrote:
>
> Delighted to read the great reactions!
>
> I'm not yet familiar enough with what I imagine are a variety of uses of the term languaging (one of the reasons for which I wanted to organise this workshop), but my basic understanding, which seems to follow what Richard cited, is that it is a construal of language as mainly an activity (versus an abstract structure or system) that cannot be isolated from the concrete contexts in which it develops.
>
> Another relevant reference is Di Paolo, Cuffari, and De Jaegher (2018), "Linguistic Bodies" (or the shorter, introductory version: Cuffari, Di Paolo, & De Jaegher, 2015, "From Participatory Sense-Making to Language: There and Back Again"), who flesh out a proposal for a theory of languaging grounded in the enactive approach to cognition.
> >> I would certainly be interested in receiving copies of the papers presented, if not a list of titles so I might find them.
>
> If all goes well I'll be recording all the talks too, and will send the link on this list once it's all up.
>
> >> BTW, I have no issue with your announcement post on this thread.
>
> Thank you!
>
> Best,
> Sébastien
>
> PS: I had first written on this list in August 2018 to gather thoughts about Dynamicland, and was rather overwhelmed by the answers. Email bankruptcy got the better of my energy to answer, however, so I would like to apologise here for not having followed up at the time.
>
> Annalisa Aguilar writes:
>
>> When I searched "languaging" I found this:
>>
>> "A term coined by Swain (1985) relating to the cognitive process of negotiating and producing meaningful, comprehensible output as part of language learning."
>>
>> Oddly, I've never heard this word before; I've heard "speaking," though. I would like to know how its nuances differ from those of "speaking."
>>
>> Or is it the act of translating from one language to another, as in the bridging one does as one becomes fluent in another language? Is that it?
>>
>> Or is it what one does when one is writing a poem and trying to make a rhyme, or to fit a phrase into the meter of a line?
>>
>> Are we languaging now?
>>
>> Kind regards,
>>
>> Annalisa
>>
>> ------------------------------------------------------------------------------------------------------
>> From: xmca-l-bounces@mailman.ucsd.edu on behalf of Richard Beach
>> Sent: Tuesday, December 24, 2019 12:56 PM
>> To: eXtended Mind, Culture, Activity
>> Subject: [Xmca-l] Re: The ethics of artificial intelligence, past present and future
>>
>> Annalisa asked, "What does 'languaging' mean?"
>>
>> For more on "languaging" theory, a primary resource is Per Linell's 2009 book, Rethinking Language, Mind, and World Dialogically: Interactional and Contextual Theories of Human Sense-making. Charlotte, NC: Information Age Publishing.
>>
>> For research on the application of languaging theory to teaching literacy, see Beach & Bloome (Eds.) (2019), Languaging Relations for Transforming the Literacy and Language Arts Classrooms (Routledge), and the resource website for Beach & Beauchemin (2019), Teaching Language as Action in the ELA Classroom (Routledge).
>>
>> Richard Beach, Professor Emeritus of English Education, University of Minnesota
>> rbeach@umn.edu
>> Websites: Digital writing, Media literacy, Teaching literature, Identity-focused ELA Teaching, Common Core State Standards, Apps for literacy learning, Teaching about climate change, Teaching language as action
>>
>> On Dec 24, 2019, at 12:47 PM, Annalisa Aguilar wrote:
>>
>> Hi Sébastien,
>>
>> This sounds like a marvelous conference. I wish I could attend.
>>
>> What does "languaging" mean?
>>
>> I would certainly be interested in receiving copies of the papers presented, if not a list of titles so I might find them.
>>
>> BTW, I have no issue with your announcement post on this thread.
>> Kind regards,
>>
>> Annalisa
>>
>> ------------------------------------------------------------------------------------------------------
>> From: xmca-l-bounces@mailman.ucsd.edu on behalf of Sébastien Lerique
>> Sent: Monday, December 23, 2019 1:05 PM
>> To: eXtended Mind, Culture, Activity
>> Subject: [Xmca-l] Re: The ethics of artificial intelligence, past present and future
>>
>> Dear all,
>>
>> I'm jumping on the occasion created by Annalisa's email to announce a workshop I am organising which might interest some in this community: it is the "Embodied interactions, Languaging and the Dynamic Medium Workshop" (ELDM2020), an event gathering interests and works in embodiment, languaging, diversity computing and humane technologies, on **18th February in Lyon, France**. As is confirmed in the messages here, recent developments in these communities are ripe for focused conversations, and this workshop will be a coming-together for cross-pollination and explorations of possible common futures.
>>
>> Invited speakers:
>> - Elena Clare Cuffari (Worcester State University)
>> - Mark Dingemanse (Radboud University)
>> - Omar Rizwan (Dynamicland.org)
>> - Jelle van Dijk (University of Twente)
>>
>> There is an open call for proposals until 6th January 2020, and registration opens on 1st January. All the details are available on the main website: https://wehlutyk.gitlab.io/eldm2020/ . I will be delighted to answer any questions that might arise!
>>
>> Best wishes,
>> Sébastien Lerique
>>
>> PS: My sincere apologies for somewhat hijacking the thread with this announcement. I have in fact already sent this message three times to the list, and they seem to have been spam-filtered, as the announcements never went through.