[Xmca-l] Re: The ethics of artificial intelligence, past present and future

Annalisa Aguilar annalisa@unm.edu
Sun Dec 22 10:08:34 PST 2019


Hi Bill,

Well, I think it has to do with the ethics, not the scientific arguments. If I read Dreyfus correctly, there was a lot of back-channeling that stained many legitimate researchers in the area of Rosenblatt's work, and this "witch hunting" dried up funding for anything related to his work.

What I'm asserting is that there is something nefarious about the model of mind that is Cartesian, and that inherently it "protects" and legitimizes the downward slope of bad behavior.

Given that Damasio has shown that we feel before we reason (see Descartes' Error), and that we require sensing to make good decisions, if we deny sensing information, then we can rationalize whatever we want, essentially suppressing any "inner compass" that affords our own welfare. I may not be remembering this exactly, but the gist is that Damasio's patient, who suffered damage in the part of the brain that senses, was left unable to make good decisions, putting him in harm's way. He was a danger to himself.

If a person has been brought up to deny sensing as important information by which to orient, then is it any surprise that someone like Minsky would behave as he did? Maybe I am painting in too-broad strokes, but I tend to intuit that we have within us mechanisms that allow us to err on the side of "first do no harm," like the way mirror neurons work.

I believe that culture can either suppress or enhance this biological construct with which we are born. Though there can be those who are not born that way, I intuit that it's not the norm; otherwise, we would see a lot of humans running off cliffs like lemmings.

I'm arguing that a culture's model of mind can determine a lot about that culture's behavior. Like most models, if they are closer to the transactional world, then they will be more "true"; if they are less accurate, they will lead many astray.

Consider how the model of hysteria came to be used to control women (and still is, if we recollect the 2016 election).

Descartes' model of mind was constructed as an expedient measure to protect scientific research from the draconian persecution of the Church, which was a real threat to the lives of the European intellectual class. Now the model no longer works, yet we still hold on to it. It's over 400 years old!!! We've since moved on from Newtonian physics, but not from the mind/body split.

I'm asserting that the model of the mind/body split causes harm, and I'm illustrating how that might be so by considering how it impacts something like the study of Artificial Intelligence, and the danger it poses when it comes to academic freedom.

If you are willing to say that a scholar's research is different from his personal life when it comes to ethics, I don't accept that, because a person is a whole person and not divided, regardless of whether such a person subscribes or orients to the mind/body split.

Sure, it's entirely possible I'm off the mark, but I'm suggesting that just as racist models create certain interactions in society that are harmful, so it is with models of mind. The story in the Intercept confirms my point of view.

Kind regards,

Annalisa
________________________________
From: xmca-l-bounces@mailman.ucsd.edu <xmca-l-bounces@mailman.ucsd.edu> on behalf of Bill Kerr <billkerr@gmail.com>
Sent: Saturday, December 21, 2019 11:49 PM
To: eXtended Mind, Culture, Activity <xmca-l@mailman.ucsd.edu>
Subject: [Xmca-l] Re: The ethics of artificial intelligence, past present and future

You have to wonder what computers would be like now if Rosenblatt had been able to pursue his work unfettered by Minsky and others from MIT back then.

Academic freedom must be protected. On that I hope we can agree!

Perceptrons was written by Minsky and Papert in 1969. Many have argued, and I agree with them, that their other work kicked off an extremely rich field of educational computing (call it the MIT group if you want), which persists in numerous branches today: Scratch 3.0, AppInventor, and Makey Makey all came out of MIT, not to mention the associated theoretical work.

I googled Perceptrons and it confirmed what I thought from before: that it made legitimate criticisms of that path of research. It was a legitimate dispute between different approaches at that time. I can't evaluate it myself because my maths isn't good enough.
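
For those who do want the maths, the textbook illustration of their critique is that a single-layer perceptron can learn AND but can never learn XOR, because no single straight line separates XOR's positive and negative cases. A minimal sketch in Python (my own illustration, not Minsky and Papert's notation):

# Rosenblatt's perceptron learning rule on two tiny truth tables.
# It converges on AND but never on XOR, which is not linearly separable.

def train(samples, epochs=100):
    w1, w2, b = 0.0, 0.0, 0.0
    for _ in range(epochs):
        errors = 0
        for (x1, x2), target in samples:
            out = 1 if w1 * x1 + w2 * x2 + b > 0 else 0
            err = target - out
            if err != 0:
                errors += 1
                w1 += err * x1
                w2 += err * x2
                b += err
        if errors == 0:
            return True   # converged: a separating line was found
    return False          # no convergence within the epoch budget

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

print("AND learnable:", train(AND))   # True
print("XOR learnable:", train(XOR))   # False

Run as-is, the AND case settles after a few epochs and the XOR case never does, which is the gist of what they proved in general.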

It is true that work on perceptrons dried up for quite a while after that, probably because no one could refute the critique by Minsky and Papert. Regardless, in your words, Annalisa, we should support their academic freedom to argue their case.

It is also true that parallel-processing perceptrons / neural networks have achieved remarkable things in recent years. It seems that Minsky / Papert made a legitimate criticism which ended up sidelining what turned out to be another rich research field.

Decades later, Minsky, now dead, is accused of having had sex with a 17-year-old at Epstein's compound when he was in his 70s.

Therefore, what "Minsky and others" did before that is now suspect by association. Is that what you are arguing, Annalisa?




On Sun, Dec 22, 2019 at 2:37 PM Annalisa Aguilar <annalisa@unm.edu> wrote:
Hi Ed,

Regarding Dreyfus, I don't recall him asserting whether mind is material or not, though it's been almost 10 years since I read the book.

I am compelled to say that minds are material in the same way that stories are material.

Consider a few analogies.

The book is material; the words are printed ink on the paper of its pages; but without the book present, the story will not manifest in the mind of a reader (provided the book is written in a language the reader knows). Is the story not material if it is located in this book and not in that one? Also, the story can exist outside the book, in the memories of a person, but the person is also material.

The light in an electric light bulb is there when electricity passes through the filament, and not when the electricity is absent. We know, thanks to Einstein, that light is energetic material that travels very fast. The filament is gross material; the electricity is subtle, as is the light; but all three are material.

I assert that a mind too is subtle energy passing through a brain, which is a conglomerate of neuronal connections of grey matter.

I see the physical and transactional world as material of infinitely graded properties, from subtle to gross, in different combinations of active qualities. In the same way that the story resides in the book and the light resides in, or emanates from, the light bulb, the subtle permeates the gross.

A more perfect illustration is the red-hot iron ball. Iron and fire are in the same location; one is gross, the other subtle. But both are material. What can happen, however, is that if we do not know the properties of iron (heavy and round) or fire (red and hot), we can superimpose one element upon the other (i.e., assert that fire is heavy and round, while iron is hot and red), and this is easy to do because they are present in the same location perceptually; we cannot remove the iron from the fire or vice versa. (Though if you are a blacksmith, I suppose you can quench the iron in water, extinguishing the fire, but you get my point, I hope.)

With this in mind, is it possible to assert that ethics is also a material entity, whereby ethical conduct is that which possesses the most truth and the most harmony for the largest part of society while also holding the same for the individual?

Can ethical conduct have universal laws like physics? If so, it might be an attainable goal to create the ethical algorithm. Yet the weirdness enters when considering whether it is ethical to train a computer to learn and improve an algorithm until it is "perfectly ethical," if what it needs to do to get there is to fail several times before it can actually become perfect. How many failures should there be before it's not ethical to continue training the computer?

I would say it's not ethical to do that if it means, for example, surveilling a population with face-recognition technology until it is able to perfectly identify a criminal from his or her doppelganger. There will always be the risk of accusing an innocent person, which is not ethical.
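
To put rough numbers on that risk (all figures hypothetical, a back-of-the-envelope sketch only):

# Why face surveillance keeps flagging innocents: toy numbers.
population = 1_000_000    # people scanned
criminals = 100           # actual wanted persons among them
sensitivity = 0.999       # chance the system flags a real criminal
specificity = 0.999       # chance it correctly passes an innocent

true_hits = criminals * sensitivity                          # ~100
false_alarms = (population - criminals) * (1 - specificity)  # ~1000

precision = true_hits / (true_hits + false_alarms)
print(f"innocents flagged: {false_alarms:.0f}")             # ~1000
print(f"share of flags that are genuine: {precision:.1%}")  # ~9%

Even at 99.9% accuracy on both counts, about a thousand innocent people get flagged for every hundred real matches, so the system's "failures" land on real persons.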

Algorithms usually don't take context into consideration. I recall that Rosenblatt's work on perceptrons was a way to create context, by having computers learn about contexts (by sensing). That actually might be safer than constructing algorithms.
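
As a toy contrast (my own sketch, not Rosenblatt's actual algorithm), a hand-constructed rule bakes its threshold in forever, while a rule learned from sensed examples re-derives it from whatever context it is given:

# Hand-constructed rule: the threshold is fixed in advance.
def fixed_rule(x):
    return 1 if x > 5.0 else 0

# Learned rule: estimate the threshold from labeled examples
# drawn from the current context.
def learn_threshold(examples):
    ones = [x for x, label in examples if label == 1]
    zeros = [x for x, label in examples if label == 0]
    return (max(zeros) + min(ones)) / 2   # midpoint between classes

# Context A: the classes split around 5, as the fixed rule assumes.
ctx_a = [(3, 0), (4, 0), (6, 1), (7, 1)]
# Context B: the same task, but the split has drifted to around 10.
ctx_b = [(8, 0), (9, 0), (11, 1), (12, 1)]

t = learn_threshold(ctx_b)
print([fixed_rule(x) for x, _ in ctx_a])        # [0, 0, 1, 1] -- fine here
print([fixed_rule(x) for x, _ in ctx_b])        # [1, 1, 1, 1] -- wrong
print([1 if x > t else 0 for x, _ in ctx_b])    # [0, 0, 1, 1] -- adapts

When the context shifts, the fixed rule quietly goes wrong while the learned one adapts; that is roughly the advantage I am attributing to learning by sensing.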

You have to wonder what computers would be like now if Rosenblatt had been able to pursue his work unfettered by Minsky and others from MIT back then.

Academic freedom must be protected. On that I hope we can agree!

Kind regards,

Annalisa

________________________________
From: xmca-l-bounces@mailman.ucsd.edu <xmca-l-bounces@mailman.ucsd.edu> on behalf of Edward Wall <ewall@umich.edu>
Sent: Saturday, December 21, 2019 2:37 PM
To: eXtended Mind, Culture, Activity <xmca-l@mailman.ucsd.edu>
Subject: [Xmca-l] Re: The ethics of artificial intelligence, past present and future


Annalisa

     In my read, when Dreyfus wrote the book you reference, he believed that 'mind' was neither 'material' nor 'mental.' On the other hand, I have often wondered if 'minds' aren't 'material.'

Ed Wall

Imagination was given to man to compensate him for what he is not, and a sense of humor was provided to console him for what he is.

On Dec 21, 2019, at 1:22 PM, Annalisa Aguilar <annalisa@unm.edu> wrote:

Hello fellow and distant XMCArs,

So today I saw this in the Intercept and thought I would share for your awareness, because of the recent developments that likely impact you, namely:

  *   the neoliberalization of higher academic learning
  *   the compromise of privacy and civil life in the US and other countries
  *   the (apparently) hidden agenda of technology as it hard-wires biases and control over women, minorities, and other vulnerable people to reproduce past prejudices and power structures.

In my thesis I discuss historical models of mind and how they inform technology design. While reading for my thesis, I was always bothered by the story of the AI Winter.

Marvin Minsky, an "august" researcher from the MIT labs of that period, had discredited Frank Rosenblatt's work on perceptrons (which was reborn in the neural networks of the 1980s to early aughts). That act basically neutralized funding of legitimate research in AI and, through vicious academic politics, stymied anyone doing research even smelling like perceptrons. Frank Rosenblatt died in 1971, likely feeling disgraced and ruined, never knowing the outcome of his life's work. It is a nightmare no academic would ever want.

Thanks to Hubert Dreyfus, we know this story, which is discussed in What Computers Still Can't Do: https://mitpress.mit.edu/books/what-computers-still-cant-do

Well, it turns out that Minsky has been allegedly tied up with Jeffrey Epstein and his exploitation of young women.

This was recently reported in an article by Rodrigo Ochigame of Brazil, who was a student of Joichi Ito, who ran the MIT Media Lab. We know that Ito's projects were funded by none other than Epstein, and this revelation forced Ito's resignation. Read about it here: https://theintercept.com/2019/12/20/mit-ethical-ai-artificial-intelligence/

I have not finished reading the article, because I had to stop just to pass this on to the list.

One might say that computer technology is by its very nature going to reproduce power structures, but I would rather say that our mental models are not serving us in creating the technological tools we require to build an equitable society. How else can we free the tools from the power structures, if the only people who use them are those who perpetuate privilege and cheat, for example by thwarting academic freedom in the process? How can we develop equality in society if the tools we create come from inequitable motivations and interactions? Is it even possible?

As I see it, the ethics at MIT Labs reveal concretely how the Cartesian model of mind basically normalizes the mind of the privileged, and why only a holistic mental model provides safeguards against the biases that lead to these abuses. Models such as distributed cognition, CHAT, and similar constructs intertwine the threads of thought with the body, culture, history, tool-use, language, and society, because these models encapsulate how environment develops mind, which in turn develops environment, and so on. Mind is not separate; in a certain sense, mind IS material and not disembodied. It is when mind is portrayed otherwise that the means of legitimizing abuse is given the nutrition to germinate unchecked.

I feel an odd confirmation, as much as I am horrified to learn of this new alleged connection of Minsky to Epstein, of the ways in which, as a society, we fool ourselves with these hyper-rational models that only reproduce abusive power structures.

That is how it is done.

It might also be a reminder to anyone who has been unethical that history has a way of revealing past deeds. Justice does come, albeit slowly.

Kind regards as we near the end of 2019,

Annalisa
