[Xmca-l] Re: The ethics of artificial intelligence, past present and future
Andy Blunden
andyb@marxists.org
Sat Dec 21 22:42:22 PST 2019
What is /not/ material?
------------------------------------------------------------
*Andy Blunden*
Hegel for Social Movements <https://brill.com/view/title/54574>
Home Page <https://www.ethicalpolitics.org/ablunden/index.htm>
On 22/12/2019 4:03 pm, Annalisa Aguilar wrote:
> Hi Ed,
>
> Regarding Dreyfus, I don't recall him asserting the matter
> of mind or not, though it's been almost 10 years since I
> read the book.
>
> I am compelled to say that minds are material in the same
> way that stories are material.
>
> Consider a few analogies.
>
> The book is material: the words are printed ink on the
> paper of the pages, but without the book present the story
> will not manifest in the mind of a reader (provided the
> book is written in a language the reader knows). Is the
> story any less material if it is located in this book and
> not in that one? The story can also exist outside the
> book, in the memories of a person, but the person too is
> material.
>
> The light in an electric light bulb is there when
> electricity passes through the filament, and not when the
> electricity is absent. We know, thanks to Einstein, that
> light is energetic matter that travels very fast. The
> filament is gross material; the electricity is subtle, as
> is the light; but all three are material.
>
> I assert that a mind too is subtle energy passing through
> a brain, which is a conglomerate of neuronal connections
> of grey matter.
>
> I see the physical and transactional world as material of
> infinitely graded properties, subtle to gross, in
> different combinations of active qualities. In the same
> way that the story resides in the book, and the light
> resides in, or emanates from, the light bulb, the subtle
> permeates the gross.
>
> A clearer illustration is the red-hot iron ball. Iron and
> fire are in the same location; one is gross, the other
> subtle, but both are material. What can happen, however,
> is that if we do not know the properties of iron (heavy
> and round) or of fire (red and hot), we can superimpose
> one element upon the other (i.e., assert that fire is
> heavy and round, while iron is hot and red), and this is
> easy to do because they occupy the same location
> perceptually; we cannot remove the iron from the fire or
> vice versa. (Though if you are a blacksmith you can quench
> the iron in water, extinguishing the flame, I suppose, but
> you get my point, I hope.)
>
> With this in mind, is it possible to assert that ethics
> too is a material entity, whereby ethical conduct is that
> which yields the most truth and the most harmony for the
> largest part of society, while doing the same for the
> individual?
>
> Can ethical conduct have universal laws, like physics? If
> so, it might be an attainable goal to create the ethical
> algorithm. Yet weirdness enters when we consider whether
> it is ethical to train a computer to learn and improve an
> algorithm until it is "perfectly ethical," if what it
> needs to do to get there is to fail several times before
> it can actually become perfect. How many failures should
> there be before it is no longer ethical to continue
> training the computer?
>
> I would say it's not ethical to do that, if it means for
> example surveilling a population with face recognition
> technology until it is able to perfectly identify a
> criminal from his or her doppelganger. There will always
> be the risk of accusing an innocent person, which is not
> ethical.
>
> Algorithms usually don't take context into consideration.
> I recall that Rosenblatt's perceptrons were a way to
> create context by having computers learn about contexts
> (by sensing). That might actually be safer than
> hand-constructing algorithms.
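>
> For anyone curious, the perceptron's learning rule is
> simple enough to sketch in a few lines (a minimal
> illustration only; the function and variable names are
> mine, not from any historical implementation):

```python
# Minimal sketch of Rosenblatt's perceptron learning rule.
# A perceptron outputs 1 if the weighted sum of its inputs
# exceeds a threshold (folded here into a bias term), else 0.
# On each error it nudges the weights toward the right answer.

def predict(weights, bias, x):
    s = bias + sum(w * xi for w, xi in zip(weights, x))
    return 1 if s > 0 else 0

def train(samples, epochs=20, lr=0.1):
    weights = [0.0] * len(samples[0][0])
    bias = 0.0
    for _ in range(epochs):
        for x, target in samples:
            error = target - predict(weights, bias, x)
            # Update only when the prediction is wrong.
            weights = [w + lr * error * xi for w, xi in zip(weights, x)]
            bias += lr * error
    return weights, bias

# Learn logical AND, a linearly separable function a single
# perceptron can represent (XOR, famously, it cannot — the
# limitation Minsky and Papert made much of).
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
w, b = train(data)
print([predict(w, b, x) for x, _ in data])  # [0, 0, 0, 1]
```

> The point of contention in the AI Winter story was exactly
> this: a single-layer device like the one above cannot learn
> non-separable functions, though multi-layer networks later
> could.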
>
> You have to wonder what computers would be like now if
> Rosenblatt had been able to pursue his work unfettered by
> Minsky and others from MIT back then.
>
> Academic freedom must be protected. On that I hope we can
> agree!
>
> Kind regards,
>
> Annalisa
>
> ------------------------------------------------------------
> *From:* xmca-l-bounces@mailman.ucsd.edu
> <xmca-l-bounces@mailman.ucsd.edu> on behalf of Edward Wall
> <ewall@umich.edu>
> *Sent:* Saturday, December 21, 2019 2:37 PM
> *To:* eXtended Mind, Culture, Activity
> <xmca-l@mailman.ucsd.edu>
> *Subject:* [Xmca-l] Re: The ethics of artificial
> intelligence, past present and future
>
>
> Annalisa
>
> In my read, when Dreyfus wrote the book you reference, he
> believed that ‘mind’ was neither ‘material’ nor ‘mental’.
> On the other hand, I have often wondered if ‘minds’
> aren’t ‘material’.
>
> Ed Wall
>
> Imagination was given to man to compensate him for what he
> is not, and a sense of humor was provided to console him
> for what he is.
>
>> On Dec 21, 2019, at 1:22 PM, Annalisa Aguilar
>> <annalisa@unm.edu <mailto:annalisa@unm.edu>> wrote:
>>
>> Hello fellow and distant XMCArs,
>>
>> So today I saw this in the Intercept and thought I would
>> share for your awareness, because of the recent
>> developments that likely impact you, namely:
>>
>> * the neoliberalization of higher academic learning
>> * the compromise of privacy and civil life in the US
>> and other countries
>> * the (apparently) hidden agenda of technology as it
>> hard-wires biases and control over women, minorities,
>> and other vulnerable people to reproduce past
>> prejudices and power structures.
>>
>> In my thesis I discuss historical mental models of mind
>> and how they inform technology design. During reading for
>> my thesis I had always been bothered about the story of
>> the AI Winter.
>>
>> Marvin Minsky, an "august" researcher from the MIT labs of
>> that period, discredited Frank Rosenblatt's work on
>> perceptrons (which was reborn in the neural networks of
>> the 1980s to early 2000s). That act effectively cut off
>> funding for legitimate research in AI and, through vicious
>> academic politics, stymied anyone doing research that even
>> smelled like perceptrons. Frank Rosenblatt died in 1971,
>> likely feeling disgraced and ruined, never knowing the
>> outcome of his life's work. It is a nightmare no academic
>> would ever want.
>>
>> Thanks to Hubert Dreyfus, we know this story, which is
>> discussed in What Computers Still Can't Do:
>> https://mitpress.mit.edu/books/what-computers-still-cant-do
>>
>> Well, it turns out that Minsky has allegedly been tied to
>> Jeffrey Epstein and his exploitation of young women.
>>
>> This has been recently reported in an article by Rodrigo
>> Ochigame of Brazil, a former student of Joichi Ito, who
>> ran the MIT Media Lab. We know that Ito's projects were
>> funded by none other than Epstein, and this revelation
>> forced Ito's resignation. Read about it here:
>> https://theintercept.com/2019/12/20/mit-ethical-ai-artificial-intelligence/
>>
>> I have not completed reading the article, because I had
>> to stop just to pass this on to the list, to share.
>>
>> One might say that computer technology is by its very
>> nature going to reproduce power structures, but I would
>> rather say that our mental models are not serving us in
>> creating the technological tools we require for an
>> equitable society. How else can we free the tools from
>> the power structures, if the only people who use them are
>> those who perpetuate privilege and cheat, for example by
>> thwarting academic freedom in the process? How can we
>> develop equality in society if the tools we create come
>> from inequitable motivations and interactions? Is it even
>> possible?
>>
>> As I see it, the ethics at the MIT Media Lab reveals
>> concretely how the Cartesian model of mind normalizes the
>> mind of the privileged, and why only a holistic mental
>> model provides safeguards against the biases that lead to
>> these abuses. Models such as distributed cognition, CHAT,
>> and similar constructs intertwine the threads of thought
>> with the body, culture, history, tool-use, language, and
>> society, because these models capture how environment
>> develops mind, which in turn develops environment, and so
>> on. Mind is not separate; in a certain sense, mind IS
>> material and not disembodied. It is when mind is portrayed
>> otherwise that the means of legitimizing abuse is given
>> the nutrition to germinate unchecked.
>>
>> I feel an odd confirmation, as much as I am horrified to
>> learn of this new alleged connection of Minsky to Epstein,
>> of how, as a society, we fool ourselves with these
>> hyper-rational models, which only reproduce abusive power
>> structures.
>>
>> That is how it is done.
>>
>> It might also be a reminder to anyone who has been
>> unethical how history has a way of revealing past deeds.
>> Justice does come, albeit slowly.
>>
>> Kind regards as we near the end of 2019,
>>
>> Annalisa
>