https://subscriber.politicopro.com/article/eenews/2023/07/06/a-faster-supercomputer-will-help-scientists-assess-the-risk-of-controlling-sunlight-00104815
Impoverished people, who often systemically suffer from racism as part of what impoverishes them, can, and in some ways already do, suffer further from machine learning technology pressed against them.
The reason I brought up "nonpolitical" as a common techbro conceit elsewhere applies here: they can claim (and already do) that it's "just nonpolitical, objective data" saying that poor minorities are poor because they are minorities.
Only someone completely boneheaded and ignorant of the nature of statistics would conclude, from data showing that black people are imprisoned more, that black people are more criminal. So I am not surprised many tech people think that.
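To make the confounding concrete, here's a minimal sketch (Python, with entirely made-up rates) of how enforcement intensity alone can make arrest data look like a difference in offending, even when the underlying offense rates are identical by construction:

    # Hypothetical simulation: equal underlying offense rates, unequal enforcement.
    # Every number here is invented purely for illustration.
    import random

    random.seed(0)

    TRUE_OFFENSE_RATE = 0.05                  # identical for both groups by construction
    DETECTION_RATE = {"A": 0.10, "B": 0.30}   # group B is policed 3x as heavily
    POPULATION = 100_000

    for group, detection in DETECTION_RATE.items():
        # Who actually offends (same process for both groups)...
        offended = sum(random.random() < TRUE_OFFENSE_RATE for _ in range(POPULATION))
        # ...versus who gets caught, which depends on enforcement intensity.
        arrested = sum(random.random() < detection for _ in range(offended))
        print(f"group {group}: offense rate {offended / POPULATION:.3f}, "
              f"arrest rate {arrested / POPULATION:.4f}")

The arrest rates come out roughly 0.005 versus 0.015 despite identical offending: any model trained on the arrest column "learns" a threefold difference that was injected entirely by the detection rates, which is the whole problem with calling such data objective.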
Racism in AI is a real issue that not enough is being done to combat.
I actually took a fair number of statistics courses and tried to explain some basics, like what margins of error mean and why a sample size of as few as a thousand can be significant, but I've had roommates dismiss data that didn't fit what they already believed and then immediately embrace something else that did.
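For anyone who wants the arithmetic behind that sample-size point, here's a rough sketch of the standard margin-of-error calculation for a sample proportion (Python, assuming a simple random sample and the usual 95% confidence level):

    # Half-width of the normal-approximation confidence interval
    # for a sample proportion p with sample size n, z = 1.96 for 95%.
    from math import sqrt

    def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
        return z * sqrt(p * (1 - p) / n)

    # The worst case is p = 0.5, which maximizes p * (1 - p).
    for n in (100, 1000, 10_000):
        print(f"n={n:>6}: ±{margin_of_error(0.5, n):.1%}")

At n = 1000 the worst-case margin is already about ±3 points, which is exactly why national polls of roughly a thousand respondents are meaningful despite what sounds like a tiny sample.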
The most glaring example was "not a racist, but" chudlings trotting out "do you know that commit X% of the violent crimes" talking points, and I'd counter, just to fuck with them, with statistics about what percentage of violent crimes are committed by men in general, and they'd go off about how unfair that is.
They wanted to be Dwight Schrute-style violent nerd warriors but didn't want to seem like a statistical violence risk themselves.