I've used apps that thought I was a monkey but recognized white people just fine, I've seen racist chatbots go on about how much they love Hitler and hate black people, and I've seen AI-generated photos that produced racist depictions. What I'm wondering is: is it the developers making the AI racist, or is it the code itself that's racist? Obviously I don't believe that any machine has reached sentience, but I have no doubt in my mind that every AI I've experienced is racist. All I ask is why?
The handwritten code generally isn't racist, although it can be.
But AI is trained on hand-fed datasets, and that's where the real hard-hitting racism comes in. Training a facial-recognition model using only white people's faces, or feeding in existing crime stats from racist police departments (namely, all of them) -- those are the sorts of things that can make the otherwise incomprehensible algorithm behave in explicitly racist ways.
An example: say you're training an AI to predict which parts of a city are the most prone to crime. To do so, you might feed it a bunch of input data from other cities, including demographic information (like race). You also feed it the output you're eventually looking for -- crime reports from those same cities. All of this is localized to the smallest geographic unit you can manage (e.g., a neighborhood, a block, or even specific addresses). The idea is to train your AI to see patterns between the input and output data, to the point where it can accurately predict the output when given only input data.
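To make that pipeline concrete, here's a minimal sketch in Python with scikit-learn. Everything here is invented: the column names (pct_black, median_income, population_density, crime_reports) are hypothetical, and the synthetic crime_reports column deliberately bakes in the enforcement bias described further down, since we're simulating the kind of data a real system would be fed:

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Hypothetical per-neighborhood training data from other cities.
# Column names and values are made up for illustration; a real system
# would use census tracts, historical report counts, etc.
n = 1000
train = pd.DataFrame({
    "pct_black": rng.uniform(0, 1, n),
    "median_income": rng.normal(50_000, 15_000, n),
    "population_density": rng.uniform(1_000, 20_000, n),
})

# The "ground truth" here is arrest-driven crime reports. We assume
# reports scale with pct_black even though underlying offending doesn't,
# mimicking heavier policing of black neighborhoods.
train["crime_reports"] = (
    50
    + 200 * train["pct_black"]      # biased enforcement signal
    + rng.normal(0, 10, n)          # noise
).round()

features = ["pct_black", "median_income", "population_density"]

# Train a model to find patterns linking inputs (demographics) to
# outputs (crime reports).
model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(train[features], train["crime_reports"])
```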
Once your AI has sifted through this training data, you give it only the input data (including race) for City A and have it predict the output -- that is, predict where crime reports will occur. And what do you know, your AI predicts that most crime will occur in black neighborhoods.
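Continuing the sketch above, City A's input-only data goes in and predictions come out. The numbers for City A's neighborhoods are made up too, but the outcome is exactly what the training data rewarded:

```python
# City A: input data only, no crime reports yet.
city_a = pd.DataFrame({
    "pct_black": [0.05, 0.10, 0.45, 0.80, 0.90],
    "median_income": [72_000, 65_000, 48_000, 38_000, 35_000],
    "population_density": [4_000, 6_000, 9_000, 12_000, 14_000],
})

# Predict where crime reports will occur.
city_a["predicted_reports"] = model.predict(city_a[features])
print(city_a.sort_values("predicted_reports", ascending=False))
# The neighborhoods with the highest pct_black get the highest predicted
# crime, because that's the pattern the biased training data encoded.
```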
Now, an idiot would tell you that a neutral AI shows that black people are going to commit the most crime. But you can see that the AI is not neutral -- it's trained on crime data that comes from racist policies like the War on Drugs, which from the beginning was intended to arrest black people for drug use at a far higher rate than white people, despite black and white people using drugs at similar rates. It accomplished this by putting more cops in black communities (more cops = more arrests) and by taking advantage of the baseline racism present in police (if you're white and caught with a joint, the cop might just destroy it; if you're black, you're more likely to get arrested).

There's also an economic factor that compounds this racism. Racist policies like redlining have robbed black people of a lot of generational wealth, and wealthier people are more likely to do drugs in the privacy of their own homes (because they have bigger homes and more privacy), where they're less likely to get caught.
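That enforcement effect is easy to demonstrate with a toy simulation: give both groups the same underlying drug-use rate, then make detection more likely for one group (more cops, less leniency), and the arrest statistics come out lopsided. The specific rates below are invented for illustration, not real figures:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000  # people per group

use_rate = 0.12          # same underlying drug-use rate for both groups
p_caught_white = 0.02    # fewer cops, more leniency (assumed value)
p_caught_black = 0.10    # heavier policing, less leniency (assumed value)

white_users = rng.random(n) < use_rate
black_users = rng.random(n) < use_rate

# An arrest requires both using and getting caught.
white_arrests = white_users & (rng.random(n) < p_caught_white)
black_arrests = black_users & (rng.random(n) < p_caught_black)

print(f"white use rate:    {white_users.mean():.3f}")
print(f"black use rate:    {black_users.mean():.3f}")
print(f"white arrest rate: {white_arrests.mean():.4f}")
print(f"black arrest rate: {black_arrests.mean():.4f}")
# Same behavior, roughly 5x the arrests. Feed those arrest counts into a
# model as "crime data" and it learns the policing, not the crime.
```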
Your real-world training data is a product of explicitly racist policies, so the outputs from your AI are going to be biased in the same direction, even if your algorithm isn't "black people suck 010110" and even if the people building your AI aren't consciously racist themselves. In short, garbage in, garbage out.
Basically, they're just making a robot that tells them to do what they're already doing. It's completely useless because it won't come up with anything new.
It can definitely be used to launder whatever you're already doing through "well, the computer said so, and it's not racist!"