I've used apps that thought I was a monkey but recognized white people just fine, I've seen racist chatbots go on about how much they love Hitler and hate black people, and I've seen AI-generated photos with racist depictions. What I'm wondering is: is it the developers making the AI racist, or is it the code itself that is racist? Obviously I don't believe any machine has reached sentience, but I have no doubt in my mind that every AI I've experienced is racist. All I ask is why?
People in here have done a good job of covering racism in the formal training set. But AI also tends to be fine-tuned and tested on whatever examples the devs have immediately handy, which usually means pointing it at themselves and their coworkers and double-checking whatever comes out. That's why the Google traffic algorithm is like 70% accurate worldwide but 99% accurate on two highways in the Bay Area.

And it ends up reflecting hiring biases at these companies, where the teams are mostly white, mostly male, and especially not black. So you end up with a black product manager I met who worked on voice recognition for the Xbox: he used his "white guy voice" at work, and the machine couldn't understand him when he spoke in his natural dialect. Or an Indian friend who worked on the tracking software in the Amazon Go store, which would glitch out when looking at him because the cameras weren't calibrated properly for skin as dark as his (and he knows this, but the execs they have to make demos for are white, so it doesn't get prioritized).
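To make the "testing on whoever's handy" point concrete, here's a tiny sketch with completely made-up numbers: if the only accuracy figure the team ever looks at comes from people like themselves, the model can look nearly perfect in-house while failing most other users. The group names and results below are hypothetical, not from any real product.

```python
# Hypothetical evaluation results: (group, model_was_correct) pairs.
# The numbers are invented purely to illustrate the point above.
from collections import defaultdict

results = [
    ("dev_team", True), ("dev_team", True), ("dev_team", True),
    ("dev_team", True), ("dev_team", False),
    ("everyone_else", True), ("everyone_else", False),
    ("everyone_else", False), ("everyone_else", False),
    ("everyone_else", False),
]

# Tally correct/total per group.
totals = defaultdict(lambda: [0, 0])  # group -> [correct, total]
for group, correct in results:
    totals[group][0] += int(correct)
    totals[group][1] += 1

for group, (correct, total) in totals.items():
    print(f"{group}: {correct / total:.0%} accurate ({correct}/{total})")

overall = sum(correct for _, correct in results) / len(results)
print(f"overall: {overall:.0%} accurate")
```

Run it and the in-group number looks fine (80%) while the out-of-group number is terrible (20%), and that gap never shows up if the only people you ever demo on are the in-group.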