I've used apps that thought I was a monkey but recognized white people just fine, I've seen racist chatbots go on about how much they love Hitler and hate black people, and I've seen AI-generated photos with racist depictions. What I'm wondering is: is it the developers making the AI racist, or is it the code itself that is racist? I don't believe that any machine has reached sentience, obviously, but I have no doubt in my mind that all the AI I have experienced is racist. All I ask is: why?
wait what
A machine learning model consists of taking all your inputs, applying several pages' worth of arithmetic (adding them together and multiplying by arbitrary constants), and getting a number out. Machine learning itself consists of methods for generating and rejecting piles of arbitrary constants until you get a pile that outputs results similar to your training data. Nobody knows what's going on inside, because it's a pile of very arbitrary math, chosen automatically because it mostly does the right thing.
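To make that concrete, here's a toy sketch in Python (every number and name in it is invented for illustration, and real systems use far smarter search than this): the "model" is literally just inputs times constants, summed, and "learning" is generating and rejecting piles of constants until one fits the data.

```python
import random

# Toy training data: inputs -> known correct outputs (secretly y = 2a + 3b).
training_data = [((1, 1), 5), ((2, 0), 4), ((0, 3), 9), ((1, 2), 8)]

def model(inputs, constants):
    # The whole "model": multiply each input by a constant and add it all up.
    return sum(x * c for x, c in zip(inputs, constants))

def total_error(constants):
    # How far this pile of constants is from the training data.
    return sum(abs(model(x, constants) - y) for x, y in training_data)

# "Learning": keep a pile of constants only if it beats the previous pile.
best = [random.uniform(-5, 5) for _ in range(2)]
for _ in range(100_000):
    candidate = [random.uniform(-5, 5) for _ in range(2)]
    if total_error(candidate) < total_error(best):
        best = candidate

print(best)  # ends up near [2, 3], but nothing "inside" explains why
```

The final constants work, but there's no human-readable reasoning in there to inspect, which is the opacity being described.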
(There are other branches of machine learning that are more insightful, understandable, and explicable. But they're also more limited, and not the stuff that the last several years of ML hype have been about.)
The current big thing in machine learning is neural networks, which are vaguely based on how neurons interact (though each node in a neural net is much more rudimentary than an actual neuron). Basically, these get trained with data and adjustment algorithms that try to make their outputs look more like the known correct answers for the known data sets, and often they're further trained by having people look at the outputs and say whether they were right or not.
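Here's a hedged toy version of that adjustment idea, again with invented data: a single fake neuron whose weights get nudged toward the known correct answer for each example. Real networks have millions of these nodes and fancier update rules, but the flavor is the same.

```python
import math

def neuron(inputs, weights, bias):
    # Weighted sum squashed to the range 0..1: the node's whole job.
    s = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-s))

# Toy labeled data: the output should be 1 when the first input is big.
data = [((3.0, 0.1), 1), ((2.5, 1.0), 1), ((0.2, 2.0), 0), ((0.1, 0.3), 0)]

weights, bias, lr = [0.0, 0.0], 0.0, 0.5
for _ in range(1000):
    for inputs, target in data:
        out = neuron(inputs, weights, bias)
        err = target - out  # how wrong we were on this known example
        # Nudge each weight so next time the output is a bit less wrong.
        weights = [w + lr * err * x for w, x in zip(weights, inputs)]
        bias += lr * err

print(weights, bias)  # the learned numbers, not an explanation of them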
Like, imagine a dog: the dog can be taught to do certain things based on certain stimuli by rewarding it with food when it does what you want, and that training can be anything from teaching it tricks, to teaching it useful behavior like herding or guiding someone, to turning it into an erratic weapon that will maul someone at the slightest incitement. You control the teaching process, but the actual internal mechanisms and what's been learned are entirely inscrutable because they're all encoded deep into a bafflingly complex web of nodes that we barely understand beyond "they work because they work, and there's little electrical bits and some chemicals, it's all real fiddly but it mostly does what it should."
That's what modern AI research is, just teaching really stupid fake dogs that live in computers to do useful tricks, which can nonetheless be very impressive since 100% of the fake brain is dedicated to a specific task instead of having to worry about stuff like thermal regulation, breathing, balance, what smells are, etc.
Machine learning has highly chaotic results: not in the sense that it's random, but in the sense that tiny changes to the learning rules can have huge and unpredictable effects on the final behavior.
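A toy illustration of that sensitivity (numbers invented): the exact same model, data, and training loop, with only the learning rate changed by a couple of hundredths. One run settles on the right answer; the other oscillates away from it and never recovers.

```python
def train(lr, steps=50):
    w = 0.0
    for _ in range(steps):
        # Gradient descent on (w * 2 - 4)**2, whose correct answer is w = 2.
        grad = 2 * 2 * (w * 2 - 4)
        w -= lr * grad
    return w

print(train(0.24))  # converges to ~2.0
print(train(0.26))  # oscillates and runs away from the answer entirely
```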
The individual program, like the one controlling a particular chatbot, is just opaque. No one in the world really knows exactly how it determines what it will say, because its reasoning is too complex and wasn't directly designed by people.