Most machine learning and AI does not include assumptions or axioms from the developers. The data used to train these algorithms, however, does. But yes, AI is not magic, just another overhyped technology that can be useful when used right.
No, we shouldn't. But again, these algorithms derive whatever they derive from the data, automatically. Nobody is manually putting "race" or "gender" or things like that into the algorithm itself (at least in most cases). And this is where the trap lies: because the algorithms are neutral on their own, the people who use them get tricked into thinking that the outcome is also neutral and objective, forgetting that it is all determined by the data that goes in.
They aren’t necessarily neutral, since programming the relative importance of characteristics can bake implicit bias into the training of the algorithm. It’s not just a one-way street of biased data being fed in; the very structure of the training can include biases and accentuate them.
But that's the thing: with most mainstream algorithms you don't program the relative importance of characteristics, you don't program any characteristics at all. The algorithm learns all of that on its own from the data; you only choose which features to include, and that choice (plus the data itself) is the source of bias, not the algorithm that decides how to split your decision tree ...
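To make that concrete, here's a minimal sketch of what "you only choose the features" looks like with scikit-learn (the applicants.csv file, the column names, and the approval label are all made up for illustration):

```python
# Minimal sketch, assuming scikit-learn and a hypothetical applicant dataset.
# The developer never writes "if gender == ..." or assigns weights by hand:
# the only human choices are which feature columns go in and which data the
# model is fit on; every split rule is then learned from that data.
import pandas as pd
from sklearn.tree import DecisionTreeClassifier

df = pd.read_csv("applicants.csv")                    # hypothetical historical data
features = ["income", "zip_code", "years_employed"]   # the developer's only real decision
X, y = df[features], df["was_approved"]               # labels reflect past (possibly biased) outcomes

model = DecisionTreeClassifier(max_depth=4)
model.fit(X, y)                                       # split thresholds derived purely from the data

# If zip_code correlates with race in the historical data, the learned splits
# can encode that bias even though "race" never appears anywhere in the code.
```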
deleted by creator
"Why does my AI spend all day watching disgusting pornography and getting into endless pedantic arguments?"
I queried my AI for the capital of Canada and it said "Ligma". :(
deleted by creator