They aren’t necessarily neutral: deciding the relative importance of characteristics can have implicit bias baked into the training of the algorithm. It’s not just a one-way street of biased data being fed in; the very structure of the training can encode biases and accentuate them.
But that's the thing: most mainstream algorithms don't program any relative importance of characteristics. You don't program any characteristics at all. The algorithm learns all of that on its own from the data, and you only choose which features to include. That choice is the source of bias, not the algorithm that decides how to split your decision tree ...
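A minimal sketch of that point, using synthetic data and hypothetical feature names (nothing here is from a real system): no line of this code assigns a weight to any feature by hand, yet the fitted tree ends up giving weight to a proxy feature purely because of correlations in the data it was shown.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

# Three hypothetical features; only "income" actually drives the label,
# but "zip_code" is constructed to correlate with it (a proxy variable).
income = rng.normal(50, 15, size=1000)
zip_code = income + rng.normal(0, 5, size=1000)   # correlated proxy
noise = rng.normal(0, 1, size=1000)               # irrelevant feature
X = np.column_stack([income, zip_code, noise])
y = (income > 55).astype(int)

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# The importances come out of the fitted model, not the programmer:
# the proxy feature picks up weight from the data's structure alone.
for name, imp in zip(["income", "zip_code", "noise"], tree.feature_importances_):
    print(f"{name}: {imp:.2f}")
```

Which cuts both ways: the programmer never encoded "zip_code matters", but by choosing to include it as a feature, they let the model learn to lean on it.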