• A_modicum_of_cheese [he/him,they/them] · 4 years ago

    But how would all this data be incorporated into credit ratings? Machine learning, of course. It’s black boxes all the way down. There is no way this wouldn't be racist, ableist, classist, or any other -ist. Literally inventing Racism 2.0.

    • roseateOculi [she/her,none/use name] · 4 years ago

      Machine learning does not have to be racist if it's done ethically and responsibly. Unfortunately, you are completely right, and this will absolutely be all of those things. To train an AI, you have to have data. Inevitably, even if they remove factors like race, gender, etc. from the data being processed, those factors will still be incorporated as "latent factors". For example, if you remove gender from the data (which wouldn't happen in real life, because why would it?), there will still be systematic bias against women, because the model will latch onto some stupid shit like "people who spend time on female-oriented websites have lower credit scores" without anyone knowing or caring why. This is exponentially compounded when you take intersectionality into account.
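      To make the "latent factor" thing concrete, here's a toy sketch (every number and feature below is made up, and no real scoring system is this simple): gender is dropped from the training data entirely, but a browsing-habits feature that correlates with gender smuggles it right back in.

      ```python
      # Toy sketch of proxy / "latent factor" leakage. Everything here is
      # invented for illustration; no real credit model works like this.
      import numpy as np
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(0)
      n = 10_000

      gender = rng.integers(0, 2, n)  # 1 = woman (hypothetical encoding)
      # Proxy feature: visits to "female-oriented" sites, correlated with gender.
      site_visits = np.where(gender == 1, rng.poisson(5, n), rng.poisson(1, n))
      # Historical incomes carry the pay gap, and the "good credit" labels are
      # derived from those incomes -- so the bias is baked into the labels.
      income = rng.normal(50_000 - 8_000 * gender, 10_000, n)
      good_credit = (income > 45_000).astype(int)

      # Train WITHOUT the gender column: the proxy is the only feature.
      X = site_visits.reshape(-1, 1).astype(float)
      model = LogisticRegression().fit(X, good_credit)

      scores = model.predict_proba(X)[:, 1]
      print("mean score, men:  ", scores[gender == 0].mean())
      print("mean score, women:", scores[gender == 1].mean())
      # The gap persists even though gender was never shown to the model.
      ```

      The model never sees the gender column, but women still come out with lower scores on average, because the proxy feature does the job for it.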

      For example, black women make about 62 cents for every dollar a white man makes. This is obviously going to have a negative effect on the credit scores of black women. Taking that into account, when a black woman's browsing history is compared to that of others (assuming she follows "normal" societal/demographic trends), she will be docked points not only for visiting websites frequented by the black community and the female community, but also for websites frequented specifically by the black female community. Even if she was being careful, one search for a women's hair product geared towards black people could lower her credit score by multiple points. It's so absolutely fucked.

      The problem is less the "black box" aspect of machine learning and more that there is no way to train a model not to have bias when the data itself is biased. The program is impartial; it only does what it's told based on what it "knows". The bias comes from the fact that there is no way to produce unbiased data in a systematically oppressive system. Machine learning is the answer to a lot of problems. This is absolutely not one of them.
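      To hammer that home with one more toy sketch (again, every name and number below is invented): an "impartial" model trained on biased historical approvals will score two otherwise-identical applicants differently, purely off a proxy feature.

      ```python
      # Standalone toy example: "impartial" model, biased labels. Nothing
      # here comes from any real scoring pipeline.
      import numpy as np
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(1)
      n = 20_000

      group = rng.integers(0, 2, n)            # hypothetical protected group
      proxy = rng.normal(2.0 * group, 1.0, n)  # feature correlated with group
      merit = rng.normal(0.0, 1.0, n)          # the thing that SHOULD matter

      # Historical approvals were systematically harder on group 1 at equal merit.
      approved = (merit - 0.75 * group + rng.normal(0, 0.5, n) > 0).astype(int)

      model = LogisticRegression().fit(np.column_stack([merit, proxy]), approved)

      # Two applicants with identical merit, differing only in the proxy:
      applicants = np.array([[0.0, 0.0],   # proxy value typical of group 0
                             [0.0, 2.0]])  # proxy value typical of group 1
      print(model.predict_proba(applicants)[:, 1])
      # The second applicant scores lower despite identical merit. The model
      # did exactly what it was told: reproduce the history it was trained on.
      ```

      Same punchline from the other direction: by its own objective the model is "working perfectly". The problem is the history it was handed.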