https://www.businessinsider.com/student-uses-playrgound-ai-for-professional-headshot-turned-white-2023-8

  • MerryChristmas [any]
    1 year ago

    Okay so this is gross but it says a lot more about hiring culture than it does about this specific piece of software. The thing ran the numbers and said "you'd have a better chance of getting this job if you were white" - not an unreasonable conclusion given the systemic nature of racism.

    The scarier issue is that these biases are definitely going to be ingrained into whatever LLM software our bosses are going to use to make hiring decisions. But then like, it's their hiring decisions that the machines are trained on... The first generation is just parroting corporate America's racism.

    So my question to the people who actually know AI is this: will the algorithms get more racist or less racist as they iterate upon themselves? Assuming the software is eventually using its own hiring decisions as a data set, is there any way that could lead to these human-borne biases slowly being trained out due to the law of averages, or are we just going to see more weirdly specific and highly optimized configurations of racism?
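    Not OP, but one way to see why the "law of averages" part doesn't pan out: if each generation of the model just learns its hire rates from the previous generation's own decisions, the bias is inherited rather than averaged away. Here's a toy simulation sketching that (purely illustrative — the groups, rates, and the whole setup are made up, not any real hiring system):

    ```python
    import random

    random.seed(42)

    def simulate(generations=5, n_per_group=10000, initial_gap=0.2):
        """Toy self-training loop: each generation's 'model' sets its
        per-group hire rate to whatever rate it observed in the previous
        generation's own hiring decisions. No outside correction."""
        # Generation 0: "human" decisions, biased against group B.
        hire_rate = {"A": 0.5, "B": 0.5 - initial_gap}
        history = [dict(hire_rate)]
        for _ in range(generations):
            hires = {g: 0 for g in hire_rate}
            for g, rate in hire_rate.items():
                for _ in range(n_per_group):
                    if random.random() < rate:
                        hires[g] += 1
            # "Retrain" on the model's own output: the learned rate is just
            # the observed hire rate, so the gap persists (plus noise);
            # nothing in the loop pushes it toward zero.
            hire_rate = {g: hires[g] / n_per_group for g in hire_rate}
            history.append(dict(hire_rate))
        return history

    history = simulate()
    gap = history[-1]["A"] - history[-1]["B"]
    print(f"hire-rate gap after 5 self-trained generations: {gap:.3f}")
    ```

    The gap stays right around where the humans left it. Statistically this is a random walk, not a correction: without some external signal about who would actually have succeeded, there's nothing for the averages to converge *to* except the biased starting point.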