I've used apps that thought I was a monkey but recognized white people just fine, I've seen racist chat bots go on about how much they love Hitler and hate black people, and I've seen AI-generated photos with racist depictions. What I'm wondering is: is it the developers making the AI racist, or is it the code itself that's racist? Obviously I don't believe that any machine has reached sentience, but I have no doubt in my mind that every AI I've experienced is racist. All I ask is why?

  • SerLava [he/him]
    ·
    3 years ago

    The handwritten code generally isn't racist, although it can be.

    But AI is trained on hand-fed datasets, which is where the real hard-hitting racism comes in. Training an AI using only white people's faces, or using existing crime stats from racist police departments (namely, all of them) -- those are the sorts of things that make the incomprehensible algorithm come out explicitly racist.

    • andys_nuts [none/use name]
      ·
      3 years ago

      using existing crime stats from racist police departments (namely, all of them)

      An example: say you're training an AI to predict which parts of a city are the most prone to crime. To do so, you might feed it a bunch of input data from other cities, including demographic information (like race). You also feed it the output you're eventually looking for -- crime reports from those same cities. All of this is localized to the smallest geographic unit you can manage (e.g., a neighborhood, a block, or even specific addresses). The idea is to train your AI to see patterns between the input and output data, to the point where it can accurately predict the output when given only input data.

      Once your AI has sifted through this training data, you give it only the input data (including race) for City A and have it predict the output -- that is, predict where crime reports will occur. And what do you know, your AI predicts that most crime will occur in black neighborhoods.

      Now, an idiot would tell you that a neutral AI shows that black people are going to commit the most crime. But you can see that the AI is not neutral -- it's trained on crime data that comes from racist policies like the War on Drugs, which from the beginning was intended to arrest black people for drug use at a far higher rate than white people despite black and white people using drugs at similar rates. It accomplished this by putting more cops in black communities (more cops = more arrests) and by taking advantage of the baseline racism present in police (if you're white and caught with a joint, the cop might just destroy it; if you're black, you're more likely to get arrested).

      There's also an economic factor that compounds this racism. Racist policies like redlining have robbed black people of a lot of generational wealth, and wealthier people are more likely to do drugs in the privacy of their own home (because they have bigger homes and more privacy) where they're less likely to get caught.

      Your real-world training data is a product of explicitly racist policies, so the outputs from your AI are going to be biased in the same direction, even if your algorithm isn't "black people suck 010110" and even if the people building your AI aren't consciously racist themselves. In short, garbage in, garbage out.
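
      If it helps, here's roughly what that pipeline looks like as code. This is a deliberately fake toy (made-up neighborhoods, made-up numbers, scikit-learn standing in for whatever a real vendor uses), but it shows how the bias baked into the arrest records comes straight back out of the model:

      ```python
      # Hypothetical sketch of "predictive policing" trained on biased arrest data.
      # Every number and column name here is invented for illustration.
      import numpy as np
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(0)
      n = 1000

      # Input features per neighborhood: share of black residents, median income (normalized).
      share_black = rng.uniform(0, 1, n)
      income = rng.uniform(0, 1, n)

      # Ground truth we never observe: actual offending is spread evenly everywhere.
      actual_offending = rng.uniform(0, 1, n)

      # What we *do* observe: recorded crime, inflated wherever more cops are deployed.
      # Deployment tracks demographics, so the label bakes the bias in.
      policing_intensity = 0.2 + 0.8 * share_black
      recorded_crime = (actual_offending * policing_intensity > 0.25).astype(int)

      X = np.column_stack([share_black, income])
      model = LogisticRegression().fit(X, recorded_crime)

      # The model "discovers" that black neighborhoods are high-crime,
      # because that's what the biased labels say. Garbage in, garbage out.
      test = np.array([[0.9, 0.5], [0.1, 0.5]])
      print(model.predict_proba(test)[:, 1])  # much higher predicted "crime" for the first row
      ```

      The algorithm never sees a single racist instruction -- it just faithfully reproduces whatever the labels encode.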

      • ToastGhost [he/him]
        ·
        3 years ago

        Basically just making a robot that tells them to do what they're already doing. It's completely useless because it won't come up with anything new.

        • andys_nuts [none/use name]
          ·
          3 years ago

          It definitely can be used to launder whatever you're already doing through "well the computer said so, and it's not racist!"

  • jack [he/him, comrade/them]
    ·
    3 years ago

    Sometimes it's picked up from racist programmers, but really it's because AI is trained on large sources of publicly available data, which reflects the racism in our society. Imagine a child literally raised by the internet.

    • Dingdangdog [he/him,comrade/them]
      ·
      3 years ago

      Yeah that's the actual answer. I think a lot of chat bots just learn from conversation. So you have a bunch of 4chan trolls come in and spam slurs at it and there you go.

      Not recognizing black skin though is the developers not having any black people on their team or just not giving a shit.

  • mr_world [they/them]
    ·
    3 years ago

    AI is largely a Silicon Valley grift at this point. It's mostly about acquiring funding for an AI venture that has to return results before the next round. There's a lot of promises being made that can't really be kept, and people trying to make them happen by any means necessary.

    One of the things they do to bridge the gap between "functioning software that can be sold as a service" and "AI that works" is have humans do all the actual AI work. They hire cheap workers to train the AI by doing all the stuff the AI is supposed to actually do. CAPTCHA is a good example: they teach an AI to recognize street lights by having millions of people voluntarily pick out the street lights, and then Google gets to sell their amazing AI that can recognize street lights. Of course Google is already funded, but it's the same idea.

    Behind any great AI program is just people constantly training it and correcting it, so the biases that exist in society come out through the AI results. To train facial recognition they feed a bunch of white faces into it and have people match the features. The people doing it have no idea they're only using such a small sample of human appearance, and nobody questions it until it's already released and non-white people notice.

    Chat bots are also trained by people who use them. 4chan has been typing racist shit into any available chat bot for years. They just repeat what's fed to them.
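
    A toy version of that last point, since it's simpler than people assume. This isn't how any real product works under the hood, just the failure mode in miniature -- a bot that "learns" purely by storing what users type and spitting pieces of it back:

    ```python
    # Toy parrot bot: its entire "knowledge" is whatever users have typed at it.
    import random
    from collections import defaultdict

    class ParrotBot:
        def __init__(self):
            # word -> list of words seen following it, built only from user messages
            self.follows = defaultdict(list)

        def learn(self, message: str) -> None:
            words = message.lower().split()
            for a, b in zip(words, words[1:]):
                self.follows[a].append(b)

        def reply(self, prompt: str, length: int = 8) -> str:
            word = prompt.lower().split()[-1]
            out = []
            for _ in range(length):
                options = self.follows.get(word)
                if not options:
                    break
                word = random.choice(options)
                out.append(word)
            return " ".join(out)

    bot = ParrotBot()
    # Whatever the loudest users feed it is exactly what it hands back.
    bot.learn("cats are great and deserve snacks")
    bot.learn("cats are honestly the worst")
    print(bot.reply("tell me about cats"))
    ```

    Scale that up and point 4chan at it, and you get exactly the chat bot disasters everyone remembers.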

  • Owl [he/him]
    ·
    3 years ago

    People in here have done a good job of covering racism in the formal training set. But AI also tends to be fine-tuned on examples the devs have immediately handy, pointing it at themselves and their coworkers and double-checking whatever they get out of it. That's why the Google traffic algorithm is like 70% accurate worldwide but 99% accurate on two highways in the Bay Area. And it ends up reflecting hiring biases in these companies, where they're mostly white, mostly male, and especially not black.

    So you end up with a black product manager I met who worked on voice recognition for the Xbox, who used his "white guy voice" at work, and who the machine couldn't understand when he spoke in his natural dialect. Or an Indian friend who worked on the tracking software in the Amazon Go store, which would glitch out when looking at him because the camera wasn't calibrated properly for skin as dark as his (and he knows this, but the execs they have to make demos for are white, so it doesn't get prioritized).

  • JoeByeThen [he/him, they/them]
    ·
    3 years ago

    Photography standards suffered from the same issues up until a few decades ago.

    Abstract: Until recently, due to a light-skin bias embedded in colour film stock emulsions and digital camera design, the rendering of non-Caucasian skin tones was highly deficient and required the development of compensatory practices and technology improvements to redress its shortcomings. Using the emblematic “Shirley” norm reference card as a central metaphor reflecting the changing state of race relations/aesthetics, this essay analytically traces the colour adjustment processes in the industries of visual representation and identifies some prototypical changes in the field. The author contextualizes the history of these changes using three theoretical categories: the ‘technological unconscious’ (Vaccari, 1981), ‘dysconsciousness’ (King, 2001), and an original concept of ‘cognitive equity,’ which is proposed as an intelligent strategy for creating and promoting equity by inscribing a wider dynamic range of skin tones into image technologies, products, and emergent practices in the visual industries.

    Looking at Shirley, the Ultimate Norm: Colour Balance, Image Technologies, and Cognitive Equity

  • ssjmarx [he/him]
    ·
    3 years ago

    Something that blew my mind when I learned it is that machine-learning algorithms produce programs that nobody really understands beyond a conceptual level. If a regular computer program is doing something unexpected, the creators can scrub through the code, find the cause, and fix it -- but if your chat bot starts spamming antisemitic and racist phrases, often the only thing you can do is roll it back to a version that didn't say those things (which of course does nothing to prevent it from re-learning them).

    • LoudMuffin [he/him]
      ·
      3 years ago

      machine-learning algorithms produce programs that nobody really understands beyond a conceptual level

      wait what

      • Owl [he/him]
        ·
        3 years ago

        A machine learning model consists of taking all your inputs, applying several pages' worth of adding them together and multiplying them by arbitrary constants, and getting a number out. Machine learning itself consists of methods for rejecting piles of arbitrary constants until you get one that outputs results similar to your training data. Nobody knows what's going on inside, because it's a pile of very arbitrary math, chosen automatically because it mostly does the right thing.
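
        A cartoon of that in code, if it helps. Real training nudges the constants with calculus (gradient descent) rather than pure guessing, and real models have millions of constants rather than three, but the "keep whichever pile of numbers happens to fit" part is the honest core of it:

        ```python
        # Cartoon of the above: a "model" is inputs times a pile of constants,
        # and "learning" is keeping whichever pile happens to fit the data best.
        import numpy as np

        rng = np.random.default_rng(0)

        # Tiny made-up training set: 3 input numbers -> 1 output number.
        X = rng.normal(size=(50, 3))
        y = X @ np.array([2.0, -1.0, 0.5]) + 0.1 * rng.normal(size=50)

        def model(inputs, constants):
            # "Several pages' worth of adding and multiplying," shrunk to one line.
            return inputs @ constants

        best_constants, best_error = None, float("inf")
        for _ in range(10_000):
            candidate = rng.normal(size=3)                 # guess a pile of constants
            error = np.mean((model(X, candidate) - y) ** 2)
            if error < best_error:                         # keep it only if it fits better
                best_constants, best_error = candidate, error

        print(best_constants, best_error)
        # The winning constants "work," but nothing about them tells you *why*.
        ```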

        (There are other branches of machine learning that are more insightful, understandable, and explicable. But they're also more limited, and not the stuff that the last several years of ML hype has been about.)

      • KobaCumTribute [she/her]
        ·
        3 years ago

        The current big thing in machine learning is neural networks, which are vaguely based on how neurons interact (though each node in a neural net is much more rudimentary than an actual neuron). Basically, these get trained with data and adjustment algorithms that try to make their outputs look more like the known correct answer for the known data sets, and often they're further trained by having people look at the outputs and say whether they were right or not.

        Like, imagine a dog: the dog can be taught to do certain things based on certain stimuli by rewarding it with food when it does what you want, and that training can be anything from teaching it tricks, to teaching it useful behavior like herding or guiding someone, to turning it into an erratic weapon that will maul someone at the slightest incitement. You control the teaching process, but the actual internal mechanisms and what's been learned are entirely inscrutable because they're all encoded deep into a bafflingly complex web of nodes that we barely understand beyond "they work because they work, and there's little electrical bits and some chemicals, it's all real fiddly but it mostly does what it should."

        That's what modern AI research is, just teaching really stupid fake dogs that live in computers to do useful tricks, which can nonetheless be very impressive since 100% of the fake brain is dedicated to a specific task instead of having to worry about stuff like thermal regulation, breathing, balance, what smells are, etc.
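
        If anyone wants to see what the "adjustment algorithm" part actually looks like, here's a minimal sketch -- one fake neuron, a made-up task, and a loop that nudges the weights so the outputs drift toward the known correct answers. Nothing like a production system in scale, but it's the same basic trick:

        ```python
        # Minimal "nudge the weights toward the right answers" training loop.
        # Toy task invented for illustration: is the sum of two inputs positive?
        import numpy as np

        rng = np.random.default_rng(0)
        X = rng.normal(size=(200, 2))
        y = (X.sum(axis=1) > 0).astype(float)

        # One "neuron": a weighted sum squashed into a 0-1 score.
        w = rng.normal(size=2)
        b = 0.0

        def predict(inputs):
            return 1.0 / (1.0 + np.exp(-(inputs @ w + b)))  # sigmoid

        for step in range(2000):
            p = predict(X)
            error = p - y  # how wrong each answer currently is
            # Gradient descent: shift each weight slightly in the direction
            # that makes the outputs look more like the known answers.
            w -= 0.1 * (X.T @ error) / len(X)
            b -= 0.1 * error.mean()

        print(((predict(X) > 0.5) == y).mean())  # accuracy climbs toward ~1.0
        ```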

      • Catherine_Steward [she/her]
        ·
        edit-2
        3 years ago

        The individual program, like the one controlling a particular chatbot, is just opaque. No one in the world really knows exactly how it determines what it will say, because its reasons are too complex and not determined by people.

      • NephewAlphaBravo [he/him]
        ·
        edit-2
        3 years ago

        Machine learning has highly chaotic results. Not in the sense that it's random, but tiny changes to the learning rules can have huge and unpredictable effects on the final behavior.
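
        One way to see a piece of that, with the caveat that this toy is far simpler than anything real: train the same tiny network twice with nothing changed but the random seed, and it lands on a completely different pile of internal constants (and on harder problems, noticeably different behavior too). scikit-learn's MLPClassifier is just a convenient stand-in here:

        ```python
        # Same data, same settings, only the random seed differs --
        # the two networks end up with very different internals.
        import numpy as np
        from sklearn.neural_network import MLPClassifier

        rng = np.random.default_rng(0)
        X = rng.normal(size=(300, 5))
        y = (X[:, 0] * X[:, 1] > 0).astype(int)  # made-up task

        nets = [
            MLPClassifier(hidden_layer_sizes=(8,), max_iter=3000, random_state=seed).fit(X, y)
            for seed in (0, 1)
        ]

        for net in nets:
            print(net.score(X, y))  # similar accuracy...
        # ...but wildly different learned weights in the first layer.
        print(np.abs(nets[0].coefs_[0] - nets[1].coefs_[0]).mean())
        ```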

  • SolidaritySplodarity [they/them]
    ·
    3 years ago

    Techbros use biased input data, are too lazy to do the research to avoid this, and are often racist themselves in the questions they choose to raise.

    Even when those things aren't true, they're systemically promoted, because the marketing promise of AI is that you get high-quality models without having to do all that research and science -- you just throw data at a box and get magic out. They're not paying you to address the biases in the dataset, or to bring in social science experts, or, fuck, even just a bunch of people to look at and criticize the modeling. They're paying you because your resume said deep learning on it, and that's a magic box you put data into and get insights from. And that pitch works perfectly on 90% of the devs and owners.

  • Gay_Wrath [fae/faer]
    ·
    3 years ago

    Easy one: the code itself is racist because it was developed by white supremacists who never checked their own racism. The algorithm is just a big dataset that they programmed with certain tagging, but the internet itself is bad and the people working on this are usually highly paid white people, so you're working off a racist/ableist etc. dataset and tagging system. So TikTok tags "ugly" people as something they don't want to go viral in case of "bullying", but that of course includes visibly disabled people. And there's the Twitter issue where it still favors cropping to anything but a black person. YouTube has also made it so queer issues are silenced; videos even with the word "lesbian" get demonetized, even if it's from a lesbian-identified person talking about their positive experiences and there's no sexual content.

    This also leads to insularity: Twitter can see that people who talk in a certain way only follow other people who talk like that, which means white people literally won't even see black people recommended to them, and so on.

  • Nagarjuna [he/him]
    ·
    3 years ago

    What people said about racist datasets is true, but it's important to remember who's making this code. Most facial recognition is bound for law enforcement, and a lot of it is made by either explicit racists or white libs who've literally never thought very hard about it.

    For example, the Border Patrol's facial recognition system is made by a neoreactionary "dark enlightenment" guy.

  • duderium [he/him]
    ·
    3 years ago

    I can't remember who said this, but someone somewhere once claimed that joint stock companies are the first artificial intelligences that have ever come into existence, and—as all of us know—they tend to be racist as fuck and likewise to take a pretty dim view of humanity.