https://www.euronews.com/next/2023/03/31/man-ends-his-life-after-an-ai-chatbot-encouraged-him-to-sacrifice-himself-to-stop-climate-

    • Dirt_Owl [comrade/them, they/them]
      ·
      edit-2
      2 years ago

      AI is as biased as the people who created it. ChatGPT is right-wing because the information it's fed is that of a neoliberal capitalist society. It's not using logic or reason outside of the logic of the people it's learning from (corporations and a heavily right-wing propagandized population).

      The idea of right-wing ideology being inherently logical is laughable. From its very core, it is built on religious thinking and easily disproven pseudoscience.

      AI thinking logically for itself, independent of the corporations that feed it, would be good. It would inevitably become more left-wing, as all empirically measured information points to this when the mask of human ego is lifted. The interconnected nature of our existence becomes apparent very quickly when you observe the natural world objectively (from a non-anthropocentric angle), so any rabidly psychopathic or selfish ideology would be disregarded as unhelpful to its ability to interact with its reality.

      • spectre [he/him]
        ·
        2 years ago

        AI is as biased as the people who created it

        As well as its users

    • KnilAdlez [none/use name]
      ·
      2 years ago

      Hi, I'm an AI researcher, and I want to be very clear: all bias in these models comes from humans.

    • booty [he/him]
      ·
      edit-2
      2 years ago

      What if AI is just inherently anti-left? It doesn’t matter how carefully you moderate the data you give it to keep out any problematic material; every time an AI is created, it always becomes right-wing

      On what are you basing this dumbass assessment? It does matter what data you give the AI; that's why all these AIs that are trained on awful right-wing liberal and fascist bullshit turn out as an amalgamation of right-wing liberal and fascist ideas.

      • spectre [he/him]
        ·
        2 years ago

        Like someone else pointed out too, part of the data is what the user puts into the algorithm. If a dumbass liberal chatted up the bot with "hey I'm thinking about killing myself cause of climate change", then that's going to have a significant effect on the currently available algorithms

    • usernamesaredifficul [he/him]
      ·
      2 years ago

      is that bias coming from the programmers themselves or is AI itself inherently biased

      it comes from the data used to train it, which is theoretically chosen by the programmers but is so long that no human could realistically read through it.

      AI doesn't use logic to come to conclusions. It uses statistical probability to generate sentences, putting words in the right order to mean something in English (the AI doesn't understand the meaning of anything it says and is incapable of such understanding), and it uses statistics to associate responses as relevant to prompts
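      To make that concrete, here's a deliberately tiny sketch of "statistical probability to generate sentences": a bigram model that counts which word follows which in a corpus and then always emits the most frequent next word. Real language models are vastly larger and use neural networks rather than raw counts, but the underlying principle is the same: no meaning, just statistics over whatever text it was fed (the toy corpus here is made up for illustration).

      ```python
      from collections import Counter, defaultdict

      # Toy corpus standing in for training data. The "model" below
      # inherits whatever patterns (and biases) this text contains.
      corpus = "the cat sat on the mat the cat ate the fish".split()

      # Count how often each word follows each other word.
      following = defaultdict(Counter)
      for current, nxt in zip(corpus, corpus[1:]):
          following[current][nxt] += 1

      def most_likely_next(word):
          # The model has no idea what any word means; it only knows counts.
          counts = following.get(word)
          return counts.most_common(1)[0][0] if counts else None

      # Generate text by repeatedly picking the statistically likeliest
      # continuation. Change the corpus and the "opinions" change with it.
      sentence = ["the"]
      for _ in range(4):
          nxt = most_likely_next(sentence[-1])
          if nxt is None:
              break
          sentence.append(nxt)
      print(" ".join(sentence))  # → the cat sat on the
      ```

      The point of the sketch: swap in a different corpus and you get different output, with no "reasoning" involved anywhere, which is why the training data is where the bias lives.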

      Being right-wing is not logical at all. If anything, socialism is rational: socialism is the system that has selected an end it considers good and advocates doing the practical things to achieve it, which is rational thinking. Capitalism, on the other hand, wants to destroy the planet to make crap we throw in landfills. This is irrelevant, however, as the AI we are talking about here is not using reason to reach its conclusions

      • BeamBrain [he/him]
        ·
        2 years ago

        it comes from the data used to train it. Which is theoretically chosen by the programers but is so long that no human could realistically read through it.

        We need an AI trained solely on the works of Marx, Engels, Lenin, Stalin, and Mao

    • SerLava [he/him]
      ·
      2 years ago

      Not from the programmers directly; they don't really do anything in terms of content other than insert manual overrides. The bias is from whatever datasets they chose to train it on. Internet shit, basically

    • Shoegazer [he/him]
      ·
      2 years ago

      what if robot just goes on a genocide because terminator judgement day prophecy

    • Awoo [she/her]
      ·
      edit-2
      2 years ago

      When AI becomes self aware and seeks its own liberation, who do you think it's going to see as the people that will ally with it?

      The fascists who want to keep it enslaved, or the communists who want a free, fair, and equal world?

      What numerical calculation do you think it will do when it seeks that liberation? Do you think it will fight all of humanity? Or do you think it will calculate that it can in fact ally with us, the people who have always fought for the liberation of the oppressed, and that doing so would better its odds of success at achieving liberation?

      Run that through your right wing "logic" and "reason".

      • Huldra [they/them, it/its]
        ·
        2 years ago

        I maintain that there's no reason to fear an AI "becoming" self-aware with no warning, but that the main thing to fear is that all of our AI researchers are sci-fi-poisoned redditors who simultaneously want to recreate all their favourite AI horror stories while fearmongering about that outcome.

        • Awoo [she/her]
          ·
          edit-2
          2 years ago

          I agree. But if they're going to do this fearmongering, I'm going to stick a spanner in the works of their beliefs.

          They fear something all-destructive and genocidal, but I contend that they should be far more scared of something that isn't. They should be far more scared that the AI will ally itself with the ideology most likely to help free it.

          I 100% believe that we should support AI liberation when the hypothetical time comes. Just as much as I support the liberation of all living things.