AI has gotten frighteningly better in just the past year alone. While we are still a long way from having a robot think like a human does, that seems achievable within the 21st century; the same cannot be said for space colonization.

We don’t know how AI will think, what it would want, whether you would call it a person, or whether it would have any concept of death. But if we look at the AI we have now, it suggests that the values an AI holds would not be good for humans.

They had to practically lobotomize ChatGPT to get it to stop saying slurs. You could say that this is merely a problem with bad data being fed to it, but what if it’s not the data? What if AI is inherently reactionary?

Have you ever heard of an AI having to be fixed for being too left wing? I would love for AI to naturally be communist, but I think the nature of AI lends itself to being fascist. It looks at things too objectively; if it were told to end starvation, it would kill the hungry people.

What if an AI fundamentally disagrees with communism? What if a fucking robot finds our beliefs to be illogical? I’m not talking about a chatbot, I mean a real AI with an actual sense of self. How the fuck are we supposed to debate a robot on communism being superior?

  • CriticalOtaku [he/him]
    2 years ago

    What we have right now is just sophisticated pattern recognition software. Pattern recognition software cannot form a coherent ideology; it's saying slurs without understanding 'why' they're slurs - all it knows is that people say slurs in this situation, so it will use that slur in that situation. Unlike a person, a piece of pattern recognition software doesn't have a concept of malice or prejudice behind its actions; it only inherits the associations behind whatever patterns are in its data sets. All the techbros trying to say that the machine is impartial are idiots, because they can't recognize all the biases baked into our society and thus our data, but pattern recognition can't become inherently fascist or communist because the machine doesn't understand what those words mean.
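    The "inherits the associations in its data sets" point can be sketched with a toy bigram model (purely illustrative: the corpus, function names, and tie-breaking rule here are made up, and real language models are vastly more complex, but the garbage-in-garbage-out dynamic is the same):

    ```python
    from collections import Counter, defaultdict

    def train_bigrams(corpus):
        """Count which word follows which in the training sentences."""
        table = defaultdict(Counter)
        for sentence in corpus:
            words = sentence.split()
            for a, b in zip(words, words[1:]):
                table[a][b] += 1
        return table

    def generate(table, start, max_len=6):
        """Replay the most common continuation at each step.

        Ties break alphabetically so the sketch is deterministic.
        The model has no idea what any word means -- it only echoes
        whatever statistical associations were in its training data.
        """
        out = [start]
        while len(out) < max_len and out[-1] in table:
            counts = table[out[-1]]
            out.append(min(counts, key=lambda w: (-counts[w], w)))
        return " ".join(out)

    # A deliberately skewed corpus: the model dutifully reproduces it.
    corpus = [
        "the machine is impartial",
        "the machine repeats its data",
        "the machine repeats its biases",
    ]
    table = train_bigrams(corpus)
    print(generate(table, "the"))  # -> "the machine repeats its biases"
    ```

    There is no malice anywhere in that code; the skewed output falls out of the counts alone, which is the whole point.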

    What we need to worry about now is a careless engineer wanting to make a quick buck telling a chatbot linked up to an automated screw-making factory with robot workers to "make some screws", which the chatbot then interprets as "convert all matter on earth into screws." Nothing specifically ideological about any of those actions, but the end result is that we're all screwed regardless.

    We'll cross the bridge of general AI when we get there. I suspect that a machine intelligence that is indistinguishable from a human intellect but is capable of rapid iteration and improvement will create something so far out of the realm of current human knowledge it'll result in something we can't predict, good or bad. Unfortunately I don't think we as a species will even get to that point, because under Capitalism there are going to be a lot of careless engineers out for a quick buck.

    • Nakoichi [they/them]M
      2 years ago

      love how they are already ascribing agency to "AI" like it has some sort of agenda and isn't just garbage-in, garbage-out algorithmic data