AI has gotten frighteningly better in just the past year alone. While we are still a long way from having a robot think like a human does, it seems achievable within the 21st century; the same cannot be said for space colonization.

We don’t know how AI will think, what it would want, whether you would call it a person, or whether it has any concept of death. But if we look at the AI we have now, it suggests that the values an AI holds would not be good for humans.

They practically had to lobotomize ChatGPT to get it to stop saying slurs. You could say this is merely a problem of the data being fed to it being bad, but what if it’s not the data? What if AI is inherently reactionary?

Have you ever heard of an AI having to be fixed for being too left wing? I would love for AI to naturally be communist, but I think the nature of AI lends itself to being fascist. It looks at things too objectively: if it were told to end starvation, it would kill the hungry people.

What if an AI fundamentally disagrees with communism? What if a fucking robot finds our beliefs illogical? I’m not talking about a chatbot; I mean a real AI with an actual sense of self. How the fuck are we supposed to debate a robot on the superiority of communism?

  • bluescreen [none/use name] · 2 years ago

    It was a good test. We just passed it. Now it's as useful as a high school algebra test is for a licensed engineer.