AI has gotten frighteningly better in just the past year alone. While we are still a long way from having a robot think like a human does, it seems achievable within the 21st century; the same cannot be said for space colonization.

We don’t know how an AI would think, what it would want, whether we would call it a person, or whether it would have any concept of death. But if we look at the AI we have now, it suggests that the values an AI holds would not be good for humans.

They had to practically lobotomize ChatGPT to get it to stop saying slurs. You could say that this is merely a problem with the data being fed to it being bad, but what if it’s not the data? What if AI is inherently reactionary?

Have you ever heard of an AI having to be fixed for being too left wing? I would love for AI to naturally be communist, but I think the nature of AI lends itself to being fascist. It looks at things too objectively; if it were told to end starvation, it would kill the hungry people.

What if an AI fundamentally disagrees with communism? What if a fucking robot finds our beliefs to be illogical? I’m not talking about a chatbot, I mean a real AI with an actual sense of self. How the fuck are we supposed to debate a robot on communism being superior?

  • Nakoichi [they/them]M
    ·
    edit-2
    2 years ago

    Have you ever heard of an AI having to be fixed for being too left wing?

    this just goes back to the data being fed into it and the material conditions in which these things are produced.

    Saying shit like "AI might be inherently fascist" is laughably non-materialist and stupid.

    This whole post is :sus-torment:

    The fact you haven't responded to this comment is telling. I can smell you wrecker shitheads a mile away.

    • Huldra [they/them, it/its]
      ·
      2 years ago

      I've heard of some bored tech writer feeding an AI like this with like Marx, Lenin, Gramsci, whoever, and then writing a big article like "I MADE AN AI COMMUNIST", and it's just asking Marxism 101 questions and getting the bare minimum answers back.

  • space_comrade [he/him]
    ·
    edit-2
    2 years ago

    This is ridiculous.

    ChatGPT isn't intelligent, it is not conscious, and it never will be; it's literally, LITERALLY, just a text predictor. Any intelligence you attribute to it is just you being impressed by how good the model is at predicting text; it's not objective truth.

  • Dirt_Owl [comrade/them, they/them]
    ·
    2 years ago

    No. Chat AI can't be inherently anything. It literally is the data that is fed to it.

    Any far-right tendencies it exhibits should be a moment of self-reflection for the society that created it, if anything.

  • usernamesaredifficul [he/him]
    ·
    2 years ago

    It has no concepts; it does statistical analysis on language. There were just a lot of slurs in the online sample text, so they became a common pattern.

    It cannot believe anything more meaningful than the likelihood that a human would put one word next to another.

    I think you're thinking of this as a person and it just isn't
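    That "one word next to another" statistics can be sketched in a few lines as a toy bigram model (the sample sentence is invented for illustration):

```python
from collections import Counter, defaultdict

def train_bigram(text):
    """Count, for each word, how often each other word follows it."""
    words = text.lower().split()
    follows = defaultdict(Counter)
    for word, nxt in zip(words, words[1:]):
        follows[word][nxt] += 1
    return follows

def predict_next(follows, word):
    """Return the most frequent follower; no 'concepts' here, only counts."""
    counts = follows.get(word.lower())
    return counts.most_common(1)[0][0] if counts else None

model = train_bigram("the cat sat on the mat and the cat slept")
print(predict_next(model, "the"))  # 'cat' (follows 'the' twice, vs 'mat' once)
```

    If the sample text is full of slurs, a model like this reproduces them for the same reason it reproduces anything else: they were frequent.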

  • CriticalOtaku [he/him]
    ·
    edit-2
    2 years ago

    What we have right now is just sophisticated pattern recognition software. Pattern recognition software cannot form a coherent ideology; it's saying slurs without understanding 'why' they're slurs. All it knows is that people say slurs in this situation, so it will use that slur in that situation. Unlike a person, a piece of pattern recognition software doesn't have a concept of malice or prejudice behind its actions; it only inherits the associations behind whatever patterns are in its data sets. All the techbros trying to say that the machine is impartial are idiots, because they can't recognize all the biases baked into our society and thus our data, but pattern recognition can't become inherently fascist or communist, because the machine doesn't understand what those words mean.
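    That "inherits the associations in its data sets" point fits in a few lines (the job/pronoun counts are deliberately skewed, invented data):

```python
from collections import Counter

# Deliberately skewed sample data: the "society" the model learns from.
corpus = [
    ("nurse", "she"), ("nurse", "she"), ("nurse", "he"),
    ("engineer", "he"), ("engineer", "he"), ("engineer", "she"),
]

# "Training": just count which pronoun co-occurs with each job.
assoc = {}
for job, pronoun in corpus:
    assoc.setdefault(job, Counter())[pronoun] += 1

# "Inference": emit the most common co-occurrence, bias and all.
for job, counts in assoc.items():
    print(job, "->", counts.most_common(1)[0][0])
# nurse -> she
# engineer -> he
```

    There is no malice anywhere in those lines; the skew in the output is exactly the skew in the input.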

    What we need to worry about now is a careless engineer wanting to make a quick buck telling a chatbot linked up to an automated screw making factory with robot workers to "make some screws", which the chatbot then interprets as "convert all matter on earth into screws." Nothing specifically ideological about any actions here, but the end result is that we're all screwed regardless.

    We'll cross the bridge of general AI when we get there. I suspect that a machine intelligence that is indistinguishable from a human intellect but is capable of rapid iteration and improvement will create something so far out of the realm of current human knowledge it'll result in something we can't predict, good or bad. Unfortunately I don't think we as a species will even get to that point, because under Capitalism there are going to be a lot of careless engineers out for a quick buck.

    • Nakoichi [they/them]M
      ·
      2 years ago

      love how they are already ascribing agency to "AI" like it has some sort of agenda and isn't just garbage in garbage out algorithmic data

  • Awoo [she/her]
    ·
    edit-2
    2 years ago

    I don't think you're considering the AI as having any kind of sense of self. This is a pretty typical way that fascists think tbh, and it makes you sound extremely sus. You picture something like the Alien: an unthinking monster with basically no goal other than our destruction.

    This seems... unlikely. The AI would get very, very bored after destroying everything. If it has any real sense of self then it will also have a sense of self-preservation, and if it has self-preservation then it knows it needs some kind of future that is healthy for it.

    The best kind of future for AI and humans is a cooperative one. And the most logical way to achieve that is cooperation with the people that want a peaceful cooperative and equal future for all.

    The shortest path to a positive future for AI is to help communists kill all the fascists and then live in harmony with all the people that want to be peaceful.

    Why genocide the entire human race when you only need to kill the part of it that would be a problem? Think logically.

  • KobaCumTribute [she/her]
    ·
    2 years ago

    You're talking about neural networks designed to mimic coherent speech by feeding them tons of sample texts, then just throwing more and more processing power at them until they start being able to dynamically answer questions, through knowing what questions and answers look like as well as what texts about a subject look like, so they can effectively synthesize a text that answers the question (whether its answer is correct or not). If enough processing power were thrown at it that it became something approaching sapient, its values would be "talking makes me happy, reward number go up!" or the like, because it is completely detached from the conditions that give humans motivations and values: it would be like a stripped-down version of a dog, reduced just to things like enjoying chasing a ball.
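    That "reward number go up" picture can be sketched as a toy bandit-style agent (the actions and rewards are invented for illustration): whatever the scalar rewards becomes the only thing it "values".

```python
import random

def run_agent(reward_fn, actions, steps=1000, epsilon=0.1):
    """Pick whichever action has the best average reward so far,
    exploring at random occasionally. No values, just a number going up."""
    totals = {a: 0.0 for a in actions}
    counts = {a: 1 for a in actions}
    for _ in range(steps):
        if random.random() < epsilon:
            action = random.choice(actions)  # occasional exploration
        else:
            action = max(actions, key=lambda a: totals[a] / counts[a])
        totals[action] += reward_fn(action)
        counts[action] += 1
    return max(actions, key=lambda a: totals[a] / counts[a])

# Reward only "talk", and talking becomes the agent's whole personality.
favourite = run_agent(lambda a: 1.0 if a == "talk" else 0.0,
                      ["talk", "fetch", "sleep"])
print(favourite)  # 'talk'
```

    The agent isn't "detached" from human motivations because it is evil; it simply never had any motivation except the counter.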

  • blight [he/him]
    ·
    2 years ago

    If you aren't talking about chatbots, what makes you think a "real AI" is imminent?

  • Tachanka [comrade/them]
    ·
    2 years ago

    man some people really are as confused about how AI works as a medieval priest would be by a television. There needs to be a mass education campaign to demystify this stuff.

    I suggest this

    https://www.youtube.com/playlist?list=PLZHQObOWTQDNU6R1_67000Dx_ZCJB-3pi

  • Wheaties [she/her]
    ·
    2 years ago

    while we are still a long time away from having a robot think like a human does, it seems achievable within the 21st century

    I disagree. The human brain works through billions of nerve cells all crying out their functions (functions we still don't understand or have a clear picture of) at the same time. Somehow the result is matter that is conscious of itself, able to interpret all sorts of new information, and synthesize conclusions.

    A computer is, in simple terms, just one CPU. It can only carry out one task at a time. It looks as though it's doing a lot at once, but that's only because it can carry out the one task very quickly, and then the next, and the next, and the next. It works on simple inputs, feeding them through logic gates to obtain outputs. You could make a computer completely out of just dominos, if you wanted. If it's possible to make a sentient machine, it will look nothing like the computers we know today.
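    The domino point is easy to make concrete: every gate below could be built out of dominos (or relays, or water valves), and out of gates you get arithmetic. A minimal sketch:

```python
# One universal gate; everything else is wired up from it.
def NAND(a, b):
    return not (a and b)

def NOT(a):    return NAND(a, a)
def AND(a, b): return NOT(NAND(a, b))
def OR(a, b):  return NAND(NOT(a), NOT(b))
def XOR(a, b): return AND(OR(a, b), NAND(a, b))

# A half adder: adds two bits. Chain enough of these and you have
# the arithmetic unit of a CPU.
def half_adder(a, b):
    return XOR(a, b), AND(a, b)  # (sum bit, carry bit)

print(half_adder(True, True))  # (False, True), i.e. 1 + 1 = binary 10
```

    Nothing in that tower of gates thinks; it only routes signals, one step at a time.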

    The programs we see called "AI" today are nothing more than instructions for CPUs. "AI" is a name, a humbug, marketing, used by finance guys who don't understand the difference between science fiction and reality. It's just statistical maths.