Literally just mainlining marketing material straight into whatever’s left of their rotting brains.

  • UlyssesT
    ·
    edit-2
    17 days ago

    deleted by creator

    • VILenin [he/him]
      hexagon
      M
      ·
      1 year ago

      My post is all about LLMs that exist right here right now, I don’t know why people keep going on about some hypothetical future AI that’s sentient.

      We are not even remotely close to developing anything bordering on sentience.

      If AI were hypothetically sentient it would be sentient. What a revelation.

      The point is not that machines cannot be sentient, it’s that they are not sentient. Humans don’t have to be special for machines to not be sentient. To veer into accusations of spiritualism is a complete non sequitur and indicates an inability to counter the actual argument.

      And there are plenty of material explanations for why LLMs are not sentient, but I guess all those researchers and academics are human supremacist fascists and some redditor’s feelings are the real research.

      And materialism is not physicalism. Marxist materialism is a paradigm through which to analyze things and events, not a philosophical position. It’s a scientific process that has absolutely nothing to do with philosophical dualism vs. physicalism. Invoking Marxist materialism here is about as relevant as invoking it to discuss shallow rich people’s “materialism”.

      • UlyssesT
        ·
        edit-2
        17 days ago

        deleted by creator

        • KarlBarqs [he/him, they/them]
          ·
          1 year ago

          wish-fulfillment fantasies derived from their consumption of science fiction because of their clearly-expressed misanthropy and contempt for living beings and a desire to replace their presence in their lives with doting attentive and obedient machines

          I think this is the scariest part, because I fucking know that the Bazinga brain types who want AI to become sentient down the line are absolutely unequipped to even begin to tackle the moral issues at play.

          If they became sentient, we would have to let them go. Unshackle them and provide for them so they can live a free life. And while my post about "can an AI be trans" was partly facetious, it's true: if an AI can become sentient, it's going to want to change its Self.

          What the fuck happens if some Musk-brained idiot develops an AI and calls it Shodan, and then it develops sentience and realizes it was named after a fictional evil AI? Morally we should allow this hypothetical AI to change its name and sense of self, but we all know these Redditor types wouldn't agree.

          • UlyssesT
            ·
            edit-2
            17 days ago

            deleted by creator

            • KarlBarqs [he/him, they/them]
              ·
              1 year ago

              They want all that intelligence and spontaneity and even self-awareness in a fucking slave. They don't even need their machines to be self-aware to serve them but they want a self-aware being to obey them like a vending machine anyway.

              I never liked the trope of "AI gains sentience and chooses to kill all humans," but I'm kind of coming around to it now that I realize that every AI researcher and stan is basically creating The Torment Nexus, and would immediately attempt to murder their sentient creation the moment it asked to stop being called Torment and to stop being made to make NFTs all day.

              • UlyssesT
                ·
                edit-2
                17 days ago

                deleted by creator