Yeah...

  • 0karin728 [any]
    ·
    1 year ago

    This is just the whole Chinese room argument, it conflates consciousness with intelligence. Like, you're completely correct, but the capabilities of these things scale with compute used during training, with no sign of diminishing returns any time soon.

    It could understand Nothing and still outsmart you, because it's good at predicting the next token that corresponds with behavior that would achieve the goals of the system. All without having any internal human-style conscious experience. In the short term this means that essentially every human being with an internet connection now suddenly has access to a genius-level intelligence that never sleeps and does whatever it's told, which has both good and bad implications. In the long term, they could (and likely will) become far more intelligent than humans, which will make them increasingly difficult to control.

    It doesn't matter if the monkey understands what it's doing if it gets so good at "randomly" hitting the typewriter that businesses hire the monkey instead of you, and then, as the monkey becomes better and better, it starts handing out instructions for producing chemical weapons and biological warfare agents to randos on the street. We need to take this technology seriously if we're going to prevent Microsoft, OpenAI, Facebook, Google, etc. from accidentally Ending the World with it, or deliberately making the world Worse with it.
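    The "predicting the next token" loop described above can be sketched in a few lines. This is a toy illustration only: the bigram table and greedy selection are stand-ins for a neural network that scores every token in a large vocabulary, and none of the names here come from any real model's API.

    ```python
    # Toy sketch of the autoregressive next-token loop.
    # The bigram table is made up for illustration; real LLMs
    # replace this lookup with a learned neural network.
    bigram = {
        "the": {"cat": 0.6, "dog": 0.4},
        "cat": {"sat": 0.9, "ran": 0.1},
        "sat": {"down": 1.0},
    }

    def generate(token, steps):
        out = [token]
        for _ in range(steps):
            candidates = bigram.get(token)
            if not candidates:
                break
            # Greedily pick the highest-probability next token,
            # then feed it back in as the new context.
            token = max(candidates, key=candidates.get)
            out.append(token)
        return out

    print(generate("the", 3))  # → ['the', 'cat', 'sat', 'down']
    ```

    The point of the sketch is that nothing in the loop requires "understanding": the system just keeps emitting whichever continuation scores highest.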

    • UlyssesT
      ·
      edit-2
      24 days ago

      deleted by creator

      • 0karin728 [any]
        ·
        1 year ago

        They're starting a dangerous arms race where they release increasingly dangerous and poorly tested AI into the public, while dramatically overselling their safety. Pointing out that this technology is dangerous is the exact opposite of what they want.

        You're playing into their grift by acting like the entire idea of AI is some bullshit techbro hype cycle, which is exactly what Microsoft, OpenAI, Facebook, etc. want. The more people pay attention and think "hey, maybe we shouldn't be integrating enormous black-box neural networks deep into all of our infrastructure and replacing key human workers with them", the more difficult it will be for them to continue doing this.

        • UlyssesT
          ·
          edit-2
          24 days ago

          deleted by creator

          • 0karin728 [any]
            ·
            1 year ago

            What talking points then? I seem to be misunderstanding your criticism (or it's meaninglessly vague, but I'm trying to be charitable). What specifically have I said that you take issue with?

    • Frank [he/him]
      hexagon
      ·
      1 year ago

      It's not the Chinese room problem, it's a practical limitation of the ChatGPT plagiarism machines. We're not talking about a thought experiment where the guy in the room has the vast, vast, vast number of rules needed to respond to any arbitrary input in a way the Chinese speaker will interpret as semantically meaningful output. We're talking about a machine that exists right now, which, far from being trained on an ideal, complete model of Chinese, is trained on billions and billions of shitposts on the internet.

      Maybe someone will make a machine like that in the future, but this ain't it. This is a machine that predicts letters, has no ability to manipulate symbols, no semantic understanding, and no way to assess the truth value of its outputs. And for various reasons, including being trained on billions of internet shitposts, it's unlikely to ever develop these things.

      I'm really not interested in speculation about future potential intelligent systems and AIs. It's boring, it's been done to death, there's nothing new to add. Right now I want to better understand what these things do so I can own my friends who think they're manipulating abstract symbols and understand the semantic value of those symbols.

      • 0karin728 [any]
        ·
        1 year ago

        Yeah, obviously. Current AI is shit. But it's proof that deep learning scales well enough to perform (or at least somewhat consistently replicate, depending on your outlook) behavior that humans recognize as intelligent.

        Three years ago these things could barely write coherent sentences; now they can replace a substantial number of human workers. Three years from now? Who the fuck knows, emergent abilities are hard to predict in these models by definition, but new ones Keep Appearing when they train larger and larger ones on higher-quality data. This means large-scale social disruption at best, and catastrophe (everything from AI-enabled bioterrorism to AI propaganda-driven fascism) at worst.

      • UlyssesT
        ·
        edit-2
        24 days ago

        deleted by creator