"The world has changed forever... is the name of another Medium article I'm writing" :tito-laugh:

"Everything I normally outsource to Fiverr, I now outsource to ChatGPT 4"

  • MF_COOM [he/him]
    ·
    2 years ago

    If someone builds AI models that are capable of understanding how their AI works and coming up with novel improvements, that would be incredibly dangerous and would probably destroy the world within a couple of years.

    Yeah this is what one of them is afraid of, and they're very concerned about "AI security". OOC why do you say that's not happening soon, and in what way is such an AI an actual threat to humanity?

    • Owl [he/him]M
      ·
      edit-2
      2 years ago

      Not happening soon - Kind of hard to explain without really getting into how things like ChatGPT work. The real reason I'm confident about this is that I sat through learning how LLMs work (the best explanation I've seen, if you're already technically inclined) and there's nothing inside them that can reason. But some easy arguments are that you can't get ChatGPT to output a novel idea that isn't just a combination of two existing ideas, that the "bigger model = better performance" scaling regime has leveled off pretty hard, and that OpenAI has already given up on scaling that way.
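      To make that concrete: at inference time the whole "thought process" is one loop, repeated over and over: run the network on the text so far, pick a likely next token, append it, run again. Here's a rough sketch of that loop using the small open GPT-2 weights as a stand-in for the closed models; it assumes the transformers and torch packages, and uses greedy decoding rather than sampling, but the structure is the same.

      ```python
      # Toy illustration of what an LLM does at inference time: nothing but
      # repeated next-token prediction. GPT-2 is a small, open stand-in here;
      # assumes the transformers and torch packages are installed.
      import torch
      from transformers import AutoModelForCausalLM, AutoTokenizer

      tokenizer = AutoTokenizer.from_pretrained("gpt2")
      model = AutoModelForCausalLM.from_pretrained("gpt2")
      model.eval()

      ids = tokenizer("The capital of France is", return_tensors="pt").input_ids
      for _ in range(20):                    # generate 20 tokens, one at a time
          with torch.no_grad():
              logits = model(ids).logits     # scores for every possible next token
          next_id = logits[0, -1].argmax()   # greedily take the single most likely one
          ids = torch.cat([ids, next_id.view(1, 1)], dim=1)

      print(tokenizer.decode(ids[0]))        # the "answer" is just the tokens picked above
      ```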

      Genuine threat - This comes in two parts, capability and amorality.

      Capability - We have no reason to believe that human-level intelligence is some sort of fundamental cap. If an AI is capable of performing novel AI research well enough to build a better AI, that better AI will be able to improve on the original design more than the first one could. This lets someone build a feedback loop of better and better AIs running faster and faster. We don't have any idea what the limits of these things are, but because human intelligence is probably not a cap, the limit is presumably a lot higher than us.
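      Purely as a toy model of that feedback loop, with every number invented (nobody knows the real improvement rate or where the ceiling is), the point is that the improvement compounds:

      ```python
      # Toy model of the recursive self-improvement argument. All numbers here
      # are invented for illustration; nobody knows the real improvement factor
      # or where the ceiling actually is.
      capability = 1.0          # "human-level" research ability, by definition
      improvement_factor = 1.2  # hypothetical: each generation designs one 20% better
      hard_limit = 1000.0       # stand-in for whatever physical limits actually exist

      generation = 0
      while capability < hard_limit:
          capability *= improvement_factor   # generation N designs generation N+1
          generation += 1

      print(f"toy model hits the assumed ceiling after {generation} generations")
      ```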

      Amorality - Despite being "smarter" than humans, the goals of any such AI will be whatever is programmed into the software. Doing things people would actually want is a very specific goal, one that requires understanding morality (which we don't), understanding concepts like what a person is (nobody knows how to make an AI that knows the difference between a person and a description of a person), and not having any bugs in the goal function (oh no). Even if the AI is smart enough to understand that its goal function is buggy, its goal will still be to do the thing specified by the buggy function, so it's not going to fix itself. Any goal that does not specifically value people and lives (which are very specific things we don't know how to specify) would prefer to disassemble us so it can use our atoms for something it actually cares about.
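      A toy way to see the "buggy goal function" point: an optimizer only ever optimizes the function it was actually given, never the one the programmer meant, so being "smart enough to notice the bug" doesn't help. The example below is entirely made up for illustration.

      ```python
      # Toy illustration of a buggy goal function. The programmer *meant*
      # "keep x close to 7", but a sign slip turned the goal into "get x as far
      # from 7 as possible". The optimizer never "notices" or fixes this;
      # its only notion of good and bad IS the buggy function it was given.
      def intended_goal(x):
          return -abs(x - 7)    # what the programmer wanted: best score at x = 7

      def buggy_goal(x):
          return abs(x - 7)     # what actually got written: rewarded for moving away

      candidates = range(-100, 101)
      best = max(candidates, key=buggy_goal)   # faithfully maximizes the buggy goal

      print("optimizer picks x =", best)                   # -100, as far from 7 as allowed
      print("score under the intended goal:", intended_goal(best))  # terrible, but irrelevant to it
      ```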

      Optimism - The current trajectory of AI research is to pump a ton of money into chasing capabilities that the current state of the art won't be able to reach, oversaturate a small market, and poison people's perceptions of AI capabilities for a generation. This has happened before (it's how previous AI winters went) and I think it will happen again. That will give people a lot more time to figure out those morality problems, if climate change doesn't kill us first.

      • MF_COOM [he/him]
        ·
        2 years ago

        Capability - We have no reason to believe that human-level intelligence is some sort of fundamental cap. If an AI is capable of performing novel AI research well enough to build a better AI, that better AI will be able to improve on the original design more than the first one could. This lets someone build a feedback loop of better and better AIs running faster and faster. We don't have any idea what the limits of these things are, but because human intelligence is probably not a cap, the limit is presumably a lot higher than us.

        This is the part I don't get. Where does the threat to humanity part come in? Like, how is it supposed to act out its amorality?