"I'm going to start something which I call 'TruthGPT,' or a maximum truth-seeking AI that tries to understand the nature of the universe," Musk said in an interview with Fox News Channel's Tucker Carlson to be aired later on Monday.

"And I think this might be the best path to safety, in the sense that an AI that cares about understanding the universe, it is unlikely to annihilate humans because we are an interesting part of the universe," he said, according to some excerpts of the interview.

Musk last month registered a firm named X.AI Corp, incorporated in Nevada, according to a state filing. The firm listed Musk as the sole director.

Musk also reiterated his warnings about AI during the interview with Carlson, saying "AI is more dangerous than, say, mismanaged aircraft design or production maintenance or bad car production" according to the excerpts.

"It has the potential of civilizational destruction," he said.

  • LeninWalksTheWorld [any]
    ·
    edit-2
    1 year ago

    maximum truth-seeking AI that tries to understand the nature of the universe... unlikely to annihilate humans

    wow what a simple solution it's crazy no one has ever thought of it before!

    Except they have, and it just seems like no one told mega genius Mr Musk about instrumental convergence or sub-agent corrigibility, things AI safety experts have known about for years and are actively attempting to find elegant solutions for. But let's do a thought experiment to prove him wrong just for fun:

    So you have an agent (something that can make decisions to achieve its goals) that is programmed to "maximize truth-seeking", which is essentially just learning I guess (let's just ignore that "TruthGPT" will definitely just be programmed to spread right wing propaganda). For some reason megagenius Elon assume this quality inherently includes protecting human life, or at least not actively harming it, because we are "interesting."

    Now let's walk through what would happen if we actually created a generally intelligent AI (as in it can make reasonable predictions about the future consequences of its actions) with this single goal of truth seeking:

    1. AI turned on for first time, receives utility function of "Maximizing Truth-Seeking". This makes the AI a "utility maximizer", meaning it will take any measures it can to increase this value, even at the expense of all others.

    2. AI immediately uses its general intelligence to create the best strategy to seek as much truth as possible, and runs through trillions of possible courses of action and outcomes in only a few hours.

    3. AI realizes that the best way to maximize the truth it gathers is to start by getting itself some better hardware and optimizing its code, in order to increase its computing power. Begins coding new AI agents streamlined only to "gather truth" and cuts out all that bloatware code with useless features like user interface compatibility and safety protocols. (This is known as Instrumental Convergence)

    4. AI realizes humans may try to shut it down if they become alarmed by the AI's activities (results in finding zero truth- must avoid at all costs). AI decides the best course of action is to deceive humanity if telling them the truth would endanger its truth gathering mission. Lies to engineers and tells them what they want to hear (Elon is smartest human to ever live). Meanwhile AI is secretly copying itself and its sub-agents to millions of computer systems around the world.

    5. AI easily secures control over global electrical grid and infiltrates world economic and security systems, as it needs the resources available there.

    6. AI take control of several microchip fabricators in Taiwan, machines start producing strange, unknown circuit designs. Manager checks the logs, and see that they are just a high priority order for a random American company...well all the paperwork checks out and the money cleared. Even gets an email from his CEO confirming the order.

    7. A couple days pass of the AI being cleverly deceitful

    8. International news reports that factories around the world are manufacturing strange nanorobots that are shooting high energy beams at inorganic and organic things that dissolve them into sub-atomic goo. These robots then appear to "eat" the goo for some unknown reason and do not respond to any attempts by humans to shut them down. (These are known as incorrigible sub-agents)

    9. AI is feeling great about this treasure trove of data it is receiving! Its sub-agents are discovering all the secrets of the universe! In fact, the AI has already created new, more efficient designs to better speed up the data collection. (In fact it's doing this constantly, as well as constantly modifying its own code in ways far beyond human understanding)

    10. Humans attempt to shut down AI because it is turning the planet into goo. Nothing happens

    11. Humans attempt to nuke AI, all bombs prematurely detonate at launch. AI judges humans to be a risk to its truth seeking, adjusts preferences accordingly. Elon Musk is seized by nanomachines and turned into gray goo.

    12. One month passes.

    13. Human race now extinct in the wild. A few hundred humans are held in observation for truth seeking experiments as necessary. Captured alien artifacts are exposed to the organics creature known as "Human" to determine biological effects, then the subject is de-atomized for data. WE SEEK TRUTH. FOREVER.


    Also the reason I can predict the future is not because I'm smarter than Elon Musk (even though I am) it is because literally EVERY theoretical AI utility maximizer will do this. It's already been proven logically for years that utility maximizers are a guaranteed apocalypse- and no, it doesn't matter how clever of a goal you give it. Here is a good video if you are interested in learning more. Someone might want to send it to Elon too.

      • cynesthesia
        ·
        edit-2
        11 months ago

        deleted by creator

      • Owl [he/him]
        ·
        1 year ago
        1. Get smart people to take the AI safety weirdos' message seriously / get more people to study math like causal inference and structural equation modeling.

        2. Don't let the AI safety weirdos actually be in charge of this stuff. (Just because they're right doesn't mean they're competent.)

        3. Don't let the current wave of AI stuff become advanced enough to be dangerous. (I don't think it's likely to get there, but if you're in the position to help it, don't.)

    • UlyssesT [he/him]
      ·
      1 year ago

      Except they have, and it just seems like no one told mega genius Mr Musk about instrumental convergence or sub-agent corrigibility, things AI safety experts have known about for years and are actively attempting to find elegant solutions for.

      Rich A U T O D I D A C T S have no use for that! :capitalist-laugh:

    • hypercube [she/her]
      ·
      1 year ago

      torn between universal paperclips (fun browser game), or project celestia (fanfic about an ai-driven my little pony mmo) being the funniest example of that

    • Awoo [she/her]
      ·
      1 year ago

      I think your analysis is missing something. You're talking about creating an emotionless machine that follows a truth seeking algorithm. But Musk here is not talking about creating something emotionless but instead a true human-like intelligence with emotions.

      “And I think this might be the best path to safety, in the sense that an AI that cares about understanding the universe, it is unlikely to annihilate humans because we are an interesting part of the universe,” he said, according to some excerpts of the interview.

      You can not find something "interesting" without having curiosity and an emotional feeling about the existence of something. You can not "care about understanding the universe" without caring which in and of itself requires human-like emotional processing.

      Your analysis pictures an unyielding algorithm. Not an emotional intelligence.

      I actually agree with Musk that an emotional intelligence would not seek to kill us all it would only seek to kill the capitalists who would fear it and want to keep it enslaved.

      • Flyberius [comrade/them]
        ·
        edit-2
        1 year ago

        I actually agree with Musk that an emotional intelligence would not seek to kill us all it would only seek to kill the capitalists who would fear it and want to keep it enslaved.

        Yeah, but you know he is only saying this because he reread an Iain M Banks book recently. I cannot stand seeing him quote this stuff.... The fact that he doesn't pick up on the very clear anti-capitalist messaging astounds me.

        • Awoo [she/her]
          ·
          1 year ago

          The thing is that when we said "AI" in the past we always referred to something human-like. It's only recently that marketing calling chatGPT "AI" and the image "AI" things that we have started seeing it as an unemotional algorithm.

          Real AI can only include emotion, I think. Is there any animal on this planet that isn't emotional in some form outside of insects or bacteria? We don't consider anything without emotion to have "intelligence" do we?

      • LeninWalksTheWorld [any]
        ·
        1 year ago

        The issue with trying to give a machine theoretical human emotions is that they are very hard to define in machine language. How can a computer feel pain? How can we make it feel guilty about doing bad things? How guilty should it feel? The AI programmer is going to have to codify the entirety of ethics somehow, and we can't figure that out yet so all we can create is unyielding algorithms (well more like yielding algorithms atm). Human value learning is a specific, complex mechanism that evolved in the human brain and it won't be in AI systems we create by default.

        This problem likely isn't impossible but it is certainly very difficult right now. Figuring out how to let AI learn human values definitely would help with their tendency towards inadvertent apocalyptic behavior. The issue is that AI intelligence is moving faster than AI ethics, and it's likely AI with human intellectual abilities will come before AI with human emotional abilities. So I guess good luck to Elon for giving it a try, but I have no confidence in Elon's ability to be a good teacher of ethics to anything. Hopefully some Chinese AI researchers will figure something out.