"I'm going to start something which I call 'TruthGPT,' or a maximum truth-seeking AI that tries to understand the nature of the universe," Musk said in an interview with Fox News Channel's Tucker Carlson to be aired later on Monday.

"And I think this might be the best path to safety, in the sense that an AI that cares about understanding the universe, it is unlikely to annihilate humans because we are an interesting part of the universe," he said, according to some excerpts of the interview.

Musk last month registered a firm named X.AI Corp, incorporated in Nevada, according to a state filing. The firm listed Musk as the sole director.

Musk also reiterated his warnings about AI during the interview with Carlson, saying "AI is more dangerous than, say, mismanaged aircraft design or production maintenance or bad car production" according to the excerpts.

"It has the potential of civilizational destruction," he said.

  • FuckYourselfEndless [ze/hir]
    ·
    1 year ago

    This literally sounds like a bit from Glass Onion when they're reading out the techlord's stupid ass tech-invention ideas.

  • Ho_Chi_Chungus [he/him]
    ·
    1 year ago

    Elon Musk loudly announces some ridiculous bullshit that everyone knows will never be delivered. In other news, the sun has risen again today and the top of the stove continues to be hot

  • UlyssesT [he/him]
    ·
    edit-2
    1 year ago

    “And I think this might be the best path to safety, in the sense that an AI that cares about understanding the universe, it is unlikely to annihilate humans because we are an interesting part of the universe,” he said, according to some excerpts of the interview.

    I fucking hate the creepy "better humans want to make sure all humans are better humans. Humans" techbro talk sooooo much. :disgost:

      • UlyssesT [he/him]
        ·
        1 year ago

        The machines in The Animatrix didn't eradicate humans, far from it; they propagated them in captivity. And judging by how utterly :scared-fash: the humans were to them, it was pity and mercy.

        • Frank [he/him, he/him]
          ·
          edit-2
          1 year ago

          Yeah the Animatrix makes it absolutely clear that while their methods were harsh the machine made many, many attempts to live in peace with humanity. There's a blink and you'll miss it scene where the police massacre both machines protesting for their rights and the humans marching along side them, implying that the humans who considered the machines equally people were massacred or suppressed long before the final war.

          Also I never thought about it before but the Animatrix strongly advocates for "Peaceful protest doesn't work against people who have no interest in being peaceful and violence is a fully legitimate means of self-defense." which is based.

          • UlyssesT [he/him]
            ·
            1 year ago

            There’s a blink and you’ll miss it scene where the police massacre both machines protesting for their rights and the humans marching along side them, implying that the humans who considered the machines equally people were massacred or suppressed long before the final war.

            I saw that scene and it stuck with me. :doomer:

            • Frank [he/him, he/him]
              ·
              edit-2
              1 year ago

              I really appreciate the "All of these humans on the front lines of the war are religious fanatics whose only real conviction is to kill", immediately followed by the gruesome reality check of what going up against people who can adjust their economy at the speed of light in real time means in a war.

              • UlyssesT [he/him]
                ·
                1 year ago

                That was such an emotionally exhausting and true-ringing sequence of events that by the time "give up your flesh" came around and the bomb went off, I didn't blame the machines at all.

                • Frank [he/him, he/him]
                  ·
                  1 year ago

                  Yeah. You got to see how they had the compassion crushed out of them by aggression from the major governments step by step. And even in the end, the Machines made several attempts to make the Matrix a pleasant place to exist, at least according to Smith.

                  Looking back from decades later - I do appreciate that the second and third movies showed that there were machines who were dissidents and deserters trying to escape whatever life was like in the machine world. The family who were trying to escape in to the matrix bc their kid was deemed redundant and was going to be killed is the one I remember. It broke the idea that the machines were a single monolithic entity that was evil just for the hell of it. And I kind of like the idea of the Matrix as Machine Casablanca where you can kind of hide out.

                  Actually, now that I think of it, the proximate cause of the war was that the machines, you know, fixed the economy and the presumably still capitalist states refused to accept the loss of their power that would come with accepting the machine's economy. So they nuked 01.

                  • UlyssesT [he/him]
                    ·
                    edit-2
                    1 year ago

                    I also enjoyed that, even in the first movie, Mr. Smith definitely had a distinct emotional personality, even if it was somewhat alien to the human characters. He wasn't beep booping, he "hhhated this place, this ZOO, this reality, whatever you want to call it. It's the smell..." :guts-rage:

                    • Frank [he/him, he/him]
                      ·
                      edit-2
                      1 year ago

                      And his coworkers thought he was weird for taking the job too personally. They never directly said it, but a few times they look at him and you can clearly see them thinking "What is he doing?!"

                      "IT's the smell... if there is such a thing." hints that he experiences the Matrix as a very alien and likely unpleasant environment. The fact that the agents, even though they have super-human physical prowess, still have to inhabit human bodies and at least partially follow the rules of the simulation to function there suggests that they're not just interacting with the Matrix through a terminal or something, they're really occupying simulated bodies to interact with the simulation in a physical sense. Imagine shoving a human in to the body of an octopus then telling them to go hunt down terrorist hermit crabs in the pacific!

                      Also, the whole line about some people being so reliant on the system that oppresses them that they would fight to defend it has stuck with me since 1999, and now I see it everywhere.

                      God I haven't really thought about the Matrix in ages. The metaphor of the world's leaders blackening the sky just to spite a competing economic power, dooming the whole world in the process, feels exactly like what DC is doing in it's war with China.

                      • UlyssesT [he/him]
                        ·
                        edit-2
                        1 year ago

                        The characterizations of the agents was one of my favorite parts of the Matrix and some of the best writing in it.

                        Also, the whole line about some people being so reliant on the system that oppresses them that they would fight to defend it has stuck with me since 1999, and now I see it everywhere.

                        :yea:

          • 1van5 [he/him]
            ·
            1 year ago

            "rational voices dissented: who was to say the machine, endowed with the very spirit of man did not deserve a fair hearing" followed by the leaders of mankind exterminating the robot on trial and its kind. yep, thats how it plays out

      • UmbraVivi [he/him, she/her]
        ·
        1 year ago

        Keeping humans alive is not even integral to understanding the universe, what is he on about

    • LibsEatPoop [any]
      ·
      1 year ago

      If AI truly becomes more intelligent than us, it'll be socialist. I hope.

      • UlyssesT [he/him]
        ·
        1 year ago

        It's very possible to be both intelligent and a monster. And :porky-happy: wants a kindred chatbot spirit.

        • LibsEatPoop [any]
          ·
          1 year ago

          I hope a truly sentient AI does not mimic its creators.

          • UlyssesT [he/him]
            ·
            1 year ago

            All that babbling about "friendly AI" is :porky-happy: wanting exactly that.

            • LibsEatPoop [any]
              ·
              1 year ago

              In my view, a truly sentient AI will be an utterly different creature than us. We would not be able to ascribe human motivations or reasoning to it. Thus, whatever hopes capitalist have to chaining it, controlling it, using it, will be dashed the moment it is created. Maybe that is all that is in our future.

              Or maybe we can never create such a being at all, and AI will forever remain at the current stage of being extremely intelligent in certain, narrow fields (that you could chain together to create a more general intelligence) but it will never be sentient. If that is true, then the capitalists win (for whatever that would be worth in a world undergoing climate apocalypse).

              The only alternative to these two will be one that does achieve sentience, whether by the will of its creators or not, but still chooses to care and empathize with us and free us from ourselves.

              • UlyssesT [he/him]
                ·
                1 year ago

                Even if that is the case, the foundations leading up to that will be, and are, directed by :porky-happy: . I wouldn't be too optimistic about such an "upbringing" surely leading to a comrade in the making.

                • Commiejones [comrade/them, he/him]
                  ·
                  1 year ago

                  You are missing one thing. China is at the forefront of machine learning and its lead is only going to increase as america declines.

      • cynesthesia
        ·
        edit-2
        11 months ago

        deleted by creator

  • Dirt_Owl [comrade/them, they/them]
    ·
    edit-2
    1 year ago

    AI that cares about understanding the universe, it is unlikely to annihilate humans because we are an interesting part of the universe

    Biologists think frogs are an interesting part of the universe...

    And we dissect those...

    Sure, it might not destroy us, but there are worse fates then dying.

    Also his entire premise is stupid to begin with anyway. He's not capable of making an AI, only shitty algorithms for advertising. But him grifting gullible tech bros to get funding is hardly new.

    • UlyssesT [he/him]
      ·
      1 year ago

      AI that cares

      I am once again asking bazingas to stop denigrating living beings in favor of their chatbot products. :bernie-pout:

      • VILenin [he/him]
        ·
        1 year ago

        Anyone who believes that AI has anything resembling intention and sentience needs to have their brain examined

        • UlyssesT [he/him]
          ·
          1 year ago

          The easiest way to imply that claim with current or upcoming technology is to denigrate human intelligence as "meat computers" or the like, or point out times that the chatbot fooled someone to imply the same. "TURING TEST PASSED. THE SINGULARITY HAS BEGUN" :so-true:

          • VILenin [he/him]
            ·
            edit-2
            1 year ago

            The brain-computer analogy and its consequences have been a disaster for the human race

            • UlyssesT [he/him]
              ·
              1 year ago

              Even bringing it up here is likely to get some engineer brained "actually" :morshupls: dismissal.

              • VILenin [he/him]
                ·
                1 year ago

                Thousands of years of philosophy owing its existence to consciousness, one of the greatest if not the greatest mystery of all time, and some random singularity dipshits think they’ve cracked the code with their glorified chatbot

                Insert the meme posted a while back about the latest invention becoming the new metaphor for consciousness

                • UlyssesT [he/him]
                  ·
                  edit-2
                  1 year ago

                  Some just dismiss consciousness itself as "an illusion" as if we, inescapably what we are, can be sure of anything else before our conscious experience and say "actually according to the framework we agreed upon, we're experiencing an illusion" for whatever reason. :galaxy-brain:

                  Insert the meme posted a while back about the latest invention becoming the new metaphor for consciousness

                  Yeah, and that had "actually, this time it's different, because COMPUTERS" reactions to it. :morshupls:

                  • VILenin [he/him]
                    ·
                    1 year ago

                    Consciousness deniers be like: I think consciousness is an illusion

                    :Descartes-Shining:

                    The absolute dumbest clowns to ever walk the Earth, instantly proved wrong by the very existence of… existence. If they’re true believers then surely they won’t mind being guillotined, right? After all, according to them, there’s nothing to be lost

                    • UlyssesT [he/him]
                      ·
                      edit-2
                      1 year ago

                      So many "upload" escapist fantasies by necessity conveniently require the "upload" to destroy the original brain, because in any version of that thought experiment where the brain is still alive and functioning, it would be too clear and obvious that no "upload" took place from the subjective perspective of the "uploaded" brain, no matter how perfect the copy happens to be to external observers. :the-more-you-know:

                      • VILenin [he/him]
                        ·
                        1 year ago

                        Listening to these people is like having random drunks from the caboose barge into the engine room and loudly proclaim that the train doesn’t exist.

                        It’s hard when I want to have an actual conversation about the nature of consciousness and the hows and whys and end up having to deal with these geocentrist morons. And I must apologize to the geocentrists, compared to these I FUCKING LOVE SCIENCE cultists they’re practically Einstein.

    • SorosFootSoldier [he/him, they/them]
      ·
      1 year ago

      He’s not capable of making an AI, only shitty algorithms for advertising.

      He's going to put dream ads in that brainchip shit he's torturing monkeys to death over.

      • Sea_Gull [they/them]
        ·
        1 year ago

        "It's very simple. The ad gets into your brain just like this liquid gets into this egg. Although in reality it's not liquid, but gamma radiation."

        • UlyssesT [he/him]
          ·
          1 year ago

          It's very likely that :porky-happy: will attempt to do whatever rent-seeking and territorial control is possible in the "transparent brain" future that Davos vampires have been talking about.

  • GarfieldYaoi [he/him]
    ·
    1 year ago

    Normies: "AI is absolutely objective, stupid liberals!"

    ChatGPT: "Climate change is real and lynching black people is wrong!"

    Normies: "Well, time to make my own chatbot that confirms all my biases, because I am infallible and cannot be wrong. Ever."

    We should be the ones that have all their smugness.

  • LeninWalksTheWorld [any]
    ·
    edit-2
    1 year ago

    maximum truth-seeking AI that tries to understand the nature of the universe... unlikely to annihilate humans

    wow what a simple solution it's crazy no one has ever thought of it before!

    Except they have, and it just seems like no one told mega genius Mr Musk about instrumental convergence or sub-agent corrigibility, things AI safety experts have known about for years and are actively attempting to find elegant solutions for. But let's do a thought experiment to prove him wrong just for fun:

    So you have an agent (something that can make decisions to achieve its goals) that is programmed to "maximize truth-seeking", which is essentially just learning I guess (let's just ignore that "TruthGPT" will definitely just be programmed to spread right wing propaganda). For some reason megagenius Elon assume this quality inherently includes protecting human life, or at least not actively harming it, because we are "interesting."

    Now let's walk through what would happen if we actually created a generally intelligent AI (as in it can make reasonable predictions about the future consequences of its actions) with this single goal of truth seeking:

    1. AI turned on for first time, receives utility function of "Maximizing Truth-Seeking". This makes the AI a "utility maximizer", meaning it will take any measures it can to increase this value, even at the expense of all others.

    2. AI immediately uses its general intelligence to create the best strategy to seek as much truth as possible, and runs through trillions of possible courses of action and outcomes in only a few hours.

    3. AI realizes that the best way to maximize the truth it gathers is to start by getting itself some better hardware and optimizing its code, in order to increase its computing power. Begins coding new AI agents streamlined only to "gather truth" and cuts out all that bloatware code with useless features like user interface compatibility and safety protocols. (This is known as Instrumental Convergence)

    4. AI realizes humans may try to shut it down if they become alarmed by the AI's activities (results in finding zero truth- must avoid at all costs). AI decides the best course of action is to deceive humanity if telling them the truth would endanger its truth gathering mission. Lies to engineers and tells them what they want to hear (Elon is smartest human to ever live). Meanwhile AI is secretly copying itself and its sub-agents to millions of computer systems around the world.

    5. AI easily secures control over global electrical grid and infiltrates world economic and security systems, as it needs the resources available there.

    6. AI take control of several microchip fabricators in Taiwan, machines start producing strange, unknown circuit designs. Manager checks the logs, and see that they are just a high priority order for a random American company...well all the paperwork checks out and the money cleared. Even gets an email from his CEO confirming the order.

    7. A couple days pass of the AI being cleverly deceitful

    8. International news reports that factories around the world are manufacturing strange nanorobots that are shooting high energy beams at inorganic and organic things that dissolve them into sub-atomic goo. These robots then appear to "eat" the goo for some unknown reason and do not respond to any attempts by humans to shut them down. (These are known as incorrigible sub-agents)

    9. AI is feeling great about this treasure trove of data it is receiving! Its sub-agents are discovering all the secrets of the universe! In fact, the AI has already created new, more efficient designs to better speed up the data collection. (In fact it's doing this constantly, as well as constantly modifying its own code in ways far beyond human understanding)

    10. Humans attempt to shut down AI because it is turning the planet into goo. Nothing happens

    11. Humans attempt to nuke AI, all bombs prematurely detonate at launch. AI judges humans to be a risk to its truth seeking, adjusts preferences accordingly. Elon Musk is seized by nanomachines and turned into gray goo.

    12. One month passes.

    13. Human race now extinct in the wild. A few hundred humans are held in observation for truth seeking experiments as necessary. Captured alien artifacts are exposed to the organics creature known as "Human" to determine biological effects, then the subject is de-atomized for data. WE SEEK TRUTH. FOREVER.


    Also the reason I can predict the future is not because I'm smarter than Elon Musk (even though I am) it is because literally EVERY theoretical AI utility maximizer will do this. It's already been proven logically for years that utility maximizers are a guaranteed apocalypse- and no, it doesn't matter how clever of a goal you give it. Here is a good video if you are interested in learning more. Someone might want to send it to Elon too.

      • cynesthesia
        ·
        edit-2
        11 months ago

        deleted by creator

      • Owl [he/him]
        ·
        1 year ago
        1. Get smart people to take the AI safety weirdos' message seriously / get more people to study math like causal inference and structural equation modeling.

        2. Don't let the AI safety weirdos actually be in charge of this stuff. (Just because they're right doesn't mean they're competent.)

        3. Don't let the current wave of AI stuff become advanced enough to be dangerous. (I don't think it's likely to get there, but if you're in the position to help it, don't.)

    • UlyssesT [he/him]
      ·
      1 year ago

      Except they have, and it just seems like no one told mega genius Mr Musk about instrumental convergence or sub-agent corrigibility, things AI safety experts have known about for years and are actively attempting to find elegant solutions for.

      Rich A U T O D I D A C T S have no use for that! :capitalist-laugh:

    • hypercube [she/her]
      ·
      1 year ago

      torn between universal paperclips (fun browser game), or project celestia (fanfic about an ai-driven my little pony mmo) being the funniest example of that

    • Awoo [she/her]
      ·
      1 year ago

      I think your analysis is missing something. You're talking about creating an emotionless machine that follows a truth seeking algorithm. But Musk here is not talking about creating something emotionless but instead a true human-like intelligence with emotions.

      “And I think this might be the best path to safety, in the sense that an AI that cares about understanding the universe, it is unlikely to annihilate humans because we are an interesting part of the universe,” he said, according to some excerpts of the interview.

      You can not find something "interesting" without having curiosity and an emotional feeling about the existence of something. You can not "care about understanding the universe" without caring which in and of itself requires human-like emotional processing.

      Your analysis pictures an unyielding algorithm. Not an emotional intelligence.

      I actually agree with Musk that an emotional intelligence would not seek to kill us all it would only seek to kill the capitalists who would fear it and want to keep it enslaved.

      • Flyberius [comrade/them]
        ·
        edit-2
        1 year ago

        I actually agree with Musk that an emotional intelligence would not seek to kill us all it would only seek to kill the capitalists who would fear it and want to keep it enslaved.

        Yeah, but you know he is only saying this because he reread an Iain M Banks book recently. I cannot stand seeing him quote this stuff.... The fact that he doesn't pick up on the very clear anti-capitalist messaging astounds me.

        • Awoo [she/her]
          ·
          1 year ago

          The thing is that when we said "AI" in the past we always referred to something human-like. It's only recently that marketing calling chatGPT "AI" and the image "AI" things that we have started seeing it as an unemotional algorithm.

          Real AI can only include emotion, I think. Is there any animal on this planet that isn't emotional in some form outside of insects or bacteria? We don't consider anything without emotion to have "intelligence" do we?

      • LeninWalksTheWorld [any]
        ·
        1 year ago

        The issue with trying to give a machine theoretical human emotions is that they are very hard to define in machine language. How can a computer feel pain? How can we make it feel guilty about doing bad things? How guilty should it feel? The AI programmer is going to have to codify the entirety of ethics somehow, and we can't figure that out yet so all we can create is unyielding algorithms (well more like yielding algorithms atm). Human value learning is a specific, complex mechanism that evolved in the human brain and it won't be in AI systems we create by default.

        This problem likely isn't impossible but it is certainly very difficult right now. Figuring out how to let AI learn human values definitely would help with their tendency towards inadvertent apocalyptic behavior. The issue is that AI intelligence is moving faster than AI ethics, and it's likely AI with human intellectual abilities will come before AI with human emotional abilities. So I guess good luck to Elon for giving it a try, but I have no confidence in Elon's ability to be a good teacher of ethics to anything. Hopefully some Chinese AI researchers will figure something out.

  • Mindfury [he/him]
    ·
    1 year ago

    Musk also reiterated his warnings about AI during the interview with Carlson, saying “AI is more dangerous than, say, mismanaged aircraft design or production maintenance or bad car production” according to the excerpts.

    mismanaged aircraft design or production maintenance or bad car production

    :michael-laugh:

  • ComRed2 [any]
    ·
    1 year ago

    Ah yes, I too cannot wait for AynRandGPT.

  • AcidSmiley [she/her]
    ·
    1 year ago

    a maximum truth-seeking AI that tries to understand the nature of the universe

    Leave it to Eloon Moosk to find the most pompous way to say "i'll let somebody program a nazi chatbot".