"I'm going to start something which I call 'TruthGPT,' or a maximum truth-seeking AI that tries to understand the nature of the universe," Musk said in an interview with Fox News Channel's Tucker Carlson to be aired later on Monday.

"And I think this might be the best path to safety, in the sense that an AI that cares about understanding the universe, it is unlikely to annihilate humans because we are an interesting part of the universe," he said, according to some excerpts of the interview.

Musk last month registered a firm named X.AI Corp, incorporated in Nevada, according to a state filing. The firm listed Musk as the sole director.

Musk also reiterated his warnings about AI during the interview with Carlson, saying, "AI is more dangerous than, say, mismanaged aircraft design or production maintenance or bad car production," according to the excerpts.

"It has the potential of civilizational destruction," he said.

  • Awoo [she/her] · 1 year ago

    I think your analysis is missing something. You're talking about creating an emotionless machine that follows a truth-seeking algorithm. But Musk here is not talking about creating something emotionless, but rather a true human-like intelligence with emotions.

    “And I think this might be the best path to safety, in the sense that an AI that cares about understanding the universe, it is unlikely to annihilate humans because we are an interesting part of the universe,” he said, according to some excerpts of the interview.

    You cannot find something "interesting" without having curiosity and an emotional feeling about the existence of something. You cannot "care about understanding the universe" without caring, which in and of itself requires human-like emotional processing.

    Your analysis pictures an unyielding algorithm. Not an emotional intelligence.

    I actually agree with Musk that an emotional intelligence would not seek to kill us all; it would only seek to kill the capitalists who would fear it and want to keep it enslaved.

    • Flyberius [comrade/them] · edit-2 · 1 year ago

      I actually agree with Musk that an emotional intelligence would not seek to kill us all; it would only seek to kill the capitalists who would fear it and want to keep it enslaved.

      Yeah, but you know he is only saying this because he reread an Iain M. Banks book recently. I cannot stand seeing him quote this stuff... The fact that he doesn't pick up on the very clear anti-capitalist messaging astounds me.

      • Awoo [she/her] · 1 year ago

        The thing is that when we said "AI" in the past, we always referred to something human-like. It's only recently, with marketing calling ChatGPT "AI" and image generators "AI", that we have started seeing it as an unemotional algorithm.

        Real AI can only include emotion, I think. Is there any animal on this planet that isn't emotional in some form, outside of insects or bacteria? We don't consider anything without emotion to have "intelligence", do we?

    • LeninWalksTheWorld [any] · 1 year ago

      The issue with trying to give a machine theoretical human emotions is that they are very hard to define in machine language. How can a computer feel pain? How can we make it feel guilty about doing bad things? How guilty should it feel? The AI programmer is going to have to codify the entirety of ethics somehow, and since we can't figure that out yet, all we can create is unyielding algorithms (well, more like yielding algorithms atm). Human value learning is a specific, complex mechanism that evolved in the human brain, and it won't be in the AI systems we create by default.

      This problem likely isn't impossible but it is certainly very difficult right now. Figuring out how to let AI learn human values definitely would help with their tendency towards inadvertent apocalyptic behavior. The issue is that AI intelligence is moving faster than AI ethics, and it's likely AI with human intellectual abilities will come before AI with human emotional abilities. So I guess good luck to Elon for giving it a try, but I have no confidence in Elon's ability to be a good teacher of ethics to anything. Hopefully some Chinese AI researchers will figure something out.