"I'm going to start something which I call 'TruthGPT,' or a maximum truth-seeking AI that tries to understand the nature of the universe," Musk said in an interview with Fox News Channel's Tucker Carlson to be aired later on Monday.

"And I think this might be the best path to safety, in the sense that an AI that cares about understanding the universe, it is unlikely to annihilate humans because we are an interesting part of the universe," he said, according to some excerpts of the interview.

Musk last month registered a firm named X.AI Corp, incorporated in Nevada, according to a state filing. The firm listed Musk as the sole director.

Musk also reiterated his warnings about AI during the interview with Carlson, saying "AI is more dangerous than, say, mismanaged aircraft design or production maintenance or bad car production," according to the excerpts.

"It has the potential of civilizational destruction," he said.

    • Owl [he/him]
      ·
      1 year ago
      1. Get smart people to take the AI safety weirdos' message seriously / get more people to study math like causal inference and structural equation modeling.

      2. Don't let the AI safety weirdos actually be in charge of this stuff. (Just because they're right doesn't mean they're competent.)

      3. Don't let the current wave of AI stuff become advanced enough to be dangerous. (I don't think it's likely to get there, but if you're in the position to help it, don't.)
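For readers unfamiliar with the math the commenter recommends in point 1: a structural equation model encodes assumed causal relationships as equations, and causal inference asks when the coefficients of those equations can be recovered from data. A toy sketch (not from the comment; the model and all names here are illustrative) with a single exogenous cause and a linear effect:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Structural equations for an assumed causal graph x -> y:
#   x := noise           (exogenous)
#   y := 2.0 * x + noise (structural coefficient of interest is 2.0)
x = rng.normal(size=n)
y = 2.0 * x + rng.normal(size=n)

# With no confounding, ordinary least squares recovers the
# structural coefficient from observational data.
design = np.column_stack([x, np.ones(n)])
beta, _intercept = np.linalg.lstsq(design, y, rcond=None)[0]
print(round(float(beta), 1))
```

The recovered `beta` lands near the true value of 2.0. The interesting part of causal inference is the cases this sketch dodges: when a hidden confounder drives both `x` and `y`, the same regression silently returns a biased coefficient, which is why identifying assumptions matter.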