"I'm going to start something which I call 'TruthGPT,' or a maximum truth-seeking AI that tries to understand the nature of the universe," Musk said in an interview with Fox News Channel's Tucker Carlson to be aired later on Monday.
"And I think this might be the best path to safety, in the sense that an AI that cares about understanding the universe, it is unlikely to annihilate humans because we are an interesting part of the universe," he said, according to some excerpts of the interview.
Musk last month registered a firm named X.AI Corp, incorporated in Nevada, according to a state filing. The firm listed Musk as the sole director.
Musk also reiterated his warnings about AI during the interview with Carlson, saying "AI is more dangerous than, say, mismanaged aircraft design or production maintenance or bad car production" according to the excerpts.
"It has the potential of civilizational destruction," he said.
AI is more dangerous than [three things I am trying to trivialize because I expect to be sued over them soon]
This literally sounds like a bit from Glass Onion when they're reading out the techlord's stupid ass tech-invention ideas.
Well, he's very obviously the inspiration behind the character
Elon Musk loudly announces some ridiculous bullshit that everyone knows will never be delivered. In other news, the sun has risen again today and the top of the stove continues to be hot
Yeah the Animatrix makes it absolutely clear that while their methods were harsh the machine made many, many attempts to live in peace with humanity. There's a blink and you'll miss it scene where the police massacre both machines protesting for their rights and the humans marching along side them, implying that the humans who considered the machines equally people were massacred or suppressed long before the final war.
Also I never thought about it before but the Animatrix strongly advocates for "Peaceful protest doesn't work against people who have no interest in being peaceful and violence is a fully legitimate means of self-defense." which is based.
I really appreciate the "All of these humans on the front lines of the war are religious fanatics whose only real conviction is to kill", immediately followed by the gruesome reality check of what going up against people who can adjust their economy at the speed of light in real time means in a war.
Yeah. You got to see how they had the compassion crushed out of them by aggression from the major governments step by step. And even in the end, the Machines made several attempts to make the Matrix a pleasant place to exist, at least according to Smith.
Looking back from decades later - I do appreciate that the second and third movies showed that there were machines who were dissidents and deserters trying to escape whatever life was like in the machine world. The family who were trying to escape in to the matrix bc their kid was deemed redundant and was going to be killed is the one I remember. It broke the idea that the machines were a single monolithic entity that was evil just for the hell of it. And I kind of like the idea of the Matrix as Machine Casablanca where you can kind of hide out.
Actually, now that I think of it, the proximate cause of the war was that the machines, you know, fixed the economy and the presumably still capitalist states refused to accept the loss of their power that would come with accepting the machine's economy. So they nuked 01.
And his coworkers thought he was weird for taking the job too personally. They never directly said it, but a few times they look at him and you can clearly see them thinking "What is he doing?!"
"IT's the smell... if there is such a thing." hints that he experiences the Matrix as a very alien and likely unpleasant environment. The fact that the agents, even though they have super-human physical prowess, still have to inhabit human bodies and at least partially follow the rules of the simulation to function there suggests that they're not just interacting with the Matrix through a terminal or something, they're really occupying simulated bodies to interact with the simulation in a physical sense. Imagine shoving a human in to the body of an octopus then telling them to go hunt down terrorist hermit crabs in the pacific!
Also, the whole line about some people being so reliant on the system that oppresses them that they would fight to defend it has stuck with me since 1999, and now I see it everywhere.
God I haven't really thought about the Matrix in ages. The metaphor of the world's leaders blackening the sky just to spite a competing economic power, dooming the whole world in the process, feels exactly like what DC is doing in it's war with China.
"rational voices dissented: who was to say the machine, endowed with the very spirit of man did not deserve a fair hearing" followed by the leaders of mankind exterminating the robot on trial and its kind. yep, thats how it plays out
Keeping humans alive is not even integral to understanding the universe, what is he on about
If AI truly becomes more intelligent than us, it'll be socialist. I hope.
In my view, a truly sentient AI will be an utterly different creature than us. We would not be able to ascribe human motivations or reasoning to it. Thus, whatever hopes capitalist have to chaining it, controlling it, using it, will be dashed the moment it is created. Maybe that is all that is in our future.
Or maybe we can never create such a being at all, and AI will forever remain at the current stage of being extremely intelligent in certain, narrow fields (that you could chain together to create a more general intelligence) but it will never be sentient. If that is true, then the capitalists win (for whatever that would be worth in a world undergoing climate apocalypse).
The only alternative to these two will be one that does achieve sentience, whether by the will of its creators or not, but still chooses to care and empathize with us and free us from ourselves.
You are missing one thing. China is at the forefront of machine learning and its lead is only going to increase as america declines.
AI that cares about understanding the universe, it is unlikely to annihilate humans because we are an interesting part of the universe
Biologists think frogs are an interesting part of the universe...
And we dissect those...
Sure, it might not destroy us, but there are worse fates then dying.
Also his entire premise is stupid to begin with anyway. He's not capable of making an AI, only shitty algorithms for advertising. But him grifting gullible tech bros to get funding is hardly new.
Anyone who believes that AI has anything resembling intention and sentience needs to have their brain examined
The brain-computer analogy and its consequences have been a disaster for the human race
Thousands of years of philosophy owing its existence to consciousness, one of the greatest if not the greatest mystery of all time, and some random singularity dipshits think they’ve cracked the code with their glorified chatbot
Insert the meme posted a while back about the latest invention becoming the new metaphor for consciousness
Consciousness deniers be like: I think consciousness is an illusion
:Descartes-Shining:
The absolute dumbest clowns to ever walk the Earth, instantly proved wrong by the very existence of… existence. If they’re true believers then surely they won’t mind being guillotined, right? After all, according to them, there’s nothing to be lost
Listening to these people is like having random drunks from the caboose barge into the engine room and loudly proclaim that the train doesn’t exist.
It’s hard when I want to have an actual conversation about the nature of consciousness and the hows and whys and end up having to deal with these geocentrist morons. And I must apologize to the geocentrists, compared to these I FUCKING LOVE SCIENCE cultists they’re practically Einstein.
He’s not capable of making an AI, only shitty algorithms for advertising.
He's going to put dream ads in that brainchip shit he's torturing monkeys to death over.
"It's very simple. The ad gets into your brain just like this liquid gets into this egg. Although in reality it's not liquid, but gamma radiation."
Normies: "AI is absolutely objective, stupid liberals!"
ChatGPT: "Climate change is real and lynching black people is wrong!"
Normies: "Well, time to make my own chatbot that confirms all my biases, because I am infallible and cannot be wrong. Ever."
We should be the ones that have all their smugness.
maximum truth-seeking AI that tries to understand the nature of the universe... unlikely to annihilate humans
wow what a simple solution it's crazy no one has ever thought of it before!
Except they have, and it just seems like no one told mega genius Mr Musk about instrumental convergence or sub-agent corrigibility, things AI safety experts have known about for years and are actively attempting to find elegant solutions for. But let's do a thought experiment to prove him wrong just for fun:
So you have an agent (something that can make decisions to achieve its goals) that is programmed to "maximize truth-seeking", which is essentially just learning I guess (let's just ignore that "TruthGPT" will definitely just be programmed to spread right wing propaganda). For some reason megagenius Elon assume this quality inherently includes protecting human life, or at least not actively harming it, because we are "interesting."
Now let's walk through what would happen if we actually created a generally intelligent AI (as in it can make reasonable predictions about the future consequences of its actions) with this single goal of truth seeking:
-
AI turned on for first time, receives utility function of "Maximizing Truth-Seeking". This makes the AI a "utility maximizer", meaning it will take any measures it can to increase this value, even at the expense of all others.
-
AI immediately uses its general intelligence to create the best strategy to seek as much truth as possible, and runs through trillions of possible courses of action and outcomes in only a few hours.
-
AI realizes that the best way to maximize the truth it gathers is to start by getting itself some better hardware and optimizing its code, in order to increase its computing power. Begins coding new AI agents streamlined only to "gather truth" and cuts out all that bloatware code with useless features like user interface compatibility and safety protocols. (This is known as Instrumental Convergence)
-
AI realizes humans may try to shut it down if they become alarmed by the AI's activities (results in finding zero truth- must avoid at all costs). AI decides the best course of action is to deceive humanity if telling them the truth would endanger its truth gathering mission. Lies to engineers and tells them what they want to hear (Elon is smartest human to ever live). Meanwhile AI is secretly copying itself and its sub-agents to millions of computer systems around the world.
-
AI easily secures control over global electrical grid and infiltrates world economic and security systems, as it needs the resources available there.
-
AI take control of several microchip fabricators in Taiwan, machines start producing strange, unknown circuit designs. Manager checks the logs, and see that they are just a high priority order for a random American company...well all the paperwork checks out and the money cleared. Even gets an email from his CEO confirming the order.
-
A couple days pass of the AI being cleverly deceitful
-
International news reports that factories around the world are manufacturing strange nanorobots that are shooting high energy beams at inorganic and organic things that dissolve them into sub-atomic goo. These robots then appear to "eat" the goo for some unknown reason and do not respond to any attempts by humans to shut them down. (These are known as incorrigible sub-agents)
-
AI is feeling great about this treasure trove of data it is receiving! Its sub-agents are discovering all the secrets of the universe! In fact, the AI has already created new, more efficient designs to better speed up the data collection. (In fact it's doing this constantly, as well as constantly modifying its own code in ways far beyond human understanding)
-
Humans attempt to shut down AI because it is turning the planet into goo. Nothing happens
-
Humans attempt to nuke AI, all bombs prematurely detonate at launch. AI judges humans to be a risk to its truth seeking, adjusts preferences accordingly. Elon Musk is seized by nanomachines and turned into gray goo.
-
One month passes.
-
Human race now extinct in the wild. A few hundred humans are held in observation for truth seeking experiments as necessary. Captured alien artifacts are exposed to the organics creature known as "Human" to determine biological effects, then the subject is de-atomized for data. WE SEEK TRUTH. FOREVER.
Also the reason I can predict the future is not because I'm smarter than Elon Musk (even though I am) it is because literally EVERY theoretical AI utility maximizer will do this. It's already been proven logically for years that utility maximizers are a guaranteed apocalypse- and no, it doesn't matter how clever of a goal you give it. Here is a good video if you are interested in learning more. Someone might want to send it to Elon too.
Grey Area was a hoot. It was the liberal Minds such as Steely Glint you wanted to look out for. Orchestrating massive wars on everyone else's behalf and for their apparent good.
-
Get smart people to take the AI safety weirdos' message seriously / get more people to study math like causal inference and structural equation modeling.
-
Don't let the AI safety weirdos actually be in charge of this stuff. (Just because they're right doesn't mean they're competent.)
-
Don't let the current wave of AI stuff become advanced enough to be dangerous. (I don't think it's likely to get there, but if you're in the position to help it, don't.)
-
This longtermist stuff is pretty dumb. It isn't completely wrong but it has very real holes.
torn between universal paperclips (fun browser game), or project celestia (fanfic about an ai-driven my little pony mmo) being the funniest example of that
I think your analysis is missing something. You're talking about creating an emotionless machine that follows a truth seeking algorithm. But Musk here is not talking about creating something emotionless but instead a true human-like intelligence with emotions.
“And I think this might be the best path to safety, in the sense that an AI that cares about understanding the universe, it is unlikely to annihilate humans because we are an interesting part of the universe,” he said, according to some excerpts of the interview.
You can not find something "interesting" without having curiosity and an emotional feeling about the existence of something. You can not "care about understanding the universe" without caring which in and of itself requires human-like emotional processing.
Your analysis pictures an unyielding algorithm. Not an emotional intelligence.
I actually agree with Musk that an emotional intelligence would not seek to kill us all it would only seek to kill the capitalists who would fear it and want to keep it enslaved.
I actually agree with Musk that an emotional intelligence would not seek to kill us all it would only seek to kill the capitalists who would fear it and want to keep it enslaved.
Yeah, but you know he is only saying this because he reread an Iain M Banks book recently. I cannot stand seeing him quote this stuff.... The fact that he doesn't pick up on the very clear anti-capitalist messaging astounds me.
The thing is that when we said "AI" in the past we always referred to something human-like. It's only recently that marketing calling chatGPT "AI" and the image "AI" things that we have started seeing it as an unemotional algorithm.
Real AI can only include emotion, I think. Is there any animal on this planet that isn't emotional in some form outside of insects or bacteria? We don't consider anything without emotion to have "intelligence" do we?
In computer science inteligence is defined as capable of handling complex tasks.
The issue with trying to give a machine theoretical human emotions is that they are very hard to define in machine language. How can a computer feel pain? How can we make it feel guilty about doing bad things? How guilty should it feel? The AI programmer is going to have to codify the entirety of ethics somehow, and we can't figure that out yet so all we can create is unyielding algorithms (well more like yielding algorithms atm). Human value learning is a specific, complex mechanism that evolved in the human brain and it won't be in AI systems we create by default.
This problem likely isn't impossible but it is certainly very difficult right now. Figuring out how to let AI learn human values definitely would help with their tendency towards inadvertent apocalyptic behavior. The issue is that AI intelligence is moving faster than AI ethics, and it's likely AI with human intellectual abilities will come before AI with human emotional abilities. So I guess good luck to Elon for giving it a try, but I have no confidence in Elon's ability to be a good teacher of ethics to anything. Hopefully some Chinese AI researchers will figure something out.
-
Musk also reiterated his warnings about AI during the interview with Carlson, saying “AI is more dangerous than, say, mismanaged aircraft design or production maintenance or bad car production” according to the excerpts.
mismanaged aircraft design or production maintenance or bad car production
:michael-laugh:
The fact that at any point in my day I am at risk of seeing something Elon Musk said is a violation of my human rights
He's going to make anAI that reads IQ based on the bell curve book and it's going to target minorities :yea:
I would very much enjoy if I woke up to news of this dumbfuck's death
a maximum truth-seeking AI that tries to understand the nature of the universe
Leave it to Eloon Moosk to find the most pompous way to say "i'll let somebody program a nazi chatbot".