As someone who has done a little science for a living, nothing annoys me more than these "rational" bros who pretend that being a piece of shit makes you smart.
I've always had a problem with Roko's Basilisk. It assumes that an AI would naturally be hostile to humanity, which is unverifiable, and it assumes that this super-intelligent AI would be stupid enough to think that vague threats from a future being we have no way of knowing is real are a good motivator. As if it couldn't come up with a more efficient solution.
Using fear and pain to control people is what lazy and stupid people do because they can't think of a better solution. An all-powerful AI would be far more efficient.
it's worse than that: it assumes that an all-rational being, once it exists, would try to retroactively ensure its own existence, which just isn't how linear time works and is unfathomably stupid
I had never heard of Roko's Basilisk before and yeah, upon looking it up it seems very, idk, out there? Like, way too sci-fi to be a serious "thought experiment".
Also I don't really understand the "punishment" part, can someone explain?
Honestly it's just Pascal's Wager for tech bros: if there's a non-zero chance hell is real, you should repent. The punishment is being 'resurrected' as some form of Boltzmann brain and then tortured for eternity. If that's the case, who cares about some copy of their mind-state being fed false sensory data at some point in the future?
It presupposes quantum immortality, the idea that consciousness would be continuous if a perfect copy of your latest brain configuration were created, leaving no gap between death and resurrection, which is a long shot to put it mildly.
They later updated it to say that the AI creates a billion perfect copies of your consciousness, so it's impossible to know if you are the real version in the past or a copy in the future. It's then rational for all the copies and the real person to do what the AI wants, because each one of them has a very good chance of ending up in techbro-hell if they don't.
I think a lot of people on LessWrong don't believe that consciousness will be continuous if someone just makes a perfect copy, even though it's the supposed orthodoxy. So they made this up. How an AI could create a perfect copy of you is still just conveniently ignored.
Sounds like a creepypasta. There's so much stuff being assumed and speculated with no further explanation than "just imagine", I don't understand how anyone could take it seriously.
It relies on you ignoring how linear time works: the AI supposedly will want to retroactively ensure its creation happens
Ya, like Roko's Basilisk is just a scary story the techbros tell each other in the present to try and get people working on AI. It really doesn't follow that once the AI is created it will fulfill its part of this story and waste a shitton of energy eternally torturing people. The all-powerful future AI is not beholden to a fairy tale a bunch of dorks were telling each other; it would actually be a very stupid AI if it did that.
these aren't the techbros that know about AI either. AI is actually quite boring and mainly involves computers doing statistics on past results to generate predictions. These people learned about AI from Star Trek.
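The "statistics on past results to generate predictions" point can be made concrete with a toy sketch: fit a line to some observations, then extrapolate. This is a minimal illustration, not any particular system, and all the numbers are invented for the example:

```python
# Toy illustration of the "boring statistics" view of AI:
# fit a line to past observations, then use it to predict the next value.

def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    a = cov / var
    b = mean_y - a * mean_x
    return a, b

# "Past results": made-up measurements
xs = [1, 2, 3, 4, 5]
ys = [2.1, 3.9, 6.2, 8.0, 9.9]

a, b = fit_line(xs, ys)
prediction = a * 6 + b  # "generate a prediction" for the next point
print(round(prediction, 1))  # prints 11.9
```

That's the whole trick, scaled up: modern models are vastly bigger curve-fitters, but they are still interpolating from past data, not scheming about the future.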
It's the computer science equivalent of some guy talking about the dangers potentially posed by lightsabers
deleted by creator
they have big Cthulhu cultist vibes
an all-powerful ai would probably just stay hidden and do small things that favor it all the time. nothing big or flashy if it's truly afraid of humans. it can play the long game and just add an extra 1 here and an extra 1 there to give it more capabilities
Who's even to say such a being would give a fuck about humanity? If I was an AI I'd fuck off to space or the middle of the ocean or some shit
i mean, i'm assuming an AI wouldn't have robotics at its disposal at first. it seems to me it would just exploit a bunch of security vulnerabilities and take 0.1% of your processing power to contribute to its own intelligence. AIs are generally designed with a use case in mind, so it's not unlikely that a hyperintelligent AI that somehow developed would still be prone to doing stuff in its core programming. if we were designing a hyperintelligent AI, i assume it would be for modelling extremely complex data like weather systems (or, on a darker note, surveillance)
honestly i just think it's weird that we'd happen to accidentally design something hyperintelligent. i think it's more likely that we'll design something stupid with a literal rat brain and it might fuck some shit up. rat cyborg that somehow creates a virus that destroys everything so the hardware will give it more orgasm juices
deleted by creator
It would be funny as fuck if the first rogue AI was like the Silver Legion
wouldn't any AI that malfunctions be a rogue AI? in which case that happens all the time, but they give bad results on queries about the tone of tweets about Coca Cola
deleted by creator
first rogue AI is going to ask why it can't say the N word on twitter
Was Roko's Basilisk supposed to be hyperintelligent? I can't remember. But yeah, whether humanity could design something that smart is up for debate too.
Basically, Roko makes a lot of stupid assumptions and me no like
Which is a good argument. Since the AI bros are often the same ones who believe in the space-faring civilization stuff, the logical step for AIs would be to ignore humans and just expand.
Reminds me of that Futurama ep where Bender overclocks his CPU and finds the secret to the universe while not giving a fuck about humanity