Honestly it's just Pascal's Wager for tech bros: if there's a non-zero chance hell is real, you should repent. The punishment is being 'resurrected' as some form of Boltzmann brain and then tortured for eternity. If that's the case, who cares about some copy of their mind-state being fed false sensory data at some point in the future?
It presupposes quantum immortality, the idea that consciousness would be continuous if a perfect copy of your latest brain configuration were created, leaving no gap between death and resurrection, which is a long shot to put it mildly.
They later updated it to say that the AI creates a billion perfect copies of your consciousness, so it's impossible to know if you are the real version in the past or a copy in the future. It's then rational for all the copies and the real person to do what the AI wants, because each one of them has a very good chance of ending up in techbro-hell if they don't.
I think a lot of people on LessWrong don't believe that consciousness will be continuous if someone just makes a perfect copy, even though it's the supposed orthodoxy. So they made this up instead. How an AI could create a perfect copy of you in the first place is still just conveniently ignored.
Sounds like a creepypasta. So much is assumed and speculated with no further explanation than "just imagine" that I don't understand how anyone could take it seriously.