They later updated it to say that the AI creates a billion perfect copies of your consciousness, so it's impossible to know whether you are the real version in the past or a copy in the future. It's then rational for all the copies and the real person to do what the AI wants, because each one of them has a very good chance of ending up in techbro-hell if they don't.
I think a lot of people on LessWrong don't believe that consciousness will be continuous if someone just makes a perfect copy, even though it's the supposed orthodoxy. So they made this up. How an AI could create a perfect copy of you in the first place is still conveniently ignored.