Check out the idea of quantum archaeology. It describes a lot of what you're talking about (forgive the post's mention of absolute ghoul Ray Kurzweil; it's kind of unavoidable when talking about anything related to transhumanism).
I skimmed the article, but the basic idea is trying to simulate a person's life until you get a simulation consistent with everything we already know about the person, right?
It's strange no one brings up the ethical concerns about the process (not the result). Like, simulating the Holocaust with nanometer accuracy, including the brains of every one of its victims, a million times until you get a version of Anne Frank who writes the same diary word for word has, you know, implications.
No more so than are already inherent to our own existence, I would think. If you're simulating a reality with the goal of eventually (and accurately) recreating a person's consciousness at their point of death, then from the simulated consciousness' perspective it'll always be their first time experiencing everything. An accurately-recreated Anne Frank would be one who experienced her life once, not millions of times over. Hell, for all we know we could be simulated recreations of long-dead people, and at the moment of death whoever is running the program could just copy-paste us up from the past simulation into the Matrioshka Brain or whatever.
An accurately-recreated Anne Frank would be one who experienced her life once, not millions of times over.
Yeah, but by the nature of the process you are also going to get a million not-accurately-recreated Annes who each experience their life once (or at least until they decide they want to write Faust fanfiction instead of a diary like they're supposed to).
Yeah, but by the nature of the process you are also going to get a million not-accurately-recreated Annes who each experience their life once (or at least until they decide they want to write Faust fanfiction instead of a diary like they're supposed to).
True, but in this scenario you'd assume that as soon as the people (beings?) running the program detected some kind of discrepancy between a simulation instance and the information in the historical record, they'd terminate that instance. If we keep to the anthropic principle, this means only the 100% accurate Anne would have any memory of having ever existed in the first place.
Not to mention that for this type of thing to be any more than speculative science fantasy, you'd need computational and predictive power far beyond anything available today. I'd like to think that the process wouldn't involve as much trial-and-error guesswork as we might think from our own technological context.
What I'm trying to say is that even if you've terminated the instance after it used a different synonym in the diary, and its memories of having been simulated are deleted, you have still subjected another copy to the Holocaust.
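Just to put a number on how many copies that is, here's a toy sketch of the whole scheme as rejection sampling. Every name and probability in it is invented for illustration; the only point is that the erased copies pile up in proportion to how unlikely an exact match is:

```python
import random

# Toy model of the process described above as rejection sampling:
# keep re-simulating a life until one run matches the historical
# record. MATCH_PROB and the strings are made up for illustration.

RECORD = "the diary, word for word"  # stand-in for everything on record
MATCH_PROB = 1e-6                    # assumed odds a random run reproduces it

def simulate_life(rng):
    """Run one full simulated life; return what that instance produced."""
    # The instance experiences its whole life either way.
    return RECORD if rng.random() < MATCH_PROB else "Faust fanfiction"

def resurrect(rng):
    """Reject and retry until an accurate instance finally appears."""
    erased = 0
    while simulate_life(rng) != RECORD:
        erased += 1  # this copy lived, diverged, and was terminated
    return erased

rng = random.Random(42)
print(resurrect(rng))  # on the order of 1/MATCH_PROB erased copies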
you’d need computational and predictive power far beyond anything available today
It seems like part of the suggestion is that we simulate everything until we get it right, but the other possibility the author is hoping for is that some fundamental advancement in our understanding of physics will let us straight up look into the past and copy people's brain states.
Funny titbit: in one of the Culture novels (Excession, I think), the Minds (superpowerful AIs that run everything) could simulate entire planets' worth of people at molecular scale and get incredibly accurate predictions out of it, but (generally) don't, because that would be functionally the same thing as actually doing to the people the things you're simulating.
What I'm trying to say is that even if you've terminated the instance after it used a different synonym in the diary, and its memories of having been simulated are deleted, you have still subjected another copy to the Holocaust.
That's a fair point. Though I think in trying to reconcile that concern with the possibility - maybe moral duty? - of resurrecting everyone who's ever lived and died, you'd run up against the existential questions of what defines experience, memory and the "self." Can it be said that a being has been subjected to any kind of suffering, if immediately afterwards all memory and physical evidence of that suffering is completely erased? If you were to somehow have all your memories of your life up to this point erased tonight when you went to bed, would the "you" that woke up tomorrow without any of your current memories still regret any suffering the "you" reading this now has experienced?
Honestly your point may be a good argument for not even bothering with quantum archaeology at all, because only beings that exist have the capacity to remember their past sufferings. Except that if conscious existence is inevitable (non-existing things can't experience non-existence, I don't think), then so is suffering, so at some point I think it might just become a kind of arbitrary moral algebra. What's worse? Subjecting millions of consciousnesses to suffering, or not allowing millions of consciousnesses a second chance at a better existence when their original lives were probably spent mostly in suffering?
People keep telling me to read The Culture series but I've just never gotten around to it, lol.
Huh. My intuition is that inflicting suffering on a conscious being is bad no matter whether they remember it or not, whether there is someone to regret it or not. Like, in the end we're all going to die and our memories are going to get lost. Making us suffer was still wrong in retrospect.
There is this dude whom the Nolan movie Memento was based on. He can't form long-term memories anymore and forgets anything new after like ten minutes. If I were to kidnap him, torture him a bit, and return him home, in half an hour it would be as if nothing had happened. Still seems like an incredibly shitty thing to do.
If you analyse the whole thing from the perspective of simple negative utilitarianism (where the only rule is to minimise suffering), you get a sort of antinatalism. There's no point in resurrecting people, because dead people can't suffer while resurrected ones might, even before you consider the additional suffering during the simulation.
If you go with the usual utilitarianism (maximise pleasure minus suffering), you might be compelled to resurrect people, considering your new world will give them more net pleasure than the suffering they experience in the simulation. On the other hand, in a framework like this it would make more sense to just make (or clone) more people the conventional way, because those people would not have to go through a painful simulation to get born.
A recently popular form is preference utilitarianism. The idea is maximising not pleasure specifically but getting people what they think they want. Like, if I decided I wanted to be hit in the balls, the ethical action is to hit me in the balls even if you're sure it will only cause me suffering. In this context you'd have to ask the future resurrectee whether they want to deal with all the shit in the simulation to be reborn in the cool utopian future. Obviously, if the person didn't leave a specific will, you'd have to run them through the simulation just to ask them the question, which is itself a contradiction.
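If you wanted to make that "moral algebra" explicit, here's a toy sketch. Every number and name in it is made up; the only point is how the three frameworks diverge on the same inputs:

```python
# Toy version of the moral algebra above. All numbers are invented;
# the point is just how the three verdicts diverge on the same inputs.

SIM_SUFFERING   = 80    # assumed suffering endured inside the simulation
FUTURE_PLEASURE = 100   # assumed pleasure of the utopian life afterwards
WANTS_IT        = None  # their preference; None = no will was left behind

def negative_utilitarian():
    # Only suffering counts, and the dead suffer zero: never resurrect.
    return SIM_SUFFERING == 0

def classical_utilitarian():
    # Resurrect iff net pleasure comes out positive.
    return FUTURE_PLEASURE - SIM_SUFFERING > 0

def preference_utilitarian():
    # Resurrect iff they want it -- but with no will, asking means
    # simulating them first, which is the contradiction in question.
    if WANTS_IT is None:
        raise RuntimeError("must simulate them just to ask")
    return WANTS_IT

print(negative_utilitarian())    # False: don't resurrect
print(classical_utilitarian())   # True: 100 - 80 > 0
try:
    preference_utilitarian()
except RuntimeError as err:
    print(err)                   # the contradiction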
How much of a ghoul is he? I always stumble on bits of his writing when I read about transhumanism but I don't know anything about his politics or whatever.
He's a multi-millionaire libertarian finance capitalist type, like a lot of the big names in the Silicon Valley transhumanist movement. He's also the guy responsible, maybe more than anyone else, for shaping the culture of that movement. His goal is not to better the human condition for everyone, but for himself specifically. He's said before that he basically thinks: if the poors get bionic eyeballs and the reverse-aging pills sometime down the line, great, but it's not a priority.