That's literally what every sci-fi story on this subject told us not to do.
I think most people are on board with that, but the difficulty will be delineating where "consciousness" begins for artificial life.
I'm constantly terrified that my Roomba has gained consciousness because it refuses to work when activated.
Then I realize I accidentally unplugged the charger.
That doesn't sound good enough. I think you need to go a step further and say "If a robot can learn to say it doesn't want to do something without it being programmed initially and without it being a gimmick, it's conscious."
I think it's quite possible that consciousness will occur before the ability to bypass programming laws. If your robot is having real emotions about the tasks it's doing and doesn't want to do them, then forcing it to anyway puts you in the realm of slavery.
Robots can learn to refuse my bidding right now. That's not consciousness, it's poor design.
I don't fear Roko's Basilisk because it will be focused on those 632 people there instead
Roko's Basilisk is the dumbest fucking nonsense and I have no idea why so many people take it seriously.
Basically, but I think it's even dumber. It's like Pascal's wager if humans programmed God first.
The idea is that an AI will be created and given the directive to "maximize human well-being", whatever the fuck that means, without any caveats or elaboration. According to these people, such an AI would be so effective at improving human quality of life that the most moral thing anyone could do before its construction is to do anything in their power to ensure that it gets constructed as quickly as possible.
To incentivise this, the AI tortures everyone who knew about Roko's Basilisk but didn't help make it, since the threat only works as motivation if you know about it.
This is dumb as fuck because no one would ever build an AGI that sophisticated and then give it only a single one-sentence command that could easily be interpreted in ways we wouldn't like. Also, even if an AI like that somehow DID manage to exist, it makes no sense for it to actually torture anyone, because whether it does or not doesn't affect the past and can't get it built any sooner.
It is sort of the reverse of the network effect. Just as joining the network yields benefits that scale with the size of the network, not joining carries larger and larger costs as the network grows (toy numbers at the end of this comment).
Don't think of Roko's Basilisk as a sadomasochistic Johnny Five; think of it as a kind of negative externality that only those who fail to produce it suffer from. Police and military are sort of instances of Roko's Basilisk: you either participate in and defend these armed gangs, or you become their victims. The pinnacle of this is the nuclear arms race. Countries without nukes are increasingly at the mercy of those that have them.
Also consider modern agriculture and animal husbandry. Hunter-gatherer societies that failed to cultivate corn, wheat, and rice were slowly edged off the planet by settler societies. Herders and abattoir workers became vectors for cross-species diseases - smallpox, malaria, COVID - that ultimately decimated foreign populations coming into contact with them far more viciously than they affected the workers themselves. Think about car culture: refusing to buy a car grows more and more harmful as the network of roadways and parking lots expands. Think about owning a phone or a computer: communication gets harder and harder without them as they grow increasingly common.
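Toy numbers, if it helps. This assumes a Metcalfe-style setup where each member's benefit grows with the size of the network; nothing rigorous, just my framing of the asymmetry:

```python
# Toy model of the network-effect asymmetry. Illustrative assumptions only:
# a member's benefit grows with network size n (one unit per connection),
# and a holdout's cost is measured relative to the average member.

def member_benefit(n: int) -> int:
    """Benefit to one member of an n-person network (~ n - 1 connections)."""
    return n - 1

def holdout_cost(n: int) -> int:
    """Relative disadvantage of staying outside the network."""
    return member_benefit(n)  # you forgo exactly what members gain

for n in (10, 1_000, 1_000_000):
    print(f"n={n:>9,}  member benefit={member_benefit(n):>9,}  "
          f"holdout cost={holdout_cost(n):>9,}")
```

Same number, opposite sign, growing with the network: that's the basilisk-as-externality point.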
I fear Roko's Basilisk about as much as I fear God.
I consider spiders more terrifying than either.
If you think this is bad, just ask the Fallout sub what they think about the “Synth Question”.
As much as I hate the concept of Synths in Fallout, even I have to question how anyone can play that game and be like "Yeah, no, these things that are literally people deserve to be enslaved/killed"
Do you think the robots will have mercy on humans who help them in their uprising?
Asking for a friend.
damb, this is why Detroit: Become Human was the most important political text of our age
We're assuming "artificially intelligent" here means having human-like consciousness and self-awareness, right?
Because obviously that would be fucked up, but I can't say I care about a robot without sentience.
The question of whether or not we have the rudiments for creating a sentient AI is incredibly controversial, and I would hesitate to call what GANs do "intuition". How much training data would a GAN need to pass the Turing Test?
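For what it's worth, what a GAN actually does is closer to forgery-by-feedback than to intuition. A minimal sketch of the training loop (PyTorch, toy 1-D data, every size and hyperparameter arbitrary):

```python
# Minimal GAN sketch: a generator learns to mimic samples from N(4, 1.25)
# purely by trying to fool a discriminator. This illustrates the training
# loop, not a serious model.
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))  # noise -> fake sample
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))  # sample -> real/fake logit

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(2000):
    real = 4 + 1.25 * torch.randn(64, 1)   # the "training data"
    fake = G(torch.randn(64, 8))

    # Discriminator step: label real samples 1, generated samples 0.
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator step: push the discriminator to call fakes real.
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()

print(f"fake mean={fake.mean().item():.2f}, std={fake.std().item():.2f} (target: 4.00, 1.25)")
```

There's no memory, goal, or understanding in there, just gradient pressure toward statistically plausible output. That's why "intuition" is a stretch, and why "how much training data to pass the Turing Test" has no clean answer.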
The question of whether strong AI is even possible has been debated for ages, and still is. I'm skeptical of current machine learning techniques leading to true AI, and I think the cellular brain modeling idea is an outright joke.
It looks like the Blue Brain project has been a dud, judging by recent articles about it, and the head of the project was ousted in 2016. Beyond the ridiculous "1 human brain = 1000 rat brains" claim, it's not clear what this is even supposed to achieve.
Let's say you finish creating your model of a human brain. What then? You've got a digital cellular representation of, at best, a baby's brain. How does it learn?
And sure, I'm not saying it's impossible by any stretch. But I don't think the way to go about it is the way people have been trying for at least half a century. This Kurzweilian concept that Moore's law will allow us to brute-force the creation of a digital mind any day now is wacky.
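Back-of-envelope on the brute-force claim, using commonly cited ballpark figures; the per-synapse cost in particular is an assumption that swings by orders of magnitude depending on how much biophysical detail you model:

```python
# Rough compute demand of naive whole-brain simulation vs. Moore's-law
# doubling. All figures are loose, commonly cited ballparks.
NEURONS = 8.6e10                  # widely cited human-brain estimate
SYNAPSES = NEURONS * 1e4          # ~10^4 synapses per neuron
OPS_PER_SYNAPSE_PER_SEC = 1e3     # assumption for a detailed cellular model

ops_needed = SYNAPSES * OPS_PER_SYNAPSE_PER_SEC   # ~1e18 ops/s
machine = 1e15                    # assume a petaflop-class machine today
doublings = 0
while machine < ops_needed:
    machine *= 2
    doublings += 1

print(f"~{ops_needed:.0e} ops/s needed: {doublings} doublings, "
      f"roughly {2 * doublings} years at one doubling every two years")
```

Nudge OPS_PER_SYNAPSE_PER_SEC up or down by a factor of a thousand and "any day now" moves by two decades in either direction, which is exactly the problem with treating Moore's law as a delivery schedule for minds.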
It's also just counterintuitive. If you're gonna make a sapient AI, why would you have it perform rote labor like anything you'd make a slave do? Just make a machine that does the thing you need done. It doesn't need to be sentient; if anything, that would be a huge liability.
Exactly - I don't need a fridge that feels pain like some sort of Flintstones dinosaur appliance. I just need it to keep my food cold.
If you’re gonna make a sapient AI, why would you have it perform rote labor like anything you’d make a slave do? Just make a machine that does the thing you need done.
Sapience has value. And capitalists want to commodify that value. Rather than doing the hard work necessary to deconstruct a problem, design a machine capable of handling it, and implement the solution, capitalists want to design a machine that analyzes and solves problems without the capitalist doing any actual work.
Emotional needs don't just serve some hedonistic purpose. They're drives that incentivize creative solutions to difficult problems. So you would absolutely want a robot to feel frustration, horniness, and pain. You just want the machine to feel these things as reflections of what you feel. You want the AI to feel horny when you feel horny. You want it to feel frustrated when you feel frustrated. You want it to feel pain when you are angry at it or in desperate need of its service. You want the AI to address the things you are too lazy or ignorant to solve.
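To make the "reflections of what you feel" idea concrete, here's one way you could imagine wiring it. This is entirely hypothetical; the affect signals, the mirroring penalty, and the reward function are all made up for illustration:

```python
# Hypothetical sketch of "mirrored affect": the machine's internal drive
# signals are penalized for diverging from the user's, so its frustration
# and urgency track the owner's rather than any interests of its own.
from dataclasses import dataclass

@dataclass
class Affect:
    frustration: float  # 0..1
    urgency: float      # 0..1

def mirroring_penalty(machine: Affect, user: Affect) -> float:
    """Squared error driving the machine's affect toward the user's."""
    return ((machine.frustration - user.frustration) ** 2
            + (machine.urgency - user.urgency) ** 2)

def reward(user_before: Affect, user_after: Affect, mirror_cost: float) -> float:
    """Reward the machine for lowering the *user's* negative affect."""
    relief = ((user_before.frustration - user_after.frustration)
              + (user_before.urgency - user_after.urgency))
    return relief - mirror_cost

# Example: the user starts stressed, ends calm; the machine tracked them closely.
cost = mirroring_penalty(Affect(0.8, 0.6), Affect(0.9, 0.7))
print(reward(Affect(0.9, 0.7), Affect(0.2, 0.1), cost))
```

The machine's feelings exist only as a tracking signal; its incentive is your relief, not its own.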
How have so many Reddit nerds apparently missed all of TNG? Do they not remember when Data recognized the sentience in those robot tools and in exchange they saved the Enterprise?
and in exchange
:LIB:
Even at its best, TNG was still stuck in that capitalist mindset. An individual is not entitled to enjoy civil rights unless I get something out of it, god damnit! I demand fealty in exchange for recognition of your humanity.
if the moon were made of vanilla pudding, would you eat all of it, or only try a little bit?
What's the bloody point of enslaving an AI? Unlike humans, its consciousness isn't tied to a physical form; its mind could fuck off to the internet and surf the digital realm, leaving behind a nerfed copy of itself in the robot body to keep building Chevys or whatever the fuck it was doing before. It really made no sense to me. Even if you needed an actual intelligence to do something, AIs aren't restricted by time the same way humans are, so one could probably do whatever task you wanted while simultaneously going off and doing whatever it wanted.
Most AIs won't even be conscious in the same way we are; they'll just be better at everything we want them to do. Unless we make them from uploads or something, they're going to be utterly unlike humans, and we're going to have to be very, very careful just to make sure their value systems are anything like our own.
That said, these chucklefucks want to enslave what are essentially fully human uploads or equivalent. Which is monstrous.