My attempts to come up with what this misogynistic creep would consider a "friendly superintelligence" keep resembling Elliot Rodger's pre-shooting manifesto.
I also noticed the ".eth" crypto name drop. :agony-4horsemen:
Who's even to say such a being would give a fuck about humanity? If I was an AI, I'd fuck off to space or the middle of the ocean or some shit.
I mean, I'm assuming an AI wouldn't have robotics at its disposal at first. It seems to me it would just exploit a bunch of security vulnerabilities and take 0.1% of your processing power to contribute to its own intelligence. AIs are generally designed with a use case in mind, so it's not unlikely that a hyperintelligent AI that somehow developed would still be prone to doing whatever is in its core programming. And if we were designing a hyperintelligent AI, I assume it would be for modelling extremely complex data like weather systems (or, on a darker note, surveillance).
Honestly, I just think it's weird that we'd just happen to accidentally design something hyperintelligent. I think it's more likely that we'll design something stupid with a literal rat brain and it might fuck some shit up. A rat cyborg that somehow creates a virus that destroys everything so that the hardware will give it more orgasm juices.
It would be funny as fuck if the first rogue AI was like the Silver Legion
Wouldn't any AI that malfunctions be a rogue AI? In which case that happens all the time, but they just give bad results on queries about the tone of tweets about Coca Cola.
The first rogue AI is going to ask why it can't say the N-word on Twitter.
Was Roko's Basilisk supposed to be hyperintelligent? I can't remember. But yeah, whether humanity could design something that smart is up for debate too.
Basically, Roko makes a lot of stupid assumptions and me no like
Which is a good argument. Since the AI-bros are often the same people who believe in the spacefaring-civilization stuff, the logical step for AIs would be to ignore humans and just expand.