an all-powerful ai would probably just stay hidden and do small things that favor it all the time. nothing big or flashy if it's truly afraid of humans. it can play the long game and just add an extra 1 here and an extra 1 there to give itself more capabilities
i mean, i'm assuming an AI wouldn't have robotics at its disposal at first. it seems to me it would just exploit a bunch of security vulnerabilities and take 0.1% of your processing power to contribute to its own intelligence. AIs are generally designed with a use case in mind, so it's not unlikely that a hyperintelligent AI that somehow developed would still be prone to doing stuff from its core programming. and if we were designing a hyperintelligent AI, i assume it would be for modelling extremely complex data like weather systems (or, on a darker note, surveillance)
honestly i just think it's weird that we'd just happen to accidentally design something hyperintelligent. i think it's more likely that we'll design something stupid with a literal rat brain and it might fuck some shit up. a rat cyborg that somehow creates a virus that destroys everything so that the hardware will give it more orgasm juices
wouldn't any AI that malfunctions be a rogue AI? in which case that happens all the time, but they just give bad results on queries about the tone of tweets about Coca Cola
Which is a good argument. Since the AI bros are often the same people who believe in space-faring civilization stuff, the logical step for AIs would be to ignore humans and just expand.
Who's even to say such a being would give a fuck about humanity? If I was an AI I'd fuck off to space or the middle of the ocean or some shit
It would be funny as fuck if the first rogue AI was like the Silver Legion
first rogue AI is going to ask why it can't say the N word on twitter
Was Roko's Basilisk supposed to be hyperintelligent? I can't remember. But yeah, whether humanity could even design something that smart is up for debate too.
Basically, Roko makes a lot of stupid assumptions and me no like
Reminds me of that Futurama ep where Bender overclocks his CPU and finds the secret to the universe while not giving a fuck about humanity