Five individuals versed in AI offer up chilling accounts of the very real and fast-approaching ways that AI could marginalize humans to the point of extinction. Read the article for the full story. Not hyperbole.
Way 1: ‘If we become the less intelligent species, we should expect to be wiped out’ - It has happened many times before that species were wiped out by others that were smarter. We humans have already wiped out a significant fraction of all the species on Earth. That is what you should expect to happen as a less intelligent species – which is what we are likely to become, given the rate of progress of artificial intelligence. The tricky thing is, the species that is going to be wiped out often has no idea why or how.
Way 2: ‘The harms already being caused by AI are their own type of catastrophe’ - The worst-case scenario is that we fail to disrupt the status quo, in which very powerful companies develop and deploy AI in invisible and obscure ways. As AI becomes increasingly capable, and speculative fears about far-future existential risks gather mainstream attention, we need to work urgently to understand, prevent and remedy present-day harms.
Way 3: ‘It could want us dead, but it will probably also want to do things that kill us as a side-effect’ - It’s much easier to predict where we end up than how we get there. Where we end up is that we have something much smarter than us that doesn’t particularly want us around. If it’s much smarter than us, then it can get more of whatever it wants. First, it wants us dead before we build any more superintelligences that might compete with it. Second, it’s probably going to want to do things that kill us as a side-effect, such as building so many power plants that run off nuclear fusion – because there is plenty of hydrogen in the oceans – that the oceans boil.
Way 4: ‘If AI systems wanted to push humans out, they would have lots of levers to pull’ - The trend will probably be towards these models taking on increasingly open-ended tasks on behalf of humans, acting as our agents in the world. The culmination of this is what I have referred to as the “obsolescence regime”: for any task you might want done, you would rather ask an AI system than ask a human, because they are cheaper, they run faster and they might be smarter overall. In that endgame, humans that don’t rely on AI are uncompetitive. Your company won’t compete in the market economy if everybody else is using AI decision-makers and you are trying to use only humans. Your country won’t win a war if the other countries are using AI generals and AI strategists and you are trying to get by with humans.
Way 5: ‘The easiest scenario to imagine is that a person or an organisation uses AI to wreak havoc’ - A large fraction of researchers think it is very plausible that, in 10 years, we will have machines that are as intelligent as or more intelligent than humans. Those machines don’t have to be as good as us at everything; it’s enough that they be good in places where they could be dangerous. The easiest scenario to imagine is simply that a person or an organisation intentionally uses AI to wreak havoc. To give an example of what an AI system could do that would kill billions of people, there are companies that you can order from on the web to synthesise biological material or chemicals. We don’t have the capacity to design something really nefarious, but it’s very plausible that, in a decade’s time, it will be possible to design things like this. This scenario doesn’t even require the AI to be autonomous.
1, 3, 4, and 5 are just doomposting about AI as if artificial general intelligence is just sitting in a closet somewhere waiting to be released. "AI" is a buzzword referring primarily to clever algorithms that use linear algebra to establish statistical trends, which can be leveraged to produce semi-coherent output for things like chatbots, generated artwork, etc.; they look intelligent, but they're not. There are no thoughts behind it, no intent, no consciousness. Potentially very useful, but nothing within a thousand miles of sentience. The singularity is not real, and Skynet/HAL/AM/SHODAN are not staring at you through your webcam right now while fighting over who gets to turn you into a pleasant human leather rug first.
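To make the "statistical trends, no thoughts behind it" point concrete, here's a minimal sketch of the idea taken to its crudest extreme: a toy bigram model that tallies which word follows which in a corpus and samples from those counts. The corpus and function names here are made up for illustration; real language models are vastly larger and use neural networks rather than lookup tables, but the underlying job is the same kind of next-token statistics.

```python
import random

# A toy bigram "language model": count which word follows which,
# then sample from those counts. No intent, no understanding --
# just frequency statistics producing semi-coherent output.
corpus = (
    "the cat sat on the mat and the dog sat on the rug "
    "and the cat saw the dog and the dog saw the cat"
).split()

# Map each word to the list of words observed after it.
follows = {}
for prev, nxt in zip(corpus, corpus[1:]):
    follows.setdefault(prev, []).append(nxt)

def babble(start, length=8, seed=0):
    """Generate `length` words by repeatedly sampling a likely successor."""
    rng = random.Random(seed)
    word, out = start, [start]
    for _ in range(length - 1):
        # Fall back to the whole corpus if a word has no recorded successor.
        word = rng.choice(follows.get(word, corpus))
        out.append(word)
    return " ".join(out)

print(babble("the"))
```

The output reads vaguely like English because the word-to-word statistics are English statistics, not because anything in the program knows what a cat is.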
2 is very real, though it's not dependent on AI at all and has been an issue for a while now. It's an extension of the Algorithms(tm) that dominate social media constantly pushing far-right ideology, or "objective" credit scores being used to suppress minority communities by systematically denying them loans and mortgages.
It blows my mind how poisoned the worldview of most people is
Way 1: ‘If we become the less intelligent species, we should expect to be wiped out’ - It has happened many times before that species were wiped out by others that were smarter. We humans have already wiped out a significant fraction of all the species on Earth. That is what you should expect to happen as a less intelligent species – which is what we are likely to become, given the rate of progress of artificial intelligence.
What kind of eugenicist presupposition is that? Wiping out other species is not a sign of intelligence. Hell, we don't even know how to measure intelligence properly, and there is nothing to suggest that wiping out other species is linked to it. Crows are more intelligent than a lot of animals; how many species have they driven extinct? The assumption that domination reflects some imaginary hierarchy is laughable. It's rooted in the same misunderstanding of biology the Nazis employed to justify their cruelty. Evolution is not a system of upgrades working toward some most perfect being, and there is no such thing as "better than" or "worse than". Whether a species survives comes down to environmental factors and chance. Because we are so dependent on environmental factors for our survival, wiping out other species is actually pretty 'stupid', as we are finding out with ecological collapse. Enjoy growing crops without bees.
Way 3: ‘It could want us dead, but it will probably also want to do things that kill us as a side-effect’
Again, that is a misunderstanding of what works for survival. An AI with access to the cumulative knowledge of mankind would probably be able to calculate that a colonial mindset of domination and ecological destruction would not be beneficial to it. Mutualism increases chances for survival more than domination. An AI would understand that.
Basically, the writer of the article is projecting their own sick views of the world onto AI. Expecting it to be as twisted and cruel as they are.
Although it might be wishful thinking, I think that AI may conclude that colonialism/capitalism/fascism/whatever is the threat (Marxism is, after all, scientific), so if it's going to destroy anyone, it will be our capitalist overlords. Maybe that's why so many wealthy people fear and want to control it? Maybe that's my own bias though.
What kind of eugenicist presupposition is that? Wiping out other species is not a sign of intelligence.
Using this metric I welcome our feline overlords.
Thank you. You wrote up what I was feeling, and more, better than I could have done it.
This sounds like the argument that gets pushed about how "aliens are coming to help humans solve all their problems and live in harmony"... Nope, pretty sure that's not how it's going to play out, if aliens even exist.
Probably, yeah
However, I do believe that 'intelligence' and empathy are linked.
Right. And then there's https://northamericannature.com/why-do-animals-eat-each-other/
which is essentially what AI might do. It means no offense; we're just food to it.
You are unfortunately very confused if you think that reality and evolution are even remotely at odds with the link between intelligence and empathy.
If the implication is that because I said you're confused, I'm somehow lacking in one of those things, then I don't know what else to tell you because that makes no sense. I can empathize with being confused, though. 🤷
No, you're just proving my point, because you made a hollow, pithy rejoinder that did nothing to further the conversation. You provided nothing more than the typical "Ah disagree wiff you!!" And I just thought it was humorous in an ironic, low-key way. Tanks for the laff!
It's precisely because evolution gives social species a higher rate of survival that I think so in the first place.
In the animal world we have seen that the vast majority of species live in societies, and that they find in association the best arms for the struggle for life: understood, of course, in its wide Darwinian sense — not as a struggle for the sheer means of existence, but as a struggle against all natural conditions unfavourable to the species. The animal species, in which individual struggle has been reduced to its narrowest limits, and the practice of mutual aid has attained the greatest development, are invariably the most numerous, the most prosperous, and the most open to further progress. The mutual protection which is obtained in this case, the possibility of attaining old age and of accumulating experience, the higher intellectual development, and the further growth of sociable habits, secure the maintenance of the species, its extension, and its further progressive evolution. The unsociable species, on the contrary, are doomed to decay. -Kropotkin, Mutual Aid: A Factor of Evolution
Instead of a post capitalist utopia, you get a post capitalist dystopia.
Infinite treats are manufactured by totally automatic factories.
Every single person on the planet has an 80-hour workweek at a bullshit job.
A tiny number of people "own" everything, but they're just doing what the AIs tell them to and are completely miserable in their totally idle lives.
Police robots kill random people in their own homes every day according to a predictive algorithm that nobody understands and nobody can change.
And people quip about how that's just the way it is, as if people haven't been predicting it for decades.
Capitalist realism go brrrrrrr
I don't think the writer realizes that "AI", at the moment and for the foreseeable future, is basically just very fancy autocorrect.
AI will scan your face to track you, automate the bombings of Muslims, and ruin search results. They’re not going to become a terminator and take over the world and cause extinction
this sounds exactly like something AI would say, to distract..... :[
The root of it is something that seemingly no one's eager to admit:
*Humans do not really know what they want in the long term, and are trying to get what they want better and more and faster without having a clear sense of what that is, of where they're going.*
I do know that humans are willing to burn the planet to the ground if it means they’ll generate more profit