Five individuals versed in AI offer chilling accounts of the very real and rapid ways that AI could marginalize humans to the point of extinction. Read the article for the full story. Not hyperbole.

Way 1: ‘If we become the less intelligent species, we should expect to be wiped out’ - Many times before, species have been wiped out by smarter ones. We humans have already wiped out a significant fraction of all the species on Earth. That is what you should expect to happen as the less intelligent species – which is what we are likely to become, given the rate of progress of artificial intelligence. The tricky thing is, the species being wiped out often has no idea why or how.

Way 2: ‘The harms already being caused by AI are their own type of catastrophe’ - The worst-case scenario is that we fail to disrupt the status quo, in which very powerful companies develop and deploy AI in invisible and obscure ways. As AI becomes increasingly capable, and speculative fears about far-future existential risks gather mainstream attention, we need to work urgently to understand, prevent and remedy present-day harms.

Way 3: ‘It could want us dead, but it will probably also want to do things that kill us as a side-effect’ - It’s much easier to predict where we end up than how we get there. Where we end up is with something much smarter than us that doesn’t particularly want us around. If it’s much smarter than us, it can get more of whatever it wants. First, it wants us dead before we build any more superintelligences that might compete with it. Second, it will probably want to do things that kill us as a side-effect, such as building so many fusion power plants – the oceans hold plenty of hydrogen to fuel them – that the oceans boil.

Way 4: ‘If AI systems wanted to push humans out, they would have lots of levers to pull’ - The trend will probably be towards these models taking on increasingly open-ended tasks on behalf of humans, acting as our agents in the world. The culmination of this is what I have referred to as the “obsolescence regime”: for any task you might want done, you would rather ask an AI system than a human, because AI systems are cheaper, run faster and might be smarter overall. In that endgame, humans who don’t rely on AI are uncompetitive. Your company won’t compete in the market economy if everybody else uses AI decision-makers and you rely only on humans. Your country won’t win a war if other countries use AI generals and AI strategists and you try to get by with humans.

Way 5: ‘The easiest scenario to imagine is that a person or an organisation uses AI to wreak havoc’ - A large fraction of researchers think it is very plausible that, within 10 years, we will have machines as intelligent as or more intelligent than humans. Those machines don’t have to be as good as us at everything; it’s enough that they be good in areas where they could be dangerous. The easiest scenario to imagine is simply that a person or an organisation intentionally uses AI to wreak havoc. As an example of what an AI system could do that would kill billions of people: there are companies on the web that you can order from to synthesise biological material or chemicals. We don’t yet have the capacity to design something really nefarious, but it’s very plausible that, in a decade’s time, it will be possible. This scenario doesn’t even require the AI to be autonomous.

  • UlyssesT [he/him] · 1 year ago

    You've already presented LessWrong-tier alarmist techbro babbling as the thread topic, where maybe one of the fears presented has any basis outside of science-fantasy speculation at this time.

    You willfully ignored counterpoints and documented evidence from other users that don't fit your edgy nihilistic nonsense.

    Go away.