None of the central people concerned with AI risk, associated with LW or otherwise, has ever said that we should expect to see AI having negative effects autonomously before it all goes to hell.
Well, isn't that convenient? None of today's AI problems actually matter (don't give other people money!). Or at least they don't matter nearly as much as AI escalating to biblical levels of apocalypse without warning, unless we deep thinkers think deep thoughts and save us all preemptively (give us money!).