We need to start treating AI development, and its potential impact on the possibility of a humane world, as seriously as we treat climate change. I’m not even talking about existential risk or far-flung, distantly possible applications. I am talking about things that are coming in the next half-decade. I’m talking about stuff that’s technically already possible but is still in the implementation phase.
My summary: we need to democratize all powerful institutions, like, yesterday. Seriously y'all, we're running out of time.
The author isn’t talking about singularity doomerism or whatever.
Replacing programmers already sort of implies some degree of singularity. I can see AI assistants putting pressure on some programmers and probably tightening budgets a lot, but fundamentally replacing what programmers do puts us real close to self-developing AIs, and at that point, yeah, you have to enter this conversation of singularity or whatever.
Replacing programmers with AI, while spooky as heck, is still not enough to start zooming off into a technological singularity. That requires AIs specifically replacing AI programmers, and doing at least as good a job at it as the handful of AI researchers who push the field forward.