We need to start treating AI development, and its potential impact on the possibility of a humane world, as seriously as we treat climate change. I’m not even talking about existential risk or far-flung, distantly possible applications. I am talking about things that are coming in the next half-decade. I’m talking about stuff that’s technically already possible but is still in the implementation phase.
My summary: we need to democratize all powerful institutions like yesterday. Seriously y'all we're running out of time
The singularity and other AI-centric doomposting is pure fantasy, born of people taking short-term-but-large-scale improvements in technology and extrapolating that trend out infinitely, against all basic logic about diminishing returns. They did this with Moore's Law for decades before it occurred to everyone that, oh wait, you can't just keep making transistors smaller forever, because there's a floor on how small something can be while still interacting with electrons.
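The Moore's Law point can be sketched in a few lines. This is a toy illustration with deliberately rough, hypothetical numbers (a ~10 µm starting feature size and an atomic-scale floor around 0.2 nm, roughly silicon's lattice spacing), not a real semiconductor roadmap:

```python
# Toy sketch: naively halving transistor feature size forever
# slams into a hard physical floor at atomic scale.
feature_nm = 10_000.0   # rough 1970s-era feature size, in nanometers
atom_nm = 0.2           # silicon lattice spacing is on the order of 0.2 nm

years = 0
while feature_nm / 2 > atom_nm:
    feature_nm /= 2     # "Moore's Law": one shrink per step
    years += 2          # roughly one shrink every two years

print(f"Naive halving hits the atomic floor after ~{years} years")
# → Naive halving hits the atomic floor after ~30 years
```

The exponential looks unstoppable for a few decades and then simply stops, which is the whole point about extrapolating trends past their physical limits.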
AI in its current state is nothing more than some clever algorithms that take in human-generated examples of patterns and learn trends well enough to be mostly correct on future data. GPT and other language-centric models are freaky because it looks like the AI is having a conversation, but in reality it's just a bunch of code aping the patterns of data-mined text conversations, with no consciousness of any kind backing it up. There's no threat of us accidentally developing that consciousness, because we barely understand the nature of consciousness to begin with, let alone know how to replicate it in a completely different medium from the one it occupies in nature.
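The "aping patterns" idea is easy to demo at a tiny scale. Here's a minimal bigram model, a vastly simpler cousin of what language models do (the corpus and variable names are made up for illustration): it memorizes which word tends to follow which and chains them, producing locally plausible text with zero understanding:

```python
from collections import defaultdict
import random

# Tiny corpus of "training data"; a bigram model only records
# which word was observed following which.
corpus = "the cat sat on the mat and the cat saw the dog".split()

follows = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    follows[a].append(b)

random.seed(0)
word, out = "the", ["the"]
for _ in range(6):
    if not follows[word]:
        break               # reached a word nothing ever followed
    word = random.choice(follows[word])  # mimic an observed continuation
    out.append(word)

print(" ".join(out))  # grammatical-ish output, no comprehension anywhere
```

Every consecutive pair in the output was seen in the training text, which is why it *looks* like language. Scale the same trick up by many orders of magnitude and you get something GPT-shaped, still with no consciousness in the loop.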
tl;dr: climate change doomerism >>>>>>>>>>>>>>>>>>>> all other problems >>>>>>>>>>> AI doomerism
I would like to be clear that I'm not ruling out the possibility of strong AI eventually, but certainly not within the lifetime of anyone reading this. It's an enormously difficult problem, and what people are actually doing right now isn't even beginning to tackle it; they're just coming up with clever algorithms for solving arbitrary problems (which is still very useful and good).
Oh yeah, I'm right there with you. We're already building neuromorphic hardware that operates similarly to a simple brain and doesn't even need much software to coax it into basic pattern recognition. I'm just saying the Chinese Room is the bar a sufficiently advanced AI has to clear. Right now we're at like...tunicate-level brain.
Please tell me more about the Chinese Room experiment. :cyber-lenin: