Flippant dismissal of AI is actually bad
Depends on what you mean. It's not useless like crypto, and it has very different societal implications than crypto does. However, every last article like this one about either crypto or AI is uninformed or knowingly false, motivated speculation.
From the perspective of a copywriter, AI is automating away a job. From the perspective of a capitalist, AI is both useful automation and a useful scare tactic against labor, all the more so for its wildly inflated reputation. From the perspective of mass media, I guess the main implication of AI is yet another gold rush for them to encourage people to dive into, so that capitalists can mine the prospective miners.
That's the aspect where a flippant dismissal is equally appropriate for both: they're both subject to a coordinated effort to mystify and essentialize them in the public eye. LLMs cannot "figure out" anything, but news outlets keep pretending that we've coded some kind of digital person.
If you're looking at AI as just LLMs like ChatGPT, yeah you're probably right about them being overhyped. I think a lot of people on this site and on the left in general are writing AI off because the hype surrounding it is largely being propped up by annoying tech-dudes/Elon shills. For background, I worked as a research assistant on AI projects in my undergrad and did my thesis on using AI to automate chemical syntheses. There are a lot of legitimately good (and potentially horrifying) use cases for stuff like deep neural networks and reinforcement learning. There aren't many good parallels to describe just how quickly the field has been advancing, especially in the last year alone. We are still decades if not centuries away from genuine artificial consciousness, but in the meantime a lot of things are gonna significantly change for better and for worse.
I don't think it's just LLMs, but I do think it's reasonable to think of it as automating data science. I mean that as a description of how I see its scope of applications, rather than as a bah humbug, and of course the scope of data science is huge. Being able to automate specific analyses sounds like a game changer.
That's all sort of esoteric, though - to most people, AI is being deliberately presented over and over in human-intelligence terms, and that's how they learn to understand it. Like, none of my coworkers approach me to ask what I think about AI's potential uses in chemistry, content moderation, copywriting, statistical modeling, catching relationships in data that people aren't well suited to catch, or anything else plausible. They want to know if their geekiest coworker is worried about The Singularity.
So for the general public, who unironically seem to think of AI in a "Skynet/not Skynet" dichotomy, I tell them "not Skynet".