• Bobby_DROP_TABLES [he/him]
    ·
    1 year ago

    If you're looking at AI as just LLMs like ChatGPT, then yeah, you're probably right about them being overhyped. I think a lot of people on this site and on the left in general are writing AI off because the hype surrounding it is largely being propped up by annoying tech-dudes/Elon shills. For background, I worked as a research assistant on AI projects in my undergrad and did my thesis on using AI to automate chemical syntheses. There are a lot of legitimately good (and potentially horrifying) use cases for stuff like deep neural networks and reinforcement learning. There aren't many good parallels for just how quickly the field has been advancing, especially over the last year. We are still decades if not centuries away from genuine artificial consciousness, but in the meantime a lot of things are gonna change significantly, for better and for worse.
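
    To make the reinforcement-learning angle concrete, here's a toy sketch (a made-up illustration, not my actual thesis work): frame synthesis planning as an agent choosing reactions, where states are intermediates, actions are candidate reactions, and the reward comes from reaching the target product. The reaction graph and numbers below are invented purely for illustration.

    ```python
    import random
    from collections import defaultdict

    # Hypothetical toy reaction graph: state -> {reaction: (next_state, reward)}
    REACTIONS = {
        "A": {"oxidize": ("B", 0.0), "halogenate": ("C", 0.0)},
        "B": {"couple": ("TARGET", 1.0), "reduce": ("A", -0.1)},
        "C": {"hydrolyze": ("A", -0.1)},
    }

    def q_learning(episodes=500, alpha=0.5, gamma=0.9, epsilon=0.2):
        """Tabular Q-learning over the toy reaction graph."""
        q = defaultdict(float)  # (state, reaction) -> estimated value
        for _ in range(episodes):
            state = "A"
            while state != "TARGET":
                actions = list(REACTIONS[state])
                if random.random() < epsilon:
                    action = random.choice(actions)                     # explore
                else:
                    action = max(actions, key=lambda a: q[(state, a)])  # exploit
                next_state, reward = REACTIONS[state][action]
                future = 0.0 if next_state == "TARGET" else max(
                    q[(next_state, a)] for a in REACTIONS[next_state])
                q[(state, action)] += alpha * (reward + gamma * future - q[(state, action)])
                state = next_state
        return q

    if __name__ == "__main__":
        # The learned values should favor the route A -> oxidize -> B -> couple -> TARGET.
        for (state, action), value in sorted(q_learning().items()):
            print(f"{state} --{action}--> Q={value:.2f}")
    ```

    Real systems work on actual molecular representations and learned reaction models rather than a hand-written table, but the loop is the same idea: try routes, get feedback, improve the policy.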

    • MoneyIsTheDeepState [comrade/them,he/him]
      ·
      1 year ago

      I don't think it's just LLMs, but I do think it's reasonable to think of it as automating data science. I mean that to describe how I see its scope of applications, rather than as a bah humbug, and of course the scope of data science is huge. Being able to automate specific analyses sounds like a game changer.
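
      As a rough illustration of what "automating a specific analysis" could mean (my framing, nothing specific from above): loop over a few candidate models, cross-validate each, and report the best one without a human hand-tuning anything. The dataset and model choices here are arbitrary stand-ins.

      ```python
      from sklearn.datasets import load_breast_cancer
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.linear_model import LogisticRegression
      from sklearn.model_selection import cross_val_score
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler

      # Arbitrary built-in dataset, standing in for whatever data you actually have.
      X, y = load_breast_cancer(return_X_y=True)

      # A couple of candidate models; an automated system would search far more.
      candidates = {
          "logistic_regression": make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000)),
          "random_forest": RandomForestClassifier(n_estimators=200, random_state=0),
      }

      # Score every candidate with 5-fold cross-validation and keep the best.
      scores = {name: cross_val_score(model, X, y, cv=5).mean()
                for name, model in candidates.items()}
      best = max(scores, key=scores.get)
      print(f"best model: {best} (mean CV accuracy {scores[best]:.3f})")
      ```

      Scale that kind of loop across every routine analysis in an org and you can see why it reads as a game changer, even before anything LLM-shaped gets involved.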

      That's all sort of esoteric, though - to most people, AI is being deliberately presented over and over in human-intelligence terms, and that's how they learn to understand it. Like, none of my coworkers approach me to ask what I think about AI's potential uses in chemistry, content moderation, copywriting, statistical modeling, spotting relationships in data that people aren't well suited to catch, or anything else plausible. They want to know whether their geekiest coworker is worried about The Singularity.

      So for the general public, who unironically seem to think of AI in a "Skynet/not Skynet" dichotomy, I tell them "not Skynet."