I know it’s marketing. Which is why I think they should kill themselves. Perpetuating the “AI will ruin humanity” narrative - not because of violating labor laws and committing war crimes, but because le terminator - deserves nothing but scorn and death. I WISH the evil AI was real so these people would be tormented for eternity.

  • TheCaconym [any] · 8 months ago

    LLMs do great if you don't consider them to be always right. Review the result the same as if it were a random post

    That was my point: IMO this makes them useless for almost all use cases.

    LLMs can even assist with coding

    I covered that as well.

    And yes, llama for example runs on off-the-shelf consumer computers. Almost nobody except online geeks uses LLMs like this - certainly not most corporations. They all send critical data to third parties online instead.
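
    For what it's worth, "runs on off-the-shelf consumer computers" really is just a few lines these days. Here's a minimal sketch using the llama-cpp-python bindings; the model file path and the generation settings are placeholders, not anything specific to this thread:

    ```python
    # Minimal local-inference sketch with llama-cpp-python (pip install llama-cpp-python).
    # The GGUF path is a placeholder; point it at whatever quantized model you have downloaded.
    from llama_cpp import Llama

    llm = Llama(
        model_path="./models/example-7b.Q4_K_M.gguf",  # placeholder path
        n_ctx=2048,                                    # context window size
    )

    out = llm(
        "Explain in one sentence why local inference keeps data on-device.",
        max_tokens=128,    # cap the length of the reply
        temperature=0.7,   # mild sampling randomness
    )
    print(out["choices"][0]["text"])
    ```

    Nothing in that snippet leaves the machine, which is exactly the point about not sending critical data to third parties.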

    For image generation I can see a lot more use cases. For LLMs, again, I can see a few paltry ones, but nowhere near what the hype is currently pretending will be viable.

    • Railcar8095@lemm.ee · 8 months ago

      I see your point, but it's a bit short-sighted to think "it's not perfect now, therefore useless". Even when it's consistently giving better answers than a human, it will still make mistakes.

      Three years ago we wouldn't have had this conversation; it would have sounded like science fiction.

      BTW, I am not saying the current hype is justified, just that this is tech we will keep in the future, unlike some others that have failed.

      • usernamesaredifficul [he/him] · 8 months ago

        Even when it's consistently giving better answers than a human, it will still make mistakes.

        It's not giving consistently better answers than a human; it consistently gives answers on the level of a 12-year-old writing a report by rewording the Wikipedia article.

        • seeking_perhaps [he/him] · 8 months ago

          Yeah, it's like a 12-year-old with infinite Google time: sometimes it will spit out the right answer, but it doesn't know enough to know why that answer is right or how to check it.

      • TreadOnMe [none/use name] · edit-2 · 8 months ago

        It is highly doubtful we will keep this tech or ever use it at scale for anything actually useful. The most I've seen it used for is rapid photo touch-ups in graphic design, and tech for that has existed for years.

        What it could plausibly be used for effectively is bias studies, but because everyone is obsessed with having it replicate 'truth', something even basic human language and culture is ill-equipped for, it will never actually work as intended. We will waste billions of dollars on it when we could be using that money to build detailed, specific statistical models of events that actually reflect those events as closely as we can scientifically.