• EnsignRedshirt [he/him]
    ·
    7 months ago

    I’ve been looking for an appropriate analogy for the current AI hype and this sums it up perfectly.

    • dave@feddit.uk
      ·
      7 months ago

      Except models trained on medical images are actually pretty good at diagnosing some disorders. Models trained on random samples of the internet, not so much.

      John McCarthy was right—AI is a terrible term.

      • DefinitelyNotAPhone [he/him]
        ·
        7 months ago

        The issue is that profit-driven hospitals will look at a model that can greatly assist in diagnosing certain things and go "Cool, we can fire those specialists and have some pre-med run the model and make an accurate diagnosis".

        • dave@feddit.uk
          ·
          7 months ago

          Technology replacing people has been a pretty consistent theme of the last hundred years or so—how many actual people does it take to build a car? What about all those skilled engineers? Humans have been building tools in order to put in less effort since the stone age. I don’t think we’re going to argue our way out of this one…

      • EnsignRedshirt [he/him]
        ·
        7 months ago

        Properly designed tools with good data will absolutely be useful. What I like about this analogy with the talking dog and the braindead CEO is that it points out how people are looking at ChatGPT and Dall-E and going "cool, we can just fire everyone tomorrow" and no, you most certainly can't. These are impressive tools that are still not adequate replacements for human beings for most things. Even in the example of medical imaging, there's no way any part of the medical establishment is going to allow for diagnosis without a doctor verifying every single case, for a variety of very good reasons.

        There was a case recently of an Air Canada chatbot that gave bad information to a traveler about a discount/refund, which eventually resulted in the airline being forced to honor what the chatbot said, because of course they have to honor what it says. It's the representative of the company; that's what "customer service representative" means. If a customer can't trust what the bot says, then the bot is useless. The function that the human serves still needs to be fulfilled, and a big part of that function is dealing with edge cases that require some degree of human discretion. In other words, you can't even replace customer service reps with "AI" tools because they are essentially talking dogs, and a talking dog can't do that job.

        Agreed that 'artificial intelligence' is a poor term, or at least a poor way to describe LLMs. I get the impression that some people believe that the problem of intelligence has been solved, and it's just a matter of refining the solutions and getting enough computing power, but the reality is that we don't even have a theoretical framework for how to create actual intelligence aside from doing it the old-fashioned way. These LLM/AI tools will be useful, and in some ways revolutionary, but they are not the singularity.

  • zifnab25 [he/him, any]
    ·
    7 months ago

    There's a special sadness in techbros spending the last sixty years trying to invent the Robot Dog, given how many real dogs are in need of adoption and ready to give you their entire heart and soul for a head scratch.

    • NuclearDolphin@lemmy.ml
      ·
      7 months ago

      They're trying to invent military murder quadrupeds that look like dogs. The cutesy dog-like prancing is PR.