• Thorry84@feddit.nl
    27 days ago

    This is pretty dumb. Machine learning algorithms (fuck off with calling them AI) are especially good at spotting signs of disease in data such as X-rays, CT and MRI scans. It's the one place they really help save time and prevent mistakes. And even if it's just to flag shit for a second opinion by a doctor rather than replace the doctor, that's still super useful. Pattern recognition is hard, and these kinds of algorithms are very good at it when given the right source data to work from.
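    The "flag for a second opinion" idea boils down to thresholding a model's per-scan probability and queueing anything suspicious for a human. A minimal sketch of that triage step (the scores, scan IDs, and the 0.30 threshold are all invented for illustration):

    ```python
    # Triage sketch: a model emits a disease probability per scan, and
    # anything at or above a threshold is queued for radiologist review.
    # All values here are made up; a real threshold would be tuned for
    # high recall on validation data.
    FLAG_THRESHOLD = 0.30

    def triage(scan_scores, threshold=FLAG_THRESHOLD):
        """Return the scan IDs a doctor should double-check."""
        return [scan_id for scan_id, p in scan_scores.items() if p >= threshold]

    scores = {"scan-001": 0.02, "scan-002": 0.87, "scan-003": 0.41}
    print(triage(scores))  # scan-002 and scan-003 get a human review
    ```

    The model never makes the final call; it just reorders the doctor's queue.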

    If only the media and big corps would stop claiming LLMs are general AI, then maybe people would stop using them for stuff they're clearly not good at and not meant for.

    • jsomae@lemmy.ml
      27 days ago

      This isn't dumb. This is a very good study as it is helping to remind people that these fancy new tools aren't good at everything. The media reporting on this is doing a service.

      Edit: my bad for making two responses

    • jsomae@lemmy.ml
      27 days ago

      Can't stop people from calling it AI. People have called video game bots AI since the 90s, even in industry. Any algorithm is a form of artificial intelligence, really. LLMs and machine vision are multipurpose, though I agree that calling them general-purpose is still a stretch.

  • ResoluteCatnap@lemmy.ml
    27 days ago

    As others have said, you don't need (and shouldn't use) an LLM for a classification task like this. There are machine learning models that can handle this and identify underlying patterns that humans cannot easily detect. And yes, they can get accuracy and precision scores much higher than 50%.
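    To illustrate the point: when the data actually has structure, even a trivial classifier blows past 50%. A toy sketch (nothing medical here; the two synthetic clusters stand in for "healthy" vs "disease" feature vectors) using a nearest-centroid rule:

    ```python
    # Toy nearest-centroid classifier on two synthetic 2-D clusters,
    # showing that a simple ML model easily beats coin-flip accuracy
    # and precision when classes are separable. Data is invented.
    import math
    import random

    random.seed(42)

    def make_points(cx, cy, n):
        return [(random.gauss(cx, 1.0), random.gauss(cy, 1.0)) for _ in range(n)]

    # label 0 = "healthy", label 1 = "disease" -- purely synthetic
    train = [(p, 0) for p in make_points(0, 0, 100)] + \
            [(p, 1) for p in make_points(3, 3, 100)]
    test  = [(p, 0) for p in make_points(0, 0, 100)] + \
            [(p, 1) for p in make_points(3, 3, 100)]

    def centroid(points):
        xs, ys = zip(*points)
        return (sum(xs) / len(xs), sum(ys) / len(ys))

    c0 = centroid([p for p, y in train if y == 0])
    c1 = centroid([p for p, y in train if y == 1])

    def predict(p):
        # assign to whichever class centroid is closer
        return 0 if math.dist(p, c0) < math.dist(p, c1) else 1

    pairs = [(predict(p), y) for p, y in test]
    tp = sum(1 for yhat, y in pairs if yhat == 1 and y == 1)
    fp = sum(1 for yhat, y in pairs if yhat == 1 and y == 0)
    accuracy = sum(1 for yhat, y in pairs if yhat == y) / len(pairs)
    precision = tp / (tp + fp)
    print(f"accuracy={accuracy:.2f} precision={precision:.2f}")
    ```

    A real medical-imaging model would be a CNN trained on labeled scans, but the principle is the same: a classifier fitted to structured data, not a language model.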

    What an incredibly stupid article.