• AtmosphericRiversCuomo [none/use name]
    8 days ago

    Yeah, you're right! What use is the entirety of medical knowledge in every language, REGURGITATED in a context-aware fashion, to someone who can't afford a doctor? After all, it's not cognition in the same way that I do it.

    How many shitty doctors getting nudged towards a better outcome for real people does this tech need to demonstrate to offset its OCEAN-BOILING costs, do you think?

    • plinky [he/him]
      hexagon
      8 days ago

      at least 3 million.

      Cite your sources, mate; ai-driven image recognition of lung issues is kind of a semi-joke in the field.

      The majority of shit health outcomes isn't missing esoteric cancer on an image. It's an overworked nurse missing a bone fracture, it's not getting urea/blood analysis done in time, it's a doctor prescribing antibiotics without probiotics afterwards, it's a drug being locked up by IP in a poor country, or a drug costing too much because a johnson acquisition spent that much money on the patent, or the nuts pricing of clinical trials. Developing a new working drug costs like 40 mil; trialing it through the FDA costs 2 billion. Now you tell me how ai cutting that 40 mil to 20 mil will make it cheaper.

      The majority of healthcare work is, you know, work. Patient care, surgery, not fucking Doctor House, MD finding the right drug. 95% of cases could be solved by an honest WebMD, congrats. Who will set your broken arm? Will ai do an MRI scan of your ACL? Maybe an X-ray? A dipshit can look at an image and say that's wrong; ai can tell you to put it in a cast and avoid lateral movement for a month, so what then?

    • MoreAmphibians [none/use name]
      8 days ago

      Can't wait to pick up my prescription for hyperactivated antibiotics.

      https://www.cio.com/article/3593403/patients-may-suffer-from-hallucinations-of-ai-medical-transcription-tools.html

      How often do you think the use of AI improves medical outcomes vs makes them worse? It's always super-effective in the advertising, but in real-life use it seems to be below 50%. So we're boiling the oceans to make medical outcomes worse.

      To answer your question, AI would need to demonstrate improved medical outcomes at least 50% of the time (in actual use) for me to even consider it useful.

        • ferristriangle [he/him]
          8 days ago

          They've provided a source, indicating that they have done investigation into the issue.

          The quote isn't "If you don't do the specific investigation that I want you to do and come to the same conclusion that I have, then no right to speak."

          If you believe their investigation led them to an erroneous position, it is now incumbent on you to make that case and provide your supporting evidence.

          • Cysioland@lemmygrad.ml
            8 days ago

            Y'all are suffering because of the lack of downvotes, so you need to actually dunk on someone instead of downvoting and moving on

              • Cysioland@lemmygrad.ml
                7 days ago

                ChatGPT is censored; this calls for some more advanced LLMing, perhaps even a fine-tune based on the Hexbear comment-section argument corpus. It's only ethical if we do it for the purpose of dunking on chuds/libs.