• ALoafOfBread@lemmy.ml
    ·
    edit-2
    2 months ago

    Now make mammograms not $500 and not have a 6 month waiting time and make them available for women under 40. Then this'll be a useful breakthrough

  • cecinestpasunbot@lemmy.ml
    ·
    2 months ago

    Unfortunately, AI models like this one often never make it to the clinic. The model could be impressive enough to identify 100% of cases that will develop breast cancer. However, if it has a false positive rate of, say, 5%, its use may actually create more harm than it prevents.
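    To put numbers on that: when the outcome is rare, even a small false positive rate means most flags are wrong. A minimal sketch, assuming an illustrative 5% false positive rate, perfect sensitivity, and ~0.3% of screened women developing cancer in the window (illustrative figures, not from the article):

    ```python
    # Positive predictive value: of all the women the model flags, what
    # share will actually develop cancer? Illustrative numbers only.
    def positive_predictive_value(sensitivity, false_positive_rate, prevalence):
        true_pos = sensitivity * prevalence
        false_pos = false_positive_rate * (1 - prevalence)
        return true_pos / (true_pos + false_pos)

    ppv = positive_predictive_value(1.0, 0.05, 0.003)
    print(f"Share of flags that are true positives: {ppv:.1%}")  # ~5.7%
    ```

    So even a "100% sensitive" model would hand most of its positive results to healthy women, which is where the harm (anxiety, biopsies, overtreatment) comes from.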

    • Maven (famous)@lemmy.zip
      ·
      2 months ago

      Another big thing to note: we recently had a different but VERY similar headline about an AI that caught typhoid early and flagged it more accurately than doctors could.

      But when they examined the AI to see what it was doing, it turned out that it was weighing the specs of the machine used to do the scan... An older machine means the area was likely poorer and therefore more likely to have typhoid. The AI wasn't pointing out whether someone had typhoid; it was just telling you whether they were in a rich area or not.

    • ColeSloth@discuss.tchncs.de
      ·
      2 months ago

      Not at all, in this case.

      A false positive of even 50% can mean telling the patient "they are at a higher risk of developing breast cancer and should get screened every 6 months instead of every year for the next 5 years".

      Keep in mind that women have about a 12% chance of getting breast cancer at some point in their lives. During the highest-risk years it's about a 2 percent chance per year, so a machine with a 50% false positive rate for a 5-year prediction would still only be telling something like 15% of women to be screened more often.
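      A quick sanity check of those numbers (a hedged sketch: the 2%-per-year figure, and reading "50% false positive" as half of all flags being wrong, are assumptions taken from the comment, not from the article):

      ```python
      # 5-year risk from an assumed ~2%/year hazard, and the share of women
      # flagged if half of the model's positive calls are false positives.
      annual_risk = 0.02
      years = 5
      true_risk = 1 - (1 - annual_risk) ** years  # ~9.6% develop cancer in 5 years
      ppv = 0.5                                   # half of the flags are correct
      flagged = true_risk / ppv                   # fraction told to screen more often
      print(f"5-year risk: {true_risk:.1%}, flagged: {flagged:.1%}")
      ```

      That lands around 19%, a bit above the comment's "like 15%" but in the same ballpark: a coarse test mostly changes screening frequency, not treatment.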

  • mayo_cider [he/him]
    ·
    edit-2
    2 months ago

    Neural networks are great for pattern recognition, unfortunately all the hype is in pattern generation and we end up with mammograms in anime style

    • D61 [any]
      ·
      2 months ago

      Doctor: There seems to be something wrong with the image.

      Technician: What's the problem?

      Doctor: The patient only has two breasts, but the image that came back from the AI machine shows them having six breasts and much MUCH larger breasts than the patient actually has.

      Technician: sighs

      • mayo_cider [he/him]
        ·
        2 months ago

        Why does the paperwork suddenly claim the patient is a 600-year-old shape-shifting dragon?

    • fossilesque@mander.xyz
      hexagon
      M
      ·
      2 months ago

      You could participate or complain.

      https://news.mit.edu/2019/using-ai-predict-breast-cancer-and-personalize-care-0507

      • NuraShiny [any]
        ·
        2 months ago

        Complain to whom? Some random Twitter account? Why would I do that?

    • Flyberius [comrade/them]
      ·
      2 months ago

      Honestly this is a pretty good use case for LLMs and I've seen them used very successfully to detect infection in samples for various neglected tropical diseases. This literally is what AI should be used for.

      • NuraShiny [any]
        ·
        2 months ago

        Sure, agreed. Too bad 99% of its use is still stealing from society to make a few billionaires richer.

        • Flyberius [comrade/them]
          ·
          2 months ago

          I also agree.

          However, these medical LLMs have been around for a long time; they don't use horrific amounts of energy, nor do they make billionaires richer. They are the sorts of things that a hobbyist can put together provided they have enough training data. Further, they can run offline, allowing doctors to perform tests in the field, as I witnessed first-hand with soil-transmitted helminth surveys in Mozambique. That means that instead of checking thousands of stool samples manually, those same people can be paid to collect more samples or distribute the drugs that cure the disease in affected populations.

          • NuraShiny [any]
            ·
            2 months ago

            I highly doubt the medical data needed to do this is available to a hobbyist, or that someone like that would have the know-how to train the model.

            But yea, rare non-bad use of AI. Now we just need to eat the rich to make it a good for humanity. Let's get to that I say!

  • earmuff@lemmy.dbzer0.com
    ·
    2 months ago

    Serious question: is there a way to get access to medical imagery as a non-student? I would love to do some machine learning with it myself, as I see lots of potential in image analysis in general. 5 years ago I created a model that was able to spot certain types of ships based only on satellite imagery, which were not easily detectable by eye and ignoring the fact that one human cannot scan 15k images in one hour. Similar use case with medical imagery - seeing the things that are not yet detectable by human eyes.

    • booty [he/him]
      ·
      2 months ago

      5 years ago I created a model that was able to spot certain types of ships based only on satellite imagery, which were not easily detectable by eye and ignoring the fact that one human cannot scan 15k images in one hour.

      what is your intended use case? are you trying to help government agencies perfect spying? sounds very cringe ngl

      • earmuff@lemmy.dbzer0.com
        ·
        2 months ago

        My intended use case is to find possibilities how ML can support people with certain tasks. Science is not political; I cannot control what my technology gets abused for. That's no reason to stop science entirely: there will always be someone abusing something for their own gain.

        But thanks for assuming instead of first asking what the context was.

        • MaeBorowski [she/her]
          ·
          2 months ago

          find possibilities how ML can support people with certain tasks

          Marxism-Leninism? anakin-padme-2

          Oh, Machine Learning. sicko-wistful

          Science is not political

          in an ideal world, maybe, but that is not our world. In reality, science is always, always political. It is unavoidable.

          • earmuff@lemmy.dbzer0.com
            ·
            edit-2
            2 months ago

            Typical hexbear reply lol

            Unfortunately, you are right, though. Science can be political. My science is not. I like my bubble.

            • Kuori [she/her]
              ·
              2 months ago

              that's just going through life with blinders on

            • MaeBorowski [she/her]
              ·
              2 months ago

              Typical hexbear reply

              Unfortunately, you are right

              Yes, typically hexbear replies are right.

              It's not unfortunate though, it's simply a matter of having an understanding of the world and a willingness to accept it and engage with it. It's too bad that you seem not to want that understanding or that you lack the willingness to accept it.

              My science is not. I like my bubble.

              How can you possibly square that first short sentence with the second? Are you really that willfully hypocritical? Yes, "your" science is political. No science escapes it, and the people who do science thinking themselves and their work unaffected by ideology are the most affected by it. No wonder you like your bubble: from within it, you don't have to concern yourself with the real world or even the smallest sliver of self-reflection. But all it is is a happy, self-reinforcing delusion. You pretend to be someone who appreciates science, but if you truly did, you would be doing everything you can to recognize your unavoidable biases rather than denying them while simultaneously wallowing in them, which is what you are openly admitting to doing whether you realize it or not.

        • booty [he/him]
          ·
          2 months ago

          My intended use case is to find possibilities how ML can support people with certain tasks.

          weaselly bullshit. how exactly do you intend for people to use technology that identifies ships via satellite? what is your goal? because the only use cases I can see for this are negative

          This is no reason to stop science entirely

          if the only thing your tech can be used for is bad then you're bad for innovating that tech

          • earmuff@lemmy.dbzer0.com
            ·
            2 months ago

            Ever thought about identifying ships full of refugees and send help, before their ships break apart and 50 people drown?

            Of course you have not. Your hatred makes you blind. Closed minds have never been able to see why science is important. Now enjoy spreading hate somewhere else.

            • booty [he/him]
              ·
              2 months ago

              Ever thought about identifying ships full of refugees and send help, before their ships break apart and 50 people drown?

              No, I didn't think about that. If you did, why exactly were you so hostile to me asking what use you thought this might serve?

              • earmuff@lemmy.dbzer0.com
                ·
                2 months ago

                I don’t think my reply was hostile, I just criticized your behavior assuming things, before you know the whole truth. I kept everything neutral and didn’t feel the urge to start a discussion with someone already on edge. I hope you understand and also learn that not everything in this world is entirely evil. Please stay curious - don’t assume.

                • booty [he/him]
                  ·
                  2 months ago

                  I just criticized your behavior assuming things, before you know the whole truth.

                  I didn't assume anything. I asked you what your intended use case was and you responded with vague platitudes, sarcasm, and then once I pressed further, insults. Try re-reading your comments from a more objective standpoint and you'll find neutrality nowhere within them.

                  • earmuff@lemmy.dbzer0.com
                    ·
                    2 months ago

                    (…) are you trying to help government agencies perfect spying? sounds very cringe ngl

                    Tell me again which part of your reply shows me you are actually interested in an objective discussion, without assuming things and wanting to start a fight for no reason.

                    I struggle to find that part.

                    • booty [he/him]
                      ·
                      edit-2
                      2 months ago

                      I pointed out what I considered (and still consider) to be the most likely use for the tech you were describing, while asking you if that was your intention. A simple "no, actually I was thinking more about another use case" would have been a far more neutral and reasonable response. Instead, you assumed I was speaking in bad faith and responded in kind. You are the only one making assumptions or starting fights for no reason.

  • Slotos@feddit.nl
    ·
    2 months ago

    https://youtube.com/shorts/xIMlJUwB1m8?si=zH6eF5xZ5Xoz_zsz

    Detecting is not enough to be useful.

    • EatATaco@lemm.ee
      ·
      2 months ago

      The test is 90% accurate; that's still pretty useful, especially if you are simply putting people into a high-risk group that needs to be more closely monitored.

      • Slotos@feddit.nl
        ·
        2 months ago

        “90% accurate” is a non-statement. It’s like you haven’t even watched the video you’re responding to. Also, where the hell did you pull that number from?

        What matters is how specific and how sensitive it is. And if Mirai in https://www.science.org/doi/10.1126/scitranslmed.aba4373 is the same model that the tweet mentions, then neither its specificity nor its sensitivity reaches 90%. And considering that the image in the tweet is traceable to a publication in the same year (https://news.mit.edu/2021/robust-artificial-intelligence-tools-predict-future-cancer-0128), I’m fairly sure that it’s the same Mirai.
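        To see why a single "accuracy" number is a non-statement, consider two screens that both score ~90% accurate on a population where 1% truly have the condition (illustrative numbers, not Mirai's):

        ```python
        # Accuracy vs. sensitivity/specificity at 1% prevalence.
        # Illustrative numbers only, not from the Mirai paper.
        def metrics(sens, spec, prevalence=0.01):
            tp = sens * prevalence
            tn = spec * (1 - prevalence)
            fp = (1 - spec) * (1 - prevalence)
            accuracy = tp + tn
            ppv = tp / (tp + fp)  # share of positive calls that are right
            return accuracy, ppv

        print(metrics(sens=0.10, spec=0.908))  # misses 90% of cases, still ~90% "accurate"
        print(metrics(sens=0.90, spec=0.90))   # balanced test, also ~90% accurate
        ```

        Both come out around 90% accurate, but the first one misses nine cases out of ten; that's why sensitivity and specificity have to be reported separately.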

        • EatATaco@lemm.ee
          ·
          edit-2
          2 months ago

          Also, where the hell did you pull that number from?

          Well, you can just do the math yourself; it's pretty straightforward.

          However, more to the point, it's taken right from around 38 seconds into the video. Kind of funny to be accused of "not watching the video" by someone who is implying the number was pulled from nowhere, when it's right in the video.

          I certainly don't think this closes the book on anything, but I'm responding to your claim that it's not useful. If this is a cheap and easy test, it's a great screening tool putting people into groups of low risk/high risk for which further, maybe more expensive/specific/sensitive, tests can be done. Especially if it can do this early.

  • MonkderVierte@lemmy.ml
    ·
    edit-2
    2 months ago

    Btw, my dentist used AI to identify potential problems in a radiograph. The result was pretty impressive. Have to get a filling tho.

    • D61 [any]
      ·
      2 months ago

      Much easier to assume the training data isn't garbage when the AI expert system only has a narrow scope, right?

      • somename [she/her]
        ·
        2 months ago

        Yeah, machine learning actually has a ton of very useful applications. It’s just that, predictably, the dumbest and most toxic manifestations of it get hyped up in a capitalist system.

  • Bob@feddit.nl
    ·
    2 months ago

    I had a housemate a couple of years ago who had a side job where she'd look through a load of these and confirm which were accurate. She didn't say it was AI though.

  • humbletightband@lemmy.dbzer0.com
    ·
    1 month ago

    Haha, I love Gell-Mann amnesia. A few weeks ago there was news about speeding up the internet to a gazillion bytes per nanosecond, and it turned out to be fake.

    Now this thing is all over the internet and everyone believes it.