I'm not very familiar with the metrics for evaluating progress in medical fields, so I'm asking in a general sense.

  • dfyx@lemmy.helios42.de · 30 points · 3 days ago

    Absolutely, and it has done so for over a decade. Not LLMs, of course; those aren't suitable for the job. But there are lots of specialized AI models for medical applications.

    My day job is software development for ophthalmology (eye medicine) and people are developing models that can, for example, detect cataracts in an OCT scan long before they become a problem. Grading those by hand is usually pretty hard.

      • ThirdConsul@lemmy.ml · 3 points · 3 days ago

        So… the medical professional takes voice notes, which then get transcribed (OK, this is fine), and then summarized automatically? I don't think the summary is a good idea. It's not a car factory; the MD should get to know my medical history, not just a summary of it.

          • AnyOldName3@lemmy.world · 4 points · 3 days ago

            You can't make an LLM reference only the data it's summarising. Everything an LLM outputs is a collage of text and patterns from its original training data, and it's choosing whatever piece of that data seems most likely given the existing text in its context window. If there's not a huge corpus of training data, it won't have a model of English and won't know how to summarise text. And even restricting the training data to medical notes won't stop it from potentially hallucinating something from someone else's medical notes that's commonly associated with things in the current patient's notes, or from leaving out something in the current patient's notes that's very rare or totally absent from its training data.
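
            As a toy illustration of that point (a tiny bigram model standing in as a stand-in for an LLM; the "training notes" here are entirely made up), everything the model emits is sampled from patterns seen during training, so prompting it with one patient's note can still surface associations learned from other patients' notes:

```python
import random
from collections import defaultdict

# Hypothetical toy "training corpus": fragments of OTHER patients' notes.
training_notes = ("patient has cataract in left eye . "
                  "patient has glaucoma risk .").split()

# Build a bigram table: each word maps to the words that followed it in
# training. A real LLM is vastly bigger, but the principle is the same:
# output tokens are drawn from training-data patterns, not from the text
# being summarised.
model = defaultdict(list)
for a, b in zip(training_notes, training_notes[1:]):
    model[a].append(b)

def generate(prompt_word, n=6):
    out = [prompt_word]
    for _ in range(n):
        options = model.get(out[-1])
        if not options:
            break
        out.append(random.choice(options))
    return " ".join(out)

# Even if the current patient's note never mentions glaucoma, the model
# can emit "glaucoma risk", because that association exists in training.
print(generate("patient"))
```

The same mechanism cuts both ways: a continuation that was never in the prompt can appear (a "hallucination"), and a word the table never saw produces nothing at all, which is the analogue of dropping rare details.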

              • cecinestpasunbot@lemmy.ml · 2 points · 3 days ago

                If you end up integrating LLMs in a way where they could impact patient care, that's actually pretty dangerous, considering their training data includes plenty of fictional and pseudoscientific sources. That said, they might be okay for medical research applications where accuracy isn't as critical.

  • testfactor@lemmy.world · 36 points (1 down) · 3 days ago

    Depends on how you define AI to some degree, but yeah. Protein folding has basically been solved in the past few years with neural-network based AI systems.

  • Bobo The Great@sopuli.xyz · 19 points · 3 days ago

    I have anecdotal evidence that ML applied to image recognition is being used to improve imaging machines (MRI, tomography, etc.).

    • CarrotsHaveEars@lemmy.ml · 12 points · 3 days ago

      Thanks for using the right term, "machine learning". There are tons of papers on Kaggle showcasing higher than 0.5 accuracy in predicting a positive diagnosis. Not to mention that professional image-recognition machines have been in hospitals, in service aiding doctors, for almost a decade. That was before the AI stock market blew up.
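
      For what it's worth, the accuracy those papers report is just the fraction of predictions matching the ground-truth labels; a minimal sketch with made-up labels:

```python
# Made-up ground-truth and predicted labels (1 = positive diagnosis).
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 1, 0, 0, 0, 1, 1]

# Accuracy = correct predictions / total predictions.
accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
print(accuracy)  # 6 of 8 correct -> 0.75
```

Note that 0.5 is only a meaningful baseline when the classes are balanced; for a rare diagnosis, always predicting "negative" can score far above 0.5 while catching nothing.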

  • lefty7283@lemmy.world · 8 points · 3 days ago

    I know a few attendings who use it for dictation. It records the entire conversation with the patient, plus whatever the doc dictates to it, and by the time they're out of the room a note is typed up in the right format, so they don't have to stare at the computer the whole visit. According to them, it's a lot more time-efficient to have it dictate the notes and double-check them at the end of the day, versus typing something up after every patient. It's approved by the hospital and integrated directly into the EMR, so I guess it's HIPAA compliant.

  • ☂️-@lemmy.ml · 1 point · edited · 1 day ago

    The actual good use for AI, as opposed to the big tech chatbots.

    I've read a pair of studies related to AI and prediction of cardiovascular disease that looked very promising. Too bad capitalism is using it to fuck us over.