I'm not very familiar with the metrics for evaluating progress in medical fields, so I'm asking in a general sense.
Absolutely, and it has been doing so for over a decade. Not LLMs, of course; those are not suitable for the job. But there are lots of specialized AI models for medical applications.
My day job is software development for ophthalmology (eye medicine), and people are developing models that can, for example, detect cataracts in an OCT scan long before they become a problem. Grading those by hand is usually pretty hard.
Can you tell me more about your job? As a fellow computer guy, I would really appreciate first-hand experience.
deleted by creator
So… the medical professional takes voice notes, which then get transcribed (OK, this part is fine), and then summarized automatically? I don't think the summary is a good idea. It's not a car factory; the MD should get to know my medical history, not just a summary of it.
deleted by creator
You can’t make an LLM reference only the data it’s summarising. Everything an LLM outputs is a collage of text and patterns from its original training data, and it chooses whatever piece of that data seems most likely given the existing text in its context window. If there’s not a huge corpus of training data, it won’t have a model of English and won’t know how to summarise text at all. And even restricting the training data to medical notes still means it’s potentially going to hallucinate something from someone else’s medical notes that’s commonly associated with things in the current patient’s notes, or potentially leave out something from the current patient’s notes that’s very rare or totally absent from its training data.
deleted by creator
If you end up integrating LLMs in a way where they could impact patient care, that’s actually pretty dangerous, considering their training data includes plenty of fictional and pseudo-scientific sources. That said, it might be okay for medical research applications where accuracy isn’t as critical.
deleted by creator
Depends on how you define AI to some degree, but yeah. Protein folding has basically been solved in the past few years with neural-network-based AI systems.
I have anecdotal evidence that ML applied to image recognition is being used to improve imaging machines (MRI, tomography, etc.).
Thanks for using the right term, “machine learning”. There are tons of papers on Kaggle showcasing better-than-chance (above 0.5) accuracy in predicting positive diagnoses. Not to mention that professional image-recognition machines have been shipped to hospitals and have been in service aiding doctors for almost a decade, well before the AI stock market blew up.
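For anyone curious what that kind of Kaggle-style result looks like in practice, here is a toy sketch of a binary "diagnosis" classifier evaluated against the 0.5 chance baseline. It uses scikit-learn's built-in digits dataset purely as a stand-in for real scan data (an assumption for illustration; actual medical imaging models are far more complex):

```python
# Toy sketch: train a binary classifier and check it beats chance (0.5).
# The digits dataset is a stand-in for real medical imaging data.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

X, y = load_digits(return_X_y=True)
y = (y == 0).astype(int)  # pretend images of "0" are the positive diagnosis

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42, stratify=y)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
acc = accuracy_score(y_test, clf.predict(X_test))
print(f"test accuracy: {acc:.3f}")
```

Note that "above 0.5" is a weak bar: for imbalanced classes like this one, always predicting "negative" already scores much higher, which is why serious papers report AUC or sensitivity/specificity rather than raw accuracy.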
I remember when “AI” was just the pathfinding in video games.
That, and the predefined counter-attack schemes in fighting games and strategy games.
Yes. Objectively.
I know a few attendings who use it for dictation. It records the entire convo with the patient, plus whatever the doc dictates to it, and by the time they’re out of the room a note is typed up in the right format, so they don’t have to stare at the computer the whole visit. According to them, it’s a lot more time-efficient to have it dictate the notes and double-check them at the end of the day, versus typing something up after every patient. It’s approved by the hospital and integrated directly into the EMR, so I guess it’s HIPAA compliant.
The actual good use for AI, as opposed to the big-tech chatbots.
I’ve read a pair of studies on AI and the prediction of cardiovascular disease that looked very promising. Too bad capitalism is using it to fuck us over.
deleted by creator