top | item 46719473


Gazoche | 1 month ago

Evangelists keep insisting that healthcare is one of the things that AI will revolutionize in the coming years, but I just don't get it. To me it's not even clear what they mean by "AI" in this context (and I'm not convinced it's clear to them either).

If they mean "machine learning", then sure, there are applications in cancer detection and the like, but development there has been moving at a steady pace for decades and has nothing to do with the current hype wave of GenAI, so there's no reason to assume it's suddenly going to go exponential. I used to work in that field and I'm confident it's not going to change overnight: progress there is slow not because of the models, but because data is sparse and noisy, labels are even sparser and noisier, deployment procedures are rigid, and legal compliance is a nightmare.

If they mean "generative AI", then how is that supposed to work exactly? Asking LLMs for a medical diagnosis is no better than asking "the Internet at large". They only return the most statistically likely output given their training corpus (that corpus being the Internet as a whole), so it's more likely your diagnosis will be based on a random Reddit comment the LLM ingested somewhere than on an actual medical paper.

The only plausible applications I can think of are tasks such as summarizing papers, acting as augmented search engines for datasets and papers, or maybe automating some menial administrative tasks. Useful, for sure, but not revolutionary.


croon|1 month ago

The most statistically likely output given your diligently described symptoms could still be useful. The prohibitive cost in healthcare in general is likely your time with your doctor. If you could "consult" with a dumb LLM beforehand and give the doctor a couple of different avenues to look at, which they can then shoot down or explore further, that would likely save time compared to having them prod you through an exhaustive binary-tree exploration of your symptoms.

This from a huge LLM skeptic in general. It doesn't have to be right all the time if it in aggregate saves time doctors can spend diagnosing you.

Gazoche|1 month ago

Sure, but what confidence do you have that what the "dumb" LLM says is worth its salt? It's no different from aggregating the results of a Reddit search, or perhaps even worse, because LLMs lack the intent or common-sense filter of a human. It could be combining two contradicting sources in a way that only makes sense statistically, or regurgitating joke answers without understanding the context (the infamous "you should eat at least one small rock per day").

NoGravitas|1 month ago

Realistically the more likely use will be medical transcription - making an official record of doctors' patient notes. The inevitable errors will reduce the quality of patient care, but they will let doctors see more patients in a day, which is what the healthcare companies care about.

wosined|1 month ago

Such "AI" has already existed for decades. Look up expert systems.
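For anyone who hasn't run into them: expert systems were essentially hand-written if-then rule bases with an inference engine on top. A toy forward-chaining sketch (the rules and findings below are invented for illustration, not real medical knowledge):

```python
# Minimal forward-chaining rule engine in the spirit of classic
# medical expert systems (e.g. MYCIN). Rules are made up.

RULES = [
    # (required findings, conclusion added when all are present)
    ({"fever", "cough"}, "possible respiratory infection"),
    ({"possible respiratory infection", "chest pain"}, "consider chest X-ray"),
]

def infer(findings):
    """Fire rules repeatedly until no new conclusions are derived."""
    known = set(findings)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in RULES:
            if conditions <= known and conclusion not in known:
                known.add(conclusion)
                changed = True
    return known - set(findings)  # only the derived conclusions

print(sorted(infer({"fever", "cough", "chest pain"})))
```

The knowledge lives entirely in the hand-curated rule base, which is exactly why these systems were expensive to build and maintain.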

krackers|1 month ago

No, doctors are smart enough as a group to have inserted themselves as middlemen and codified it into law, so it will not revolutionize healthcare in the meaningful sense of cutting through the bureaucracy. You may be able to use LLMs to get a suggested diagnosis once tests and symptoms are communicated, but you're going to need to go to the doctor to get a referral for the tests/imaging, for formal recognition of your issue (as needed for things like workplace accommodations), and of course for any treatments as well.

At best, if you're lucky enough to have a receptive doctor, you can use it to nudge them in the right direction. But until direct-to-consumer sales of medical equipment and tests are allowed, the medical profession is well insulated. Regulation makes it impossible to "take healthcare into your own hands" even if you want to.

NoGravitas|1 month ago

> Evangelists keep insisting that healthcare is one of the things that AI will revolutionize in the coming years, but I just don't get it. To me it's not even clear what they mean by "AI" in this context (and I'm not convinced it's clear to them either).

It's a more-or-less intentional equivocation between different meanings of AI: as you note, machine learning vs. generative AI. They want to point at the real but unsexy potential of ML for medical use in order to pump up the perceived value of LLMs. They want to imply to the general public and investors that LLMs are going to cure cancer.

wrenky|1 month ago

Totally anecdotal, but recently my wife had to go to urgent care for something wrong with her ankle. They sent a 4-5 page sheet of arcane terms and diagnoses to her care app (relayed to me via text), and I just slammed that into Gemini and asked "what does this mean", and it did quite well! It gave possible causes, what it meant for her in the long term vs. the short term, and ways to prevent it. I had a better understanding of what was wrong before the doctor even got to my wife in the waiting room!

Obviously still double-check things, but it was a moment of clarity I hadn't really had before this. We still needed the doctor and all their experience to diagnose and fix things, but relaying that info back to me is something doctors are only okay at. Try it out: take a summary sheet of a recent visit or incident and feed it in.

matty22|1 month ago

Not to mention that the LAST place I want an all-consuming, privacy-destroying data beast is anywhere near my health data.