top | item 38451394

blipmusic | 2 years ago

Isn’t that whataboutism at its best? Those two things are completely unrelated.

mannyv | 2 years ago

No, it's showing that the risk of errors exists even without AI.

AI doesn't necessarily make that risk higher or lower a priori.

Plus, if you knew how much of current medical practice exists without evidence, you wouldn't be worrying about AI.

blipmusic | 2 years ago

Maybe it’s ok to worry about both? Not trusting "arbitrary thing A" does not logically make "arbitrary thing B" more trustworthy. I do realise that these models are intended to (incrementally) represent collective knowledge and may get there in the future. But if you worry about A, why not worry about B, which is based on A?

not2b | 2 years ago

You seem to be assuming, without any evidence, that LLMs giving medical advice are roughly as accurate as doctors who actually examine the patient rather than just processing language, simply because you know that medical mistakes are common.

robertlagrant | 2 years ago

It's not whataboutism at its best, no. Just as with self-driving cars, medical AIs don't have to be perfect, or even to cause zero deaths. They just have to improve the current situation.

blipmusic | 2 years ago

It depends on who the end user is. As an aid for a trained physician, who is in a better position to spot the hallucinations, it may be fine, whereas a self-medicating patient could be at risk. We absolutely need more resources in healthcare throughout the world, and it may be that these models, or even AGI, have great potential as a companion for e.g. Doctors Without Borders, or even at the local hospital, in the future. But there’s quite a bit more nuance to giving medical advice than to perfecting a self-driving car.