top | item 44279988


hackitup7 | 8 months ago

This is just a random anecdote but ChatGPT (when given many, many details with 100% honesty) has essentially matched exactly what doctors told me in every case where I've tested it. This was across several non-serious situations (what's this rash) and one quite serious situation, although the last is a decently common condition.

The two times that ChatGPT got a situation even somewhat wrong, were:

- My kid had a rash and ChatGPT thought it was one thing. His symptoms changed slightly the next day, I typed in the new symptoms, and it got it immediately. We had to go to urgent care to get confirmation, but in hindsight ChatGPT had already solved it.

- In another situation my kid had a rash with somewhat random symptoms and the AI essentially said "I don't know what this is but it's not a big deal as far as the data shows." It disappeared the next day.

It has never gotten anything wrong other than these rashes. Including issues related to ENT, ophthalmology, head trauma, skincare, and more. Afaict it is basically really good at matching symptoms to known conditions and then describing standard of care (and variations).

I now use it as my frontline triage tool for assessing risk. Specifically: if ChatGPT says "see a doctor soon/ASAP," I do it; if it doesn't say to see a doctor, I use my own judgment, i.e. I won't skip a doctor trip just because the AI said so if I'm nervous. This is all 100% anecdotes and I'm not disagreeing with the study, but I've been incredibly impressed by its ability to rapidly distill medical standard of care.


extr|8 months ago

I've had an identical experience of ChatGPT misidentifying my kid's rash. In my case I would say it got points for being in the same ballpark - it guessed HFM, and the real answer was "an unnamed virus similar-ish to HFM but not HFM proper." The treatment was the same, just let it run its course, and our kid was fine. But it also made me realize that our pediatrician is still quite important in the sense that she has local, contextual, geography-based knowledge of what other kids in the area are experiencing too. She recognized it immediately because she had already seen two dozen other kids with it in the last month. That's going to be hard for any AI system to replicate until some distant time when all healthcare data is fed into The Matrix.

brundolf|8 months ago

I wonder if the software developer mindset plays into this. We're really good at over-reporting all possibly-relevant information for "debugging" purposes

forgetfreeman|8 months ago

I sincerely hope your credulity doesn't swing around to bite you in the ass with this.

IshKebab|8 months ago

I think it's fine as long as you don't blindly believe it. I.e. if it tells you it's X then you go and look at credible sources to confirm (sounds like he did that). If it tells you "it's nothing don't worry" but you have some reason to disagree then you don't blindly obey the AI.

When diagnosing kids' illnesses you're basically half guessing most of the time anyway. For example, the NHS tells you to call 111 (non-emergency medical number) if they have a fever and they "do not want to eat, or are not their usual self and you're worried".

I think in America access to healthcare is pretty bad and expensive, so a bit of AI help is probably a good thing. Vague, woolly searches based on written descriptions are one of the things these models are actually quite good at.