hackitup7 | 8 months ago
The two times ChatGPT got a situation even somewhat wrong were:

- My kid had a rash and ChatGPT thought it was one thing. His symptoms changed slightly the next day, I typed in the new symptoms, and it got it immediately. We had to go to urgent care to get confirmation, but in hindsight ChatGPT had already solved it.
- In another situation my kid had a rash with somewhat random symptoms, and the AI essentially said "I don't know what this is, but it's not a big deal as far as the data shows." It disappeared the next day.
It has never gotten anything wrong other than these rashes, including issues related to ENT, ophthalmology, head trauma, skincare, and more. Afaict it is basically really good at matching symptoms to known conditions and then describing the standard of care (and variations).
I now use it as my frontline triage tool for assessing risk. Specifically, if ChatGPT says "see a doctor soon/ASAP," I do it; if it doesn't say to see a doctor, I use my own judgment, i.e. I won't skip a doctor trip just because the AI said so if I'm nervous. This is all 100% anecdote, and I'm not disagreeing with the study, but I've been incredibly impressed by its ability to rapidly distill the medical standard of care.
IshKebab | 8 months ago
When diagnosing kids' illnesses you're basically half guessing most of the time anyway. For example, the NHS tells you to call 111 (the non-emergency medical number) if they have a fever and they "do not want to eat, or are not their usual self and you're worried".
I think in America access to healthcare is pretty bad and expensive, so a bit of AI help is probably a good thing. Vague, woolly searches using written descriptions are one of the things these models are actually quite good at.