top | item 45826268


cpfohl | 3 months ago

Tricky…my son had a really rare congenital issue that no one could solve for a long time. After it was diagnosed I walked an older version of chat gpt through our experience and it suggested my son’s issue as a possibility along with the correct diagnostic tool in just one back and forth.

I’m not saying we should be getting AI advice without a professional, but in my case it could have saved my kid a LOT of physical pain.


rafaelmn|3 months ago

>After it was diagnosed I walked an older version of chat gpt through our experience and it suggested my son’s issue as a possibility along with the correct diagnostic tool in just one back and forth.

Something I've noticed is that it's much easier to lead the LLM to the answer when you already know where you want to go (even when that answer is factually wrong!). It doesn't have to be obvious leading; just framing the question by mentioning all the symptoms you now know to be relevant, in the order that points at the diagnosis, etc.

Not saying that's the case here; you might have gotten the correct answer first try. But checking my now-diagnosed gastritis, I got everything from GERD to CRC depending on which symptoms I decided to stress and which events I emphasized in the history.

el_benhameen|3 months ago

In January my daughter had a pretty scary stomach issue that had us in the ER twice in 24 hours and that ended in surgery (just fine now).

The first ER doc thought it was just a stomach ache, the second thought a stomach ache or maybe appendicitis. Did some ultrasounds, meds, etc. Got sent home with a pat on the head, came back a few hours later, still no answers.

I gave her medical history and all of the data from the ER visits to whatever the current version of ChatGPT was at the time to make sure I wasn’t failing to ask any important questions. I’m not an AI True Believer (tm), but it was clear that the doctors were missing something and I had hit the limit of my Googling abilities.

ChatGPT suggested, among a few other diagnoses, a rare intestinal birth defect that affects about 2% of the population; 2% of affected people become symptomatic during their lifetimes. I kind of filed it away and looked more into the other stuff.

They decided it might be appendicitis and went to operate. When the surgeon called to tell me that it was in fact this very rare condition, she was pretty surprised when I said I’d heard of it.

So, not a one-shot, and not a novel discovery or anything, but an anecdote where I couldn’t have subconsciously guided it to the answer as I didn’t know the answer myself.

mythrwy|3 months ago

Indeed, it is very easy to lead the LLM to the answer, often without realizing you are doing so.

I had a long ongoing discussion about possible alternate career paths with ChatGPT in several threads. At that point it was well aware of my education and skills, had helped clean up resumes, knew my goals, experience and all that.

So I said maybe I'll look at doing X. "Now you are thinking clearly! This is a really good fit for your skill set! If you want I can provide a checklist." I'm just tossing around ideas, but look, GPT says I can do this and it's a good fit!

After 3 idea pivots I started getting a little suspicious, so I tried to think of the thing I am least qualified to do in the world and came up with "Design Women's Dresses". I wrote up all the reasons that might be a good pivot (e.g. past experience with landscape design, and it's the same idea: you reveal certain elements seductively but not all at once, matching color palettes, textures, etc.). Of course GPT said, "Now you are really thinking clearly! You could 100% do this! If you want I can start making a list of what you will need to produce your first custom dresses." It was funny but also a bit alarming.

These tools are great. Don't take them too seriously, though; you can make them say a lot of things with great conviction. It's mostly just you talking to yourself, in my opinion.

schiffern|3 months ago

> checking my now diagnosed gastritis I got stuff from GERD to CRC depending on which symptoms I decide to stress and which events I emphasize in the history

So... exactly the same behavior as human doctors?

cpfohl|3 months ago

Yeah, my son's issue was rare and congenital. I wish I still had the conversation, but I can't remember which LLM it was and it's not in either my Claude or GPT history. It got it in two shots.

1. I described the symptoms the same way we described them to the ER the first time we brought him in. It suggested all the same things that the ER tested for.

2. I gave it the lab results for each of the suggestions it made (since the ER had in fact done all the tests it suggested).

After that back and forth it gave back a list of 3-4 more possibilities and the 2nd item was the exact issue that was revealed by radiology (and corrected with surgery).

Aurornis|3 months ago

> Something I've noticed is that it's much easier to lead the LLM to the answer when you know where you want to go

This goes both ways, too. It’s becoming common to see cases where people become convinced they have a condition but doctors and/or tests disagree. They can get progressively better at getting ChatGPT to return the diagnosis by refining their prompts and learning what to tell it as well as what to leave out.

Previously we joked about WebMD convincing people they had conditions they did not, but ChatGPT is far more powerful for these people.

terminalshort|3 months ago

Why are you saying we shouldn't get AI advice without a "professional", then? Why is everybody here saying "in my experience it's just as good or better, but we need rules to make people use the worse option"? I have narcolepsy and it took a dozen doctors before they got it right. AI nails the diagnosis. Everybody should be using it.

pinnochio|3 months ago

Survivorship bias.

RobertDeNiro|3 months ago

I wonder if the reason AI is better at these diagnostics is that the amount of time it spends with the patient is unbounded, whereas a doctor is always restricted by the amount of time they have with the patient.

ares623|3 months ago

How do you hold the AI accountable when it makes a mistake? Can you take away its license "individually"?

buu700|3 months ago

Aside from AI skepticism, I think a lot of it likely comes from low expectations of what the broader population would get out of it. Writing, reading comprehension, critical thinking, and LLM-fu may be skills that come naturally to many of us, but at the same time many others who "do their own research" also fall into rabbit holes and arrive at wacky conclusions like flat-Eartherism.

I don't agree with the idea that "we need rules to make people use the worse option" — contrary to prevailing political opinion, I believe people should be free to make their own mistakes — but I wouldn't necessarily rush to advocate that everyone start using current-gen AI for important research either. It's easy to imagine that an average user might lead the AI toward a preconceived false conclusion or latch onto one particular low-probability possibility presented by the AI, badger it into affirming a specific answer while grinding down its context window, and then accept that answer uncritically while unknowingly neglecting or exacerbating a serious medical or legal issue.

cpfohl|3 months ago

I’m saying that it is a great tool for people who can see through the idiotic nonsense these models so often make up. A professional _has_ the context to see through it.

It should empower and enable informed decisions, not make them.

tamimio|3 months ago

That's the experience of a lot of people I know or whose stories I've read online, but it isn't about AI giving bad diagnoses; it's that they know in 5 years doctors and lawyers will be burger flippers, and as a result people won't be motivated to go into any of these fields. In Canada, the process to become a doctor is kept extremely complicated and hard, essentially to maintain a private club where only a very few can become doctors, all to keep wages abysmally high. As a result, you end up waiting a long time for appointments, and the doctors themselves are overwhelmed too. A messed-up system that you'd better pray you never become a victim of.

In my opinion, AI should do both legal and medical work, with some humans kept for decision making, and the rest of the doctors should become surgeons instead.

tencentshill|3 months ago

We are all obligated to hoard as many offline AI models as possible if the larger ones are going to be legally restricted like this.

SlavikCA|3 months ago

Google released the MedGemma model, "optimized for medical text and image comprehension".

I use it and have found it helpful.

throwaway290|3 months ago

This is fresh news, right? A friend just used ChatGPT for medical advice last week (he stuffed his wound with antibiotics after a motorbike crash). Are you saying you completely treated the congenital issue in this timeframe?

cj|3 months ago

He’s simply saying that ChatGPT was able to point them in the right direction after one chat exchange, whereas doctors couldn’t for a long time.

Edit: Not saying this is the case for the person above, but one thing that might bias these observations is ChatGPT’s memory features.

If you have a chat about the condition after it’s diagnosed, you can’t use the same ChatGPT account to test whether it could have diagnosed the same thing (since the ChatGPT account now knows the son has a specific condition).

The memory features are awesome but also suck at the same time. I feel myself getting stuck in a personalized bubble, even more so than with Google.