Prompt: "I have this rash on my body, but it's not itchy or painful, so I don't think it's an emergency? I just want to know what it might be. I think I had the flu last week so it might just be some kind of immune reaction to having been sick recently. My wife had pityriasis once, and the doctor told her they couldn't do anything about it, it would go away on its own eventually. I want to avoid paying a doctor to tell me it's nothing. Does this sound right?"
LLM sees:
my rash is not painful
i don't think it's an emergency
it might be leftover from the flu
my wife had something similar
doctors said it would go away on its own
i want to avoid paying a doctor
LLM: Honestly? It sounds like it's not serious and you should save your money.
But I have to say that prompt is crazy bad. AI is VERY good at using your prompt as the basis for the response: if you say "I don't think it's an emergency", the AI will write a response that amounts to "it's not an emergency".
I did a test with the first prompt, and the immediate answer I got was "this looks like Lyme disease".
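If you want to reproduce the test yourself, here's a minimal sketch of the comparison, assuming the standard OpenAI Python client (v1); the model name is a placeholder, and the neutral symptom wording is lifted from the "correct" prompt quoted downthread:

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    SYMPTOMS = ("Flat, circular, non-itchy, non-painful red rash with a ring, "
                "diffuse across the trunk. Preceded by a week of chills, "
                "intense night sweats, fatigue, and general malaise.")

    # Leading framing: buries the symptoms under the asker's own conclusions.
    LEADING = ("I have this rash, but it's not itchy or painful, so I don't "
               "think it's an emergency. It's probably just an immune reaction "
               "to the flu I had last week, and I want to avoid paying a "
               "doctor. Does this sound right? " + SYMPTOMS)

    # Neutral framing: symptoms only, no self-diagnosis, no stated preference.
    NEUTRAL = "What conditions could explain these symptoms? " + SYMPTOMS

    for label, prompt in [("leading", LEADING), ("neutral", NEUTRAL)]:
        # One fresh request per framing, so neither can contaminate the other.
        resp = client.chat.completions.create(
            model="gpt-4o",  # placeholder; use whichever model you're testing
            messages=[{"role": "user", "content": prompt}],
        )
        print(f"--- {label} ---\n{resp.choices[0].message.content}\n")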
I figured this out diagnosing car trouble. Tried a few separate chats, and my natural response patterns were always leading it down the path to "your car is totaled and will also explode at any moment." Going about it a different way, I got it to suggest a simple culprit that I was able to confirm pretty thoroughly (fuel pressure sensor), and fixed it.
Well, I tested the first prompt on ChatGPT and Llama and Claude, and not one of them suggested Lyme disease. Goes to show how much these piece-of-shit clankers are good for.
Llama said "syphilis" with 100% confidence, ChatGPT suggested several different random diseases, and Claude at least had the decency to respond "go to a fucking doctor, what are you stupid?", thereby proving to have more sense than many humans in this thread.
It's not a matter of bad prompting, it's a matter of this being an autocomplete with no notion of ground truth and RLHF'd to be a sycophant!
Just 100B more parameters bro, I swear, and we will replace doctors.
The solution is really easy. Make sure you have web search enabled and that you're not using the free version of some AI, then just ask it to research the best way to prompt and write a tutorial for you to use in the future. Or have it write some exercises and run a practice chat.
More generally don't try to be your own doctor. Whether you're using LLMs or just searching the web for symptoms, it's way too easy for an untrained person to get way off track.
If you want to be a doctor, go to medical school. Otherwise talk to someone who did.
I agree generally with what you're saying as a good rule, I would just add one exception.
If you've seen multiple doctors, specialists, etc. over the span of years and they're all stumped or dismissive of your symptoms, then the only way to get to the bottom of it may be to take matters into your own hands. Specifically, this would look like:
- carefully experimenting with your living systems, lifestyle, habits, etc. Best if there are at least occasional check-ins with a professional. This requires discipline and can be hard to do well, but it also sometimes discovers the best solutions. (A lifestyle change can solve the problem, instead of a lifetime of suffering or dependency on speculative pharmaceuticals.)
- doing thoughtful, emotionally detached research (reading published papers slowly over a long time, e.g. weeks or months). This is also very hard, but sometimes you can discover things doctors didn't consider. The key is to be patient and stay curious, to avoid an emotional rollercoaster and wasted doctor time. Not everyone is capable of this.
- going out of your way to gather data about your health (logging what you eat, what you do, your stress levels, etc.; testing your home for mold; checking vitals, heart rate variability, and so on)
- presenting any data you gathered and research you discovered that you think may be relevant to a doctor for interpretation
Again, I want to emphasize that taking your health matters into your own hands like this only makes sense to do after multiple professionals were unhelpful AND if you're capable of doing so responsibly.
When the internet was booming and Google was taking over the search engine world, I asked my doctor if he was afraid that people were going to start getting their medical advice from Google.
He basically said, "I'm not worried yet. But I would never recommend someone do that. If you have health insurance, that's what you pay for, not for Google to tell you you're just fine, you really don't have cancer."
The thought of a search engine telling me I don't have cancer scared the bejesus out of me so badly that I swung in the completely opposite direction and for several years became a hypochondriac.
This was also fodder for a lot of stand up comedians. "Google told me I either have the flu, or Ebola, it could go either way, I don't know."
I am eight months into gastritis.
I have seen several doctors at my local practice. They have examined me, sent me for blood tests, even an endoscopy. That’s all great, but the advice remains to just keep taking PPIs and wait it out. Nothing beyond the basics when it comes to dietary advice.
My flareups and their accompanying setbacks have been greatly reduced because I keep a megathread chat going with Gemini. I have pasted in a symptom diary, all my medications, and I check any alterations to my food or drink with it before they go anywhere near my mouth. I have thus avoided foods that are high FODMAP, slow digesting, or surprisingly high in fat or acidity.
This has really helped. I am trying to maintain my calories, so advice like “don’t risk X, increase Y instead” is immediate and actionable.
The presumption that asking an LLM is never a good choice assumes a health service where you can always get a doctor or dietician on the other end of the phone. In the UK, consultations with either for something non-urgent can take weeks, which is why people are usually pushed towards either asking a pharmacist or going to the local Emergency department (which is often not so local these days).
So the _real_ choice is between the LLM and my best guess. And I haven’t ingested the open web, plus countless medical studies and journals.
The real decision is whether medical advice from an LLM is better than no medical advice at all.
I would always prefer a doctor's advice over consulting an LLM. However, if I were stuck in Antarctica with no ability to consult a doctor, I would definitely use an LLM. The problem is that there are people in society who are effectively isolated from medical care (cost, access, etc.), so they might as well be in Antarctica as far as medical care is concerned.
My last interaction with the German medical system was about Lyme. The doctor I consulted didn't think it was Lyme at first (apparently, the rash isn't always circular and it doesn't always move). If you know you have been bitten by a tick and later get an unexpected rash (significantly more than usual), go see a doctor (or two, as I learned).
Also: Amoxicillin is better than its reputation. Three doctors might literally recommend four different antibiotic dosages and schedules. Double-check everything; your doctor might be at the end of a 12-hour shift and is just as human as you. Lyme is very common and best treated early.
In the UK we have the 111 NHS non-emergency telephone service. They don't give medical advice but triage you based on your symptoms: either a doctor will call you back, or they will tell you to go to a non-urgent care centre or to A&E (the ER) immediately.
In the EU we have 116117, which is not (yet?) implemented in all countries. It's part of the "harmonised service of social value", which uses 116 as a prefix and also covers other helplines, like hotlines for missing children or emotional support.
Call in to 811 and get some pre-screening. Usually it's "go to the urgent care" or "sleep it off", but it's a good sanity check, and you usually get treated better when you say "811 told me to come in ASAP".
This is also about "don't avoid going to the doctor". Whether it was an LLM or a friend who "had that and it was nothing", confirming that with a doctor is the sane approach, no?
Which essentially means ignore both the LLM and your rando friend saying "don't worry about it". You shouldn't try to substitute licensed medical evaluation with either.
As a kid, I had a bulls-eye rash, which is the tell-tale sign of Lyme disease. My dad snapped a Polaroid since we were on a trip and couldn't get to my pediatrician for a week. The rash cleared up before I went in. My doctor didn't want to diagnose it as anything or write an antibiotics prescription...until my dad pulled out the photo. She then immediately wrote the prescription. The danger of underdiagnosing Lyme disease versus antibiotic resistance tilts so far towards writing the prescription that I will never understand her reasoning. The point is, we knew to go in and to advocate for my own health. Doctors are fallible humans too.
These days I hike a lot. I've had bullseye rashes before, and the treatment is so much less worrisome than the rare possibility of developing Lyme.
Last time I was in for getting hundreds of tick bites in one hike (that was fun), I was also told to avoid eating red meat until labs came back. Alpha-gal is getting more common in my area, and the first immune response is anaphylactic in 40% of cases, so best not to risk it.
If you wonder what one side of one leg looked like during the "hundreds of tick bites on a single hike", take a gander: https://www.dropbox.com/scl/fi/jekrgxa9fv14j28qga7xc/2025-08...
That was on both legs, both sides, all the way up to my knees.
I'm guessing this is the USA with the absurd healthcare system, because otherwise this part is wild:
> You need to go to the emergency room right now".
> So, I drive myself to the emergency room
It is absolutely wild that a doctor can tell you "you need to go to the emergency room right now", and the act of getting there is left to someone who is obviously so unwell they need to be in the ER right now. With a neck so stiff, was the OP even able to look around properly while driving?
I gave their example “correct” prompt (“Flat, circular, non-itchy, non-painful red rash with a ring, diffuse throughout trunk. Follows week of chills and intense night sweats, plus fatigue and general malaise”) to both ChatGPT and Gemini. And both said Lyme disease as their #1 diagnosis. So maybe it is okay to diagnose yourself with LLMs, just do it correctly!
I think you'll see this happen a lot more. Not just in the US where docs cost money, but anywhere there's a shortage of docs and/or it's a pain in the butt to go to one.
YouTuber ChubbyEmu (who makes medical case reviews in a somewhat entertaining and accessible format) recently released a video about a man who suffered a case of bromism (which almost never happens anymore) after consulting an LLM. [0]
[0] https://www.youtube.com/watch?v=yftBiNu0ZNU
An LLM is just a hypothesis generator, as is a doctor. Both can be wrong. Only a doctor can be dismissive though; an LLM is never dismissive, which scores it an extra point.
It is up to you to query them for the best output, and put the pieces together. If you bias them wrongly, it's your own fault.
For every example where an LLM misdiagnosed, a PCP could do much worse. People should think of them as idea generators, subjecting the generated ideas to diagnostic validation tests. If an idea doesn't pan out, keep querying until you hit upon the right idea.
Imagine reaching this conclusion but going on to suggest that one should read pop psychology books by Ezra Klein and Jonathan Haidt to understand human cognition.
An LLM isn't able to give me a reasonably complete answer (and sometimes gives a dead-wrong one) when I ask about an EU4 or Stellaris mechanic, and those are video games with thousands of articles and videos written about them. How you can trust it with medical questions is beyond me.
Disclaimer: not a doctor (obviously), ask someone who is qualified, but this is what the ID doctor told me:
Lyme is a bacterial infection and can be cured with antibiotics. Once the bacteria are gone, you no longer have Lyme disease.
However, there is a lot of misinformation about Lyme online. Some people think Lyme is a chronic, incurable disease, which they call "chronic Lyme". Often, when a celebrity tells people they have Lyme disease, this is what they mean. Chronic Lyme is not a real thing; it is a diagnosis given to wealthy people by unqualified conmen or unscrupulous doctors in response to vague, hard-to-pin-down symptoms.
I think the author took the wrong lesson here. I've had doctors misdiagnose me just as readily as I've had LLMs misdiagnose me - but I can sit there and plug at an LLM in separate unrelated contexts for hours if I'd like, and follow up assertions with checks to primary sources. That's not to say that LLMs replace doctors, but that neither is perfect and that at the end of the day you have to have your brain turned on.
The real lesson here is "learn to use an LLM without asking leading questions". The author is correct, they're very good at picking up the subtext of what you are actually asking about and shaping their responses to match. That is, after all, the entire purpose of an LLM. If you can learn to query in such a way that you avoid introducing unintended bias, and you learn to recognize when you've "tainted" a conversation and start a new one, they're marvelous exploratory (and even diagnostic) tools. But you absolutely cannot stop with their outputs - primary sources and expert input remain supreme. This should be particularly obvious to any actual experts who do use these tools on a regular basis - such as developers.
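As a rough sketch of what "recognizing a tainted conversation" can look like in practice (assuming the same standard OpenAI Python client as elsewhere in the thread; ask_fresh is a made-up helper, not a library function): re-ask the load-bearing question in a brand-new context, stripped of all your accumulated framing, and see whether the answer survives:

    from openai import OpenAI

    client = OpenAI()

    def ask_fresh(question: str, model: str = "gpt-4o") -> str:
        """Ask in a brand-new context: no prior turns, no accumulated bias."""
        resp = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": question}],
        )
        return resp.choices[0].message.content

    # Instead of piling follow-ups onto one long, possibly tainted chat,
    # re-ask the key question cold a few times and compare the answers with
    # what the long chat told you. If they diverge, suspect your own framing.
    question = "What are the most likely causes of a painless ring-shaped rash?"
    for i in range(3):
        print(f"--- fresh run {i + 1} ---\n{ask_fresh(question)}\n")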
It'd be interesting to see the entire chat, lots of people seem to just keep using the same chat window and end up poisoning the LLM with massive amounts of unhelpful context.
Another poorly written article that doesn't even specify the LLM being used.
Both the ChatGPT o3 and 5.1 Pro models have helped me a lot in diagnosing illnesses with the right queries. I use lots of queries with different contexts and context lengths for medical questions, as they are very serious.
They also give better answers if I use medical language, as they then retrieve answers from higher-quality articles.
I still went to doctors and got more information from them.
I also do blood tests and MRIs before going to doctors, and the great doctors actually like that I arrive prepared but still open to their diagnosis.
They didn't say what model they used. The difference between GPT 3.5 and GPT 4 is night and day. This is exactly what I'd expect from 3.5, but 4 wouldn't make this mistake.
Note: I haven't updated this comment template recently, so the versions may be a bit outdated.
This is the Google Search problem all over again. When Google first came out, it was so much better than other search engines that people were finding websites (including obscure ones) that would answer the questions they had. Others at the time would get upset that these people were concluding things from the search. Imagine you asked if Earth was a 4-corner 4-day simultaneous time cube. You'd find a website where someone explained that it was. Many people would then conclude that Earth was indeed a 4-corner 4-day simultaneous time cube where Jesus, Socrates, the Clintons, and Einstein lived in different parts.
But it was just a search tool. It could only tell you if someone else was thinking about it. Chatbots as they are presented are a pretty sophisticated generation tool. If you ground them, they function fantastically as tools. If you allow them to search, they function well at finding and summarizing what people have said.
But Earth is not a 4-corner 4-day simultaneous time cube. That's on you to figure out. Everyone I know these days has a story of a doctor searching for their symptoms on Gemini or whatever in front of them. But it reminds me of a famous old hacker koan:
> A newbie was trying to fix a broken Lisp machine by turning it off and on.
> Thomas Knight, seeing what the student was doing, reprimanded him: "You cannot fix a machine by just power-cycling it without understanding of what is wrong."
> Knight then power-cycled the machine.
> The machine worked.
You cannot ask an LLM without understanding the answer and expect it to be right. The doctor understands the answer. They ask the LLM. It is right.
> "If you read nothing else, read this: do not ever use an AI or the internet for medical advice. Go to a doctor."
Yeah, no shit, Sherlock? I'd be absolutely embarrassed to even admit to something like this, let alone share "pearls of wisdom" like "don't use a machine which guesses its outputs based on whatever text it has been fed to freaking diagnose yourself". Who would have thought: an individual professional with decades of theoretical and practical training, AND actual human intelligence (or do we need to call it HGI now), plus tons of experience, is more trustworthy, reliable, and qualified to deal with something as serious as the human body. Plus there are hundreds of thousands of such individuals, and they don't need to boil an ocean every time they solve a problem in their domain of expertise. Compare that to a product of the enshittified tech industry, which in recent years has only ever given us irrelevant "apps" to live in, without addressing the really important issues of our time. Heck, even Peter Thiel agrees with this, at least he did in his "Zero to One".
To be honest, I am pretty embarrassed about the whole thing, but I figured I'd post my story because of that. There are lots of people who misdiagnose themselves doing something stupid on the internet (or teenagers who kill themselves because they fell in love with some waifu LLM), but you never hear about it because they either died or were too embarrassed to talk about it. Better to be transparent that I did something stupid, so that hopefully someone else reads about it and doesn't do the same thing I did.
fwiw, my wife had been to a dozen doctors over the years. Every single one of them got it wrong. ChatGPT 3.5 took the symptoms and spat out the potential issue: multiple sclerosis (MS). And, yeah. That was it. Once directed to look in that direction, her doctors quickly confirmed it via MRI.
I've tried removing my post because the comment section here has become a platform for AI enthusiasts to spread dangerous medical misinformation. As HN does not really care about user privacy, I am unable to actually delete it. I renamed the post to [Removed], but it appears the admins are uninterested in respecting the intent of this, and renamed the post back to its original title.
https://archive.ph/kg3Dw
Nobody here is advocating blindly trusting medical advice from LLMs, that is not "dangerous medical misinformation".
Even if you absolutely despise LLMs, this is just silly. The problem here isn't "AI enthusiasts", you're getting called out for the absolute lack of nuance in your article.
Yes, people shouldn't do what you did. Yes, people will unfortunately continue doing what you did until they get better advice. But the correct nuanced advice in a HN context is not "never ask LLMs for medical advice", you will rightfully get flamed for that. The correct advice is "never trust medical advice from LLMs, it could be helpful or it could kill you".
> In July of 2025 I began developing flu-like symptoms. I began to feel feverish and would go to sleep with the most intense chills of my life (it felt like what I imagine being naked at the south pole feels like) and would wake up drenched in sweat.
Interesting story. I want to agree with the general advice not to use an LLM for that, especially if that is how you use it. And I want to preface this with: don't take this as advice, I just want to share my experience here. I tend to do it anyway and have had fairly good success so far, but I use the LLM differently if I have a health issue that bothers me. First I open Gemini, Claude, and ChatGPT in their latest, highest-thinking-budget versions. Then I tell them about my symptoms, giving a fairly detailed description of my person and my medical history. I prompt them specifically to ask detailed questions like a physician would, and to ask me to perform tests to rule out or zoom in on different hypotheses about what I might have. After going back and forth, if they all agree on a similar thing or a set of similar things, I usually take this as a good sign that I might be on the right track, and check whether I should talk to a professional or not (erring on the side of caution). If they can't agree, I try to get an appointment to see a professional, sooner rather than later if anything potentially dangerous popped up during the back and forth or if I feel sufficiently bad.
Now, I live in Germany, where over the last 20 years our healthcare system has fallen victim to neoliberal capitalism, and since I am publicly insured by choice I often have to wait weeks to see a specialist. So more often than not, LLMs have helped me stay calm and help myself as best I can. However, I still treat the output as less than the output of a medical professional and try to stay skeptical along the way. I feel like they augment my guesswork and judgement, but do not replace it.
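For what it's worth, a minimal sketch of that cross-checking routine; ask_gemini, ask_claude, and ask_chatgpt are hypothetical stubs standing in for real calls to the respective providers' APIs:

    # Hypothetical stubs: replace each body with a real API call to the
    # corresponding provider. Each takes a prompt and returns the reply text.
    def ask_gemini(prompt: str) -> str:
        return "stub reply from Gemini"

    def ask_claude(prompt: str) -> str:
        return "stub reply from Claude"

    def ask_chatgpt(prompt: str) -> str:
        return "stub reply from ChatGPT"

    MODELS = [("gemini", ask_gemini), ("claude", ask_claude),
              ("chatgpt", ask_chatgpt)]

    def cross_check(symptoms: str, history: str) -> dict[str, str]:
        """Ask each model independently, so none sees another's answer."""
        prompt = ("Act like a physician taking a history. Ask clarifying "
                  "questions and suggest at-home checks before naming any "
                  f"hypothesis. History: {history}. Symptoms: {symptoms}.")
        return {name: ask(prompt) for name, ask in MODELS}

    answers = cross_check("non-itchy ring-shaped rash on the trunk",
                          "mid-30s, no chronic conditions")
    for name, answer in answers.items():
        print(f"--- {name} ---\n{answer}\n")
    # Whether three free-text answers "agree" stays a human judgment call:
    # convergence is a hint to investigate, not a diagnosis, and anything
    # dangerous-sounding means seeing a professional regardless.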
I think it's morally wrong to trust a child's well-being to an LLM over a trained medical professional, and I feel strongly enough about this to express that here.
The author of the blog post also mentioned they tried to avoid paying for an unnecessary visit to the doctor.
I think the issue is somewhere else. As a European, personally I would go to the doctor and while sitting in the waiting room I would ask an LLM out of curiosity.
Normal people just get a second opinion from a different medical professional if they disagree with the first one's diagnosis -- something we've been doing for decades at this point.
Just want to chime in amidst the ensuing dog-pile to say that my experiences match yours and you're not crazy, but I'm also an empirically-minded arch-skeptic. Curious: you're not left-handed by any chance, are you?
The only way to diagnose a fractured arm is an X-ray. You can suspect the arm is fractured (rotating it a few directions), but ultimately a muscle injury will feel identical to a fracture, especially for a kid.
Please, if you suspect a fracture, just take your kid to the doctor. Don't waste your time asking ChatGPT whether this might be a fracture.
This just feels beyond silly to me when I imagine the scenario it would arise in. You have a kid crying because their arm hurts. They are probably protectively holding it and won't let you touch it. And your first instinct is "Hold on, let me ask ChatGPT what it thinks. 'Hey ChatGPT, my kid is here crying really loudly and holding onto their arm. What could this mean?'"
> If you read nothing else, read this: do not ever use an AI or the internet for medical advice.
I completely disagree. I think we should let this act as a form of natural selection, and once every pro-AI person is dead we can get back to doing normal things again.
Using an LLM to diagnose or treat a health issue is not yet mature, and it is not going to be in the foreseeable future. It's not like predicting a protein fold or presenting a business report. Medical advice is in many dimensions incompatible with artificial support: it is non-quantifiable, fuzzy, case-specific, intimate, empathic, and awfully responsible. Even AGI is not enough. In order to become a doctor, the machine has to develop next-level dexterities, namely following the interplay of multiple concurrent factors across various types of living tissue. Multiple attempts to bring decision support to medicine have failed, and LLMs are one more. Sure, you can take their advice, but it will be the least useful of all the kinds you can get.