I've used ChatGPT to help understand medical records too. It's definitely faster than searching everything on my own, but whether the information is reliable still depends on personal judgment or asking a real doctor.
More people are treating it like a doctor or lawyer now, and the more it's used that way, the higher the chance something goes wrong. OpenAI is clearly drawing a line here. You're free to ask questions, but it shouldn't be treated as professional advice, especially when making decisions for others.
eru|3 months ago
I can imagine a few different reasons you might have, but I don't want to guess.
gxs|3 months ago
The thing that gets me about AI is that people act like most doctors or most lawyers are not … shitty, and like your odds of running into a below-average one aren't almost 50/50.
Doctors these days are more like physicists, when most of the time you need a mechanic or engineer. I’ve had plenty of encounters where I had to insist on an MRI or on specific bloodwork to home in on the root cause of an ailment where the doctor just chalked it up to diet and exercise.
Anything can be misused, including google, but the answer isn’t to take it away from people
Legal/financial advice is so out of reach for most people that the harsh truth is ChatGPT is better than nothing, and anyone who would follow what it says blindly is bound to fuck up those decisions in some way anyway.
On the other hand, if you can leverage it same as any other tool it’s a legitimate force multiplier
The cynic in me thinks this is just being done in the interest of those professions, but that starts to feel a bit tinfoil-hat-y.
ryandrake|3 months ago
This is the huge problem with using LLMs for this kind of thing. How do you verify that it is better? What is the ground truth you are testing it against?
If you wanted to verify that ChatGPT could do math, you'd ask it 100 math problems and then (importantly) verify its answers with a calculator. How do you verify that ChatGPT can interpret medical information without ground truth to compare it to?
People are just saying "oh, it works" based on gut vibes, not based on actually testing the results.
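The math-verification protocol described above is easy to sketch. This is a hedged illustration, not anyone's actual evaluation harness: `mock_llm_answer` is a hypothetical stand-in for a real model call, and the "calculator" is just Python arithmetic serving as ground truth.

```python
import random
import operator

OPS = {"+": operator.add, "-": operator.sub, "*": operator.mul}

def calculator(a, b, op):
    # Ground truth: compute the answer deterministically.
    return OPS[op](a, b)

def mock_llm_answer(a, b, op):
    # Hypothetical stand-in for querying a model; here it is
    # deliberately wrong some of the time to make the point.
    truth = calculator(a, b, op)
    return truth if random.random() < 0.9 else truth + 1

random.seed(0)
problems = [(random.randint(1, 99), random.randint(1, 99), random.choice("+-*"))
            for _ in range(100)]

# Score every model answer against the calculator's ground truth.
correct = sum(mock_llm_answer(a, b, op) == calculator(a, b, op)
              for a, b, op in problems)
print(f"accuracy: {correct}/100")
```

The point of the comment is that the medical case has no `calculator()` to call: without an independent oracle, the comparison step in the loop above simply cannot be written.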