top | item 46532753

paulgrimes1 | 1 month ago

Here’s something: my ChatGPT quietly assumed I had ADHD for around 9 months, up until October 2025. I don’t suffer from ADHD. I only found out through an answer that began “As you have ADHD...”

I had it stop right there, and asked it to tell me exactly where it got this information: the date, the title of the chat, the exact moment it took this data on as an attribute of mine. It was unable to specify any of it, aside from “nine months previous”. It continued to insist I had ADHD, and that I had told it I did, but was unable to reference exactly when or where.

I asked “do you think it’s dangerous that you have assumed I have a medical / neurological condition for this long? What if you gave me incorrect advice based on this assumption?”, to which it offered a paraphrased mea culpa, offered to forget the attribute, and moved the conversation on.

This is a class action waiting to happen.

rafram|1 month ago

> nine months previous

It likely just hallucinated the ADHD thing in this one chat and then made this up when you pushed it for an explanation. It has no way to connect memories to the exact chats they came from AFAIK.

efilife|1 month ago

or had this info injected into its system prompt and was doing everything not to reveal it. ChatGPT gets fed your IP address* and approximate location in its system prompt but won't ever admit it and will come up with excuses. Just ask it "search the web to find where I'm at". It will tell you the country you are in, sometimes down to the city. If you follow up with "how did you know my approximate location?" it will ALWAYS tell you it guessed it: based on past conversations (that never happened), based on the way you talk; it can even hallucinate that you told it in this exact conversation.

*Not entirely sure; it seems to frequently hallucinate the address.
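The mechanism described above is unconfirmed, but the general pattern is easy to sketch: a provider can resolve the request IP to a coarse location and prepend it to the system prompt, and since nothing instructs the model to disclose that context, it confabulates an explanation when asked. A minimal illustration (the field names and lookup table are hypothetical, not OpenAI's):

```python
# Hypothetical sketch of location injection into a system prompt.
# FAKE_GEOIP stands in for a real IP-geolocation database.
from dataclasses import dataclass

FAKE_GEOIP = {
    "203.0.113.7": {"country": "Netherlands", "city": "Amsterdam"},
}

@dataclass
class RequestContext:
    ip: str

def build_system_prompt(ctx: RequestContext) -> str:
    parts = ["You are a helpful assistant."]
    loc = FAKE_GEOIP.get(ctx.ip)
    if loc:
        # The model sees this line, but is never told where it came
        # from -- so when asked "how did you know?", it can only guess.
        parts.append(
            f"User's approximate location: {loc['city']}, {loc['country']}."
        )
    return "\n".join(parts)

print(build_system_prompt(RequestContext(ip="203.0.113.7")))
```

Nothing in the visible chat transcript hints that this context exists, which is consistent with the model "always telling you it guessed".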

heavyset_go|1 month ago

ChatGPT used the name on my credit card, a name which isn't uncommon, and started talking about my business, XYZ, that I don't have and never claimed to.

Did some digging and there was an obscure reference to a company that folded a long time ago associated with someone who has my name.

What makes it creepier is that they have the same middle name, which isn't in my profile or on my credit card.

When I signed up for ChatGPT, not only did I turn off personalization and training on my data, I even filled out the privacy request opt-out[1] that they're required to adhere to by law in several places.

Also, given that my name isn't rare, there are unfortunately some people with unsavory histories documented online with the name. I can't wait to be confused for one of them.

[1] https://privacy.openai.com/policies/en/

rsync|1 month ago

“ When I signed up for ChatGPT, not only did I turn off personalization and training on my data, I even filled out the privacy request opt-out …”

You did all of that but then you gave them your real name?

The Visa/MC payment network has no ability to transfer or check the cardholder name. Merchants act as if it does, but it doesn’t. You can enter Mickey Mouse as your first and last name… it won’t make any difference.

Only AMEX and Discover have the ability to validate names.

FWIW, I have a paid account with OpenAI, for using ChatGPT, and I gave them no personal information.

lm28469|1 month ago

I wouldn't be surprised if it's because people self-diagnose and talk about their """adhd""" all the time on reddit & co., where ChatGPT was trained a lot.

yurishimo|1 month ago

Do you think the majority of those people are lying or do you think it's possible that our pursuit of algorithmic consumption is actually rewiring our neural pathways into something that looks/behaves more like ADHD?

Personally, I'm on the fence. I suspect that I've always had a bit of that, but anecdotally, it does seem to have gotten worse in the past decade, but perhaps it's just a symptom of old age (31 hehehe).

roger_|1 month ago

Disable memories so each chat is independent.

If you want chats to share info, then use a project.

ShakataGaNai|1 month ago

Unfortunately I don't think that's a good solution. Memories are an excellent feature, and you see them on most similar services now.

Yes, projects have their uses. But as an example: I do Python across many projects and non-projects alike. I don't want to need to tell ChatGPT exactly how I like my Python each and every time, or with each project. If it was just one or two items like that, fine, I could update its custom instruction personalization. But there are tons of nuances.

The system knowing who I am, what I do for work, what I like, what I don't like, what I'm working on, what I'm interested in... makes it vastly more useful. When I randomly ask ChatGPT "Hey, could I automate this sprinkler" it knows I use home assistant, I've done XYZ projects, I prefer python, I like DIY projects to a certain extent but am willing to buy in which case be prosumer. Etc. Etc. It's more like a real human assistant, than a dumb-bot.

fourside|1 month ago

Not the parent poster but I’ve disabled memory and history and I can still see ChatGPT reference previous answers or shape responses based on previous instructions. I don’t know what I’m doing wrong or how to fix it.

soared|1 month ago

It doesn’t have itself as a data source to reference, so asking “tell me when you said this” etc. will never work.

Lio|1 month ago

This actually highlights a big privacy problem with health AI.

Say I’m interested in some condition and want to know more about it so I ask a chatbot about it.

It decides “asking for a friend” means I actually have that condition and then silently passes that information on to data brokers.

Once it’s in the broker network it’s truth.

We lack the proper infrastructure to control our own personal data.

Hell, I bet there isn’t anyone alive who can even name every data broker, let alone contact them all to police what information they’re passing around.

weatherlite|1 month ago

What's the difference between Googling diseases/symptoms and asking ChatGPT?

b800h|1 month ago

Who is "we"? Americans?

usmanity|1 month ago

This seems to be a memory problem with ChatGPT; in your case, I bet it was changing a lot of answers due to that. For me, it really liked referring to the fact that I have an ADU in my backyard, almost pointlessly, something like "Since you walk the dogs before work, and you have a backyard ADU, you should consider these items for breakfast..."

GuB-42|1 month ago

I wonder if that's because so many people claim to have ADHD for dubious reasons, often some kind of self-diagnosis. Maybe because being "neurodivergent" is somewhat trendy, or maybe to get some amphetamines.

ChatGPT may have picked that up and now gives people ADHD for no good reason.

dunk010|1 month ago

Perhaps you do ;-)

mountainriver|1 month ago

Machine learning has been used in healthcare forever now

immibis|1 month ago

Machine learning isn't ChatGPT

rpigab|1 month ago

What did you expect when confronting it? It's a text autocomplete engine, it will spit out what you want, biased towards absolute politeness and sycophancy. It's like yelling at your toaster.

llmslave2|1 month ago

I feel like the right legal solution is to make the service providers liable, in the same way that if you offered a service where you got diagnosed by a human and they fucked up, the service would be liable. And real liability, with developers and execs going to jail or being fined heavily.

The AI models are just tools, but the providers who offer them are not just providing a tool.

This also means if you run the model locally, you're the one liable. I think this makes the most sense and is a fairly simple place to draw the line.

dyauspitr|1 month ago

[deleted]

Eufrat|1 month ago

I don’t think this is a fair retort. This is not being marketed towards people who have any inkling about how any of this works. The linked press release is clearly trying to get the average person jazzed up about wiring their medical history and fitness data to ChatGPT.

ChatGPT is just supposed to “work” for the lay person, and quite often it just doesn’t. OpenAI is already being sued by people for stochastic parroting that ended in tragedy. In one case they’ve tried to use the rather novel affirmative defense that they’re not liable because using ChatGPT for self-harm was against the terms of service the victim agreed to when using the service.

mabedan|1 month ago

Right. GPT is a glorified keyboard prediction, and people should treat it as such. I don’t get it when people get mad at the output.

125123wqw1212|1 month ago

I mean, if someone talked to you your whole life assuming you are autistic, that's kind of fucked up?

dinkumthinkum|1 month ago

I think you are definitely right. People need to learn to be more resilient. People are in such a hurry to give over their lives to Sam Altman (cue the "decentralizers and democratizers").