paulgrimes1|1 month ago
I had it stop right there, and asked it to tell me exactly where it got this information; the date, the title of the chat, the exact moment it took this data on as an attribute of mine. It was unable to specify any of it, aside from nine months previous. It continued to insist I had ADHD, and that I told it I did, but was unable to reference exactly when/where.
I asked “do you think it’s dangerous that you have assumed I have a medical / neurological condition for this long? What if you gave me incorrect advice based on this assumption?” to which it answered a paraphrased mea culpa, offered to forget the attribute, and moved the conversation on.
This is a class action waiting to happen.
rafram|1 month ago
It likely just hallucinated the ADHD thing in this one chat and then made this up when you pushed it for an explanation. It has no way to connect memories to the exact chats they came from AFAIK.
efilife|1 month ago
*not entirely sure. It seems to frequently hallucinate the address
heavyset_go|1 month ago
Did some digging and there was an obscure reference to a company that folded a long time ago associated with someone who has my name.
What makes it creepier is that they have the same middle name, which isn't in my profile or on my credit card.
When I signed up for ChatGPT, not only did I turn off personalization and training on my data, I even filled out the privacy request opt-out[1] that they're required to adhere to by law in several places.
Also, given that my name isn't rare, there are unfortunately some people with unsavory histories documented online with the name. I can't wait to be confused for one of them.
[1] https://privacy.openai.com/policies/en/
rsync|1 month ago
You did all of that but then you gave them your real name?
Visa/MC payment network has no ability to transfer or check card holder name. Merchants act as if it does, but it doesn’t. You can enter Mickey Mouse as your first name and last name… It won’t make any difference.
Only AMEX and Discover have the ability to validate names.
FWIW, I have a paid account with OpenAI, for using ChatGPT, and I gave them no personal information.
yurishimo|1 month ago
Personally, I'm on the fence. I suspect I've always had a bit of that, and anecdotally it does seem to have gotten worse in the past decade, though perhaps it's just a symptom of old age (31 hehehe).
roger_|1 month ago
If you want chats to share info, then use a project.
ShakataGaNai|1 month ago
Yes, projects have their uses. But as an example - I do Python across many projects and non-projects alike. I don't want to need to tell ChatGPT exactly how I like my Python each and every time, or with each project. If it were just one or two items like that, fine, I could update its custom instruction personalization. But there are tons of nuances.
The system knowing who I am, what I do for work, what I like, what I don't like, what I'm working on, what I'm interested in... makes it vastly more useful. When I randomly ask ChatGPT "Hey, could I automate this sprinkler" it knows I use home assistant, I've done XYZ projects, I prefer python, I like DIY projects to a certain extent but am willing to buy in which case be prosumer. Etc. Etc. It's more like a real human assistant, than a dumb-bot.
Lio|1 month ago
Say I’m interested in some condition and want to know more about it, so I ask a chatbot about it.
It decides “asking for a friend” means I actually have that condition, and then silently passes that information on to data brokers.
Once it’s in the broker network, it’s treated as truth.
We lack the proper infrastructure to control our own personal data.
Hell, I bet there isn’t anyone alive who can even name every data broker, let alone contact them all to police what information they’re passing around.
GuB-42|1 month ago
ChatGPT may have picked that up and now gives people ADHD for no good reason.
llmslave2|1 month ago
The AI models are just tools, but the providers who offer them are not just providing a tool.
This also means that if you run the model locally, you're the one liable. I think this makes the most sense and gives a fairly simple place to draw the line.
Eufrat|1 month ago
ChatGPT is just supposed to “work” for the lay person, and quite often it just doesn’t. OpenAI is already being sued by people over stochastic parroting that ended in tragedy. In one case they’ve tried a rather novel affirmative defense: that they’re not liable because using ChatGPT for self-harm was against the terms of service the victim agreed to when using the service.