This response is a non-sequitur: this isn't _someone_; it's an inanimate program that hallucinates responses.
TheDong|1 month ago
If every building I went to in the US had ramps and elevators even though I'm not in a wheelchair, would it be "fucked up" that the building and architects assume I'm a cripple?
There's just as much meaning in ChatGPT saying "As you said, you have ADHD" as in a building having an elevator.
In the training data for ChatGPT, the word ADHD existed and was associated with something that people call each other online, cool. How deep.
Anyway, I do assume every single user of this website, including myself, has autism (possibly undiagnosed), so do with that information what you will. I'm pretty sure most HN posters make the same assumption.

KeplerBoy|1 month ago

Lio|1 month ago
It's probably a very human trait to do that, but it is a bad habit.

1412312510129|1 month ago

mrweasel|1 month ago
ChatGPT is, to my knowledge, trained on Reddit, and at least certain subreddits are basically people (or bots) telling others that they probably have ADHD/ADD. These are the "AskReddit" type of subreddits. There's a Danish subreddit for everyday questions (advice-column-style posts), and like 80% of the people there are apparently either autistic or have ADHD.
So I'm not entirely surprised that an LLM would start assuming that the user has ADD, because that's what part of its training data suggests it should.

croon|1 month ago
The issue is that it doesn't apply here, as it's neither a person nor a coherent memory/thinking being.
"Thinking" models are basically just a secondary, separately prompted hidden output that prefaces yours, so your output is hopefully more aligned with what you want, but there's no magic other than more tokens and trying what works.
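The two-pass flow described above can be sketched in Python. This is a minimal illustration, not any vendor's actual implementation; `call_model` is a stand-in stub for an LLM completion call, and the prompt wording is invented for the example:

```python
def call_model(prompt: str) -> str:
    """Stand-in stub for an LLM completion call (hypothetical, not a real API)."""
    if "Think step by step" in prompt:
        return "The user asked for 2+2; addition gives 4."
    return "4"

def answer_with_thinking(user_prompt: str) -> tuple[str, str]:
    # Pass 1: a separately prompted, hidden reasoning step.
    reasoning = call_model(
        f"Think step by step about how to answer:\n{user_prompt}"
    )
    # Pass 2: the visible answer, conditioned on the hidden reasoning
    # prepended to the context.
    answer = call_model(
        f"Reasoning (hidden from user):\n{reasoning}\n\n"
        f"Using the reasoning above, answer:\n{user_prompt}"
    )
    return reasoning, answer  # the caller shows only `answer` to the user

reasoning, answer = answer_with_thinking("What is 2+2?")
print(answer)  # prints "4"
```

The point of the sketch is that the "thinking" output is just ordinary tokens generated under a different prompt and hidden from the user, then fed back in as context; there is no separate mechanism beyond that.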
dinkumthinkum|1 month ago