top | item 46538306


125123wqw1212 | 1 month ago

I mean, if someone talked to you your whole life assuming you are autistic, that's kind of fucked up?

discuss


TheDong | 1 month ago

This response is a non sequitur; this isn't _someone_, it's an inanimate program that hallucinates responses.

If every building I went to in the US had ramps and elevators even though I'm not in a wheelchair, would it be "fucked up" that the building and architects assume I'm a cripple?

There's just as much meaning in ChatGPT saying "As you said, you have ADHD" as a building having an elevator.

In the training data for ChatGPT, the word ADHD existed and was associated with something that people call each other online, cool. How deep.

Anyway, I do assume every single user of this website, including myself, has autism (possibly undiagnosed), so do with that information what you will. I'm pretty sure most HN posters make the same assumption.

KeplerBoy | 1 month ago

That's kind of how it works though. People who know you do very much associate certain traits and labels with you.

Lio | 1 month ago

It’s an unpleasant experience to have people who think they know you, but clearly don’t, project their opinions of what they think you’re like onto you.

It’s probably a very human trait to do that, but it is a bad habit.

1412312510129 | 1 month ago

Yeah, and it's fucked up, so being dramatic is warranted.

mrweasel | 1 month ago

ChatGPT is, to my knowledge, trained on Reddit, and at least certain subreddits are basically people (or bots) telling others that they probably have ADHD/ADD. These are the "AskReddit" type of subreddit. There's a Danish subreddit for everyday questions (advice-column-style posts), and like 80% of the people there are apparently either autistic or have ADHD.

So I'm not entirely surprised that an LLM would start assuming that the user has ADD, because that's what part of its training data suggests it should.

croon | 1 month ago

In your scenario, maybe yes.

The issue is that it doesn't apply here, since ChatGPT is neither a person nor a coherent remembering/thinking being.

"Thinking" models are basically just a secondary, separately prompted, hidden output that prefaces yours, so that your output is hopefully more aligned with what you want. There's no magic beyond more tokens and trying what works.
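The two-pass pattern described above can be sketched roughly like this. This is a minimal illustration, not how any specific vendor implements it; `call_model` is a hypothetical stand-in for a real LLM API call.

```python
def call_model(prompt: str) -> str:
    # Placeholder for an actual LLM API call; here it just echoes
    # a stub so the control flow can be demonstrated end to end.
    return f"<model output for: {prompt[:40]}>"

def answer_with_hidden_thinking(user_prompt: str) -> str:
    # Pass 1: a separately prompted "reasoning" call whose output
    # is never shown to the user.
    reasoning = call_model(
        "Think step by step about how to answer, "
        "but do not answer yet:\n" + user_prompt
    )
    # Pass 2: the hidden reasoning prefaces the user's prompt;
    # only this second output is surfaced as the answer.
    return call_model(
        "Using this prior reasoning:\n" + reasoning
        + "\n\nNow answer the question:\n" + user_prompt
    )
```

The point is that "thinking" is just extra tokens generated before the visible answer, nothing structurally different from a second prompt.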

dinkumthinkum | 1 month ago

It's not a person, and it's not a thinking being.