top | item 42331948

pie420 | 1 year ago

nothing chatgpt says is with maximum confidence. the EULA and terms of use are riddled with "no guarantee of accuracy" and "use at own risk"

albumen | 1 year ago

No, they're right. ChatGPT (and all chatbots) responds confidently while making simple errors. Disclaimers shown at signup, or tucked into tiny corner text, are completely at odds with the actual chat experience.

crackrook | 1 year ago

What I meant to say was that the model uses the verbiage of a maximally confident human. In my experience the interns worth having have some sense of the limits of their knowledge and will tell you "I don't know" or qualify information with "I'm not certain, but..."

If an intern set their Slack status to "There's no guarantee that what I say will be accurate; engage with me at your own risk," that wouldn't excuse their answering every question as if they wrote the book on the subject.

daveguy | 1 year ago

I think the point is that an LLM almost always responds with the appearance of high confidence. It will hallucinate far sooner than say "I don't know."

Terr_ | 1 year ago

And we, as humans, have a hard time compartmentalizing and setting aside a lifetime of learned language cues, which typically correlate with attention to detail, intelligence, time investment, etc.

New technology allows those signs to be counterfeited quickly and cheaply, and it tricks our subconscious despite our best efforts to be hyper-vigilant. (Our brains don't want to do that; it's expensive.)

Perhaps a stopgap might be to make the LLM say everything in a hostile villainous way...

Draiken | 1 year ago

They aren't talking about EULAs. It's about how the answers are delivered.