top | item 44577284

jonrouach | 7 months ago

you're sure it's not their "feature" where calling the API with an empty string returns random hallucinations?

https://jarbon.medium.com/gpt-prompt-bug-94322a96c574

requilence | 7 months ago

No, definitely not the empty string hallucination bug. These are clearly real user conversations. They start like proper replies to requests, sometimes reference the original question, and appear in different languages.

jonrouach | 7 months ago

I had the exact same behavior back in 2023. It seemed clearly like leakage of user conversations, but it was just a bug with API calls in the software I was using.

https://snipboard.io/FXOkdK.jpg
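The failure mode described here, where buggy client code sends an empty prompt and the model free-runs on nothing, is easy to guard against. A minimal sketch, assuming a generic completion client (the `call_completion_api` name is a hypothetical stand-in, not a real SDK method):

```python
def validate_prompt(prompt: str) -> str:
    """Reject empty or whitespace-only prompts before they reach the API.

    An unconditioned completion model will happily sample text that
    looks like a reply to someone else's conversation, which is exactly
    the "leaked conversations" symptom described in this thread.
    """
    if prompt is None or not prompt.strip():
        raise ValueError(
            "refusing to send an empty prompt: the model would "
            "free-run and produce hallucinated 'replies'"
        )
    return prompt


def safe_complete(call_completion_api, prompt: str) -> str:
    # Hypothetical wrapper: validate first, then forward to whatever
    # client function actually performs the API request.
    return call_completion_api(validate_prompt(prompt))
```

Putting the check in one wrapper means a template bug upstream (e.g. an interpolation that silently produced `""`) fails loudly instead of returning plausible-looking garbage.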

JyB | 7 months ago

I don’t see anything here that would prevent an LLM from generating these. Right?

addandsubtract | 7 months ago

New Turing Test unlocked! Differentiate between real and fake hallucinations.