
username332211 | 4 months ago

I agree, but I still suspect OpenAI and other LLM companies do things like that when an example of a hallucination becomes popular.

If I see an example of an LLM saying something dumb posted here, I know it's going to be fixed quickly. If I encounter an example myself and refuse to share it, it may be fixed by a model upgrade in a few years, or it may still exist.
