plausibilitious | 8 months ago

This argument is why LLM output that fails to match reality got labelled 'hallucination'. The framing makes it seem as if the LLM only makes mistakes in a neatly verifiable manner.

The 'jpeg of the internet' framing was more apt, I think. The output of an LLM might be congruent with reality, and with how the prompt's contents represent reality. But it might also not be, and in subtle ways too.

If only all code containing any flaw refused to run. That would be truly amazing. Alas, there are orders of magnitude more sequences of commands that can run than sequences that should run.
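
A minimal Python sketch of that gap (a hypothetical example of my own, not anything from the article): a function that runs without a single error on every call, yet is subtly wrong, because its mutable default argument is shared across calls.

    def append_item(item, items=[]):
        # The default list is created once, at definition time,
        # so every call that omits 'items' mutates the same
        # object. No error is ever raised.
        items.append(item)
        return items

    print(append_item(1))  # [1]
    print(append_item(2))  # [1, 2] -- plausible-looking, wrong result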
