
rain1 | 2 years ago

This is an example of hallucination.

An LLM doesn't know anything about itself. It can be pre-prompted with facts about itself, but anything beyond those facts is just plausible-sounding text made up on the spot.
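
To make that concrete, here's a minimal sketch, assuming the OpenAI Python client (openai>=1.0); the model name and prompt text are illustrative. The only facts the model can truthfully report about itself are the ones injected into the system prompt; a question outside that list invites confabulation.

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            # Everything the model can truthfully say about itself
            # has to be spelled out here.
            {"role": "system",
             "content": "You are ChatBot. Your training data ends in 2023. "
                        "You have no internet access."},
            # Not covered by the system prompt, so any specific number
            # in the reply is made up:
            {"role": "user", "content": "How many parameters do you have?"},
        ],
    )
    print(response.choices[0].message.content)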

losteric | 2 years ago

Is it possible that some of these LLMs actually have internal tools / calculators? I.e., black-boxing what ChatGPT exposes as explicit plugins?
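
For contrast, explicit tool use looks like this at the API level: a sketch assuming the OpenAI function-calling interface, with a hypothetical calculator tool. The model only emits a structured request; the arithmetic runs in the caller's code, not inside the network.

    from openai import OpenAI

    client = OpenAI()

    # Hypothetical calculator tool, declared explicitly to the model.
    tools = [{
        "type": "function",
        "function": {
            "name": "calculate",
            "description": "Evaluate an arithmetic expression.",
            "parameters": {
                "type": "object",
                "properties": {"expression": {"type": "string"}},
                "required": ["expression"],
            },
        },
    }]

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[{"role": "user", "content": "What is 12345 * 6789?"}],
        tools=tools,
    )
    # If the model opts to use the tool, it returns a structured call
    # for us to execute, rather than computing the answer internally.
    print(response.choices[0].message.tool_calls)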

gcr | 2 years ago

Even if there were some mixture-of-experts shenanigans going on, there is no introspection or reasoning, so the model isn't able to comment on or understand its "inner experience", if you can call matrix multiplications an inner experience.

qup | 2 years ago

If they did, they still wouldn't be able to comment on it.