Why do you expect hallucination frequency to be the same as in humans when LLMs don't even remotely compare to a human brain yet? And what are they supposed to "look like"?
This always reminds me of the time Bing's chat AI doubled down on a wrong fact about the Avatar 2 movie. People used that as evidence that the technology is dumb, when it's exactly the behaviour you can observe in many people every day. And there's a reason adults do it less frequently than children.
You could make the argument that what we currently see are effectively internal monologues. It's extremely hard to evaluate how much subconscious or conscious filtering happens between a human's internal state and their eventual outbound communication, but I wouldn't be at all surprised if the upstream hallucination rate in humans were much higher than you'd think.
alpaca128|2 years ago
Compare the hallucination behaviour of a 7B model with a 70B model and then GPT-4 and you'll quickly see that the frequency of hallucinations right now doesn't mean much.
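(For what it's worth, that comparison is easy to script. Here's a minimal sketch, assuming a generic ask() helper you'd wire to whatever API or local runtime serves each model; the model names and the tiny QA set are illustrative, and the substring check is a crude proxy for a real grader:)

    # Crude hallucination-rate comparison across model sizes.
    # `ask` is a dummy stand-in so the sketch runs end to end;
    # replace it with a real call to your API or local runtime.

    QA = [
        ("In what year did Apollo 11 land on the Moon?", "1969"),
        ("What is the chemical symbol for gold?", "au"),
        ("Who wrote 'Pride and Prejudice'?", "jane austen"),
    ]

    def ask(model: str, question: str) -> str:
        return "I don't know"  # placeholder answer

    def hallucination_rate(model: str) -> float:
        # Count answers that miss the expected fact (substring check).
        wrong = sum(1 for q, fact in QA if fact not in ask(model, q).lower())
        return wrong / len(QA)

    for model in ("llama-2-7b", "llama-2-70b", "gpt-4"):
        print(f"{model}: {hallucination_rate(model):.0%} missed")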
regularfry|2 years ago
By analogy to Kahneman and Tversky's System 1 and System 2, the whole field of Prospect Theory is about how often System 1 is wrong. This feels connected.
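(To make that concrete: with the Tversky–Kahneman value function and their commonly cited fitted parameters, a fair 50/50 gamble of +100/-100 already comes out subjectively negative, even though its expected value is zero. That's the kind of systematic System 1 miss the theory quantifies. A minimal sketch:)

    # Prospect-theory value function, Tversky & Kahneman (1992) parameters.
    ALPHA = 0.88   # diminishing sensitivity to gains and losses
    LAMBDA = 2.25  # loss aversion: losses loom ~2.25x larger than gains

    def value(x: float) -> float:
        return x**ALPHA if x >= 0 else -LAMBDA * (-x)**ALPHA

    # A fair coin flip for +/-100 has expected value 0, but a negative
    # prospect value, so System 1 predictably turns the bet down.
    print(0.5 * value(100) + 0.5 * value(-100))  # ~ -36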
BlueTemplar|2 years ago
Yesterday I read "Building a deep learning rig" as "Building a deep learning pig" at first, for some reason I can't explain...