Hallucinations are an engineering problem and can be solved. Compute per dollar is still growing exponentially. Eventually this technology will be widely proliferated and cheap to operate.
> Hallucinations are an engineering problem and can be solved.
Karellen|2 years ago
I'd like a little more background on that claim.
As far as I've been able to tell from my understanding of LLMs, everything they create is a hallucination. It's just a case of "text that could plausibly come next based on the patterns of language they were trained on". When an LLM gets stuff correct, that doesn't make it not a hallucination; it's just that enough correct stuff was in the training data that a fair amount of hallucinations will turn out to be correct. Meanwhile, the LLM has no concept of "true" or "false" or "reality" or "fiction".
There's no meta-cognition. It's just "what word probably comes next?" How is that just "an engineering problem [that] can be solved"?
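The "what word probably comes next?" picture can be made concrete with a toy sketch. This is not how any real LLM is implemented — the table of probabilities below is entirely made up for illustration — but the sampling loop is structurally the point: the model picks continuations by probability, and nothing in the loop ever consults the truth of what it emits.

```python
import random

# Toy "language model": maps the previous word to a made-up distribution
# over next words. A real LLM learns these probabilities from data over a
# huge context, but the generation loop below has the same shape.
TOY_MODEL = {
    "the":     {"capital": 0.5, "moon": 0.3, "answer": 0.2},
    "capital": {"of": 0.9, "city": 0.1},
    "of":      {"france": 0.6, "mars": 0.4},  # "mars" is plausible text, not a fact
}

def next_token(context: str) -> str:
    """Sample one continuation, weighted by probability. Truth never enters."""
    dist = TOY_MODEL.get(context, {"<eos>": 1.0})
    words = list(dist)
    weights = [dist[w] for w in words]
    return random.choices(words, weights=weights)[0]

def generate(start: str, max_len: int = 4) -> str:
    """Repeatedly append whatever token the model deems plausible."""
    out = [start]
    while len(out) < max_len:
        tok = next_token(out[-1])
        if tok == "<eos>":
            break
        out.append(tok)
    return " ".join(out)

print(generate("the"))  # may print "the capital of france" or "the capital of mars"
```

Whether it emits "france" or "mars" is decided by the same dice roll — there is no separate mechanism that marks one as correct and the other as a hallucination, which is the commenter's point.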
ben_w|2 years ago
I agree it's more than a simple engineering challenge, but only because it's not entirely clear whether even humans avoid this issue, or whether we merely minimise it.
We're full of seemingly weird cognitive biases: roll a roulette wheel in front of people before asking them what percentage of African countries are in the UN, and their answers correlate with the number on the wheel.
Most of us judge the logical strength of arguments by how believable the conclusion is; by repetition; by rhyme. And worse, knowledge of cognitive biases doesn't help, as we tend to use that knowledge to dismiss conclusions we don't like rather than to test our own.
zyuiop|2 years ago
That's what happened with the internet, which was supposed to be the new Library of Alexandria, educating the world, liberating the masses from the grip of corporate ownership of data and government surveillance, and enabling free global communication and publishing.
krapp|2 years ago
It's almost entirely shit now. Instead of being educated, people are manipulated into bubbles of paranoid delusion and unreality, fed by memes and disinformation. Instead of liberation from corporate ownership, everything is infested with dark patterns, data mining, advertising, DRM and subscriptions. You will own nothing and be happy. Instead of liberation from government, the internet has become a platform for government surveillance, propaganda and psyops. Everyone used to have personal webpages and blogs; now everything is corralled into algorithmically-driven social media silos, gatekeeping content unless it drives addiction, parasociality or clickbait. What little remains on the internet that's even worth anyone's time is all but impossible to find, and will succumb to the same cancer in due time.
LLMs will go the same way, because there is no other way for technology to go. Everything will be corrupted by the capitalist imperative, everything will be debased by the tragedy of the commons, and every app, service and cool new thing will claw its way down the lobster bucket of society, across our beaten and scarred backs, to find the lowest common denominator of value and suck the marrow from its bones.
But at least I'll be able to run it on a cellphone. Score for progress?