(no title)
smiley1437 | 7 months ago
I have friends who are highly educated professionals (PhDs, MDs) who just assume that AI/LLMs make no mistakes.
They were shocked to learn that hallucinations are even possible. I wonder if there's a halo effect, where the perfect grammar, structure, and confidence of LLM output cause some users to assume expertise?
bayindirh | 7 months ago
AI, in all its glory, is seen as an extension of the computer: a deterministic thing, meticulously crafted to provide undisputed truth, which can't make mistakes because computers are deterministic machines.
The idea of LLMs being networks of weights plus some randomness is an abstraction that's both too vague and too complicated for most people. Companies also tend to say this part very quietly, so when people finally read the fine print, they're shocked.
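For what it's worth, the "randomness" part is literally just sampling. A minimal sketch, assuming the network has already produced scores (logits) for each candidate next token; the names here are illustrative, not any real model's API:

    import math, random

    def sample_next_token(logits, temperature=0.8):
        # Temperature rescales the scores: lower = more deterministic,
        # higher = more random.
        scaled = [score / temperature for score in logits]
        # Softmax turns scores into a probability distribution.
        top = max(scaled)
        exps = [math.exp(s - top) for s in scaled]
        total = sum(exps)
        probs = [e / total for e in exps]
        # Draw one token index from that distribution; two runs on the
        # same prompt can return different tokens.
        return random.choices(range(len(logits)), weights=probs, k=1)[0]

Same weights, same prompt, different output. That's the part that gets said very quietly.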
viccis | 7 months ago
I think it's just that LLMs model the generative probability distributions of token sequences so well that what they're actually nearly infallible at is producing convincing results. Often the correct result is also the most convincing, but other times what seems most convincing to an LLM just happens to be most convincing to a human regardless of correctness.
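To make that concrete: an autoregressive LM scores a sequence as the product of per-token conditional probabilities, and nothing in that score measures truth. A toy illustration with made-up numbers (not real model output):

    import math

    def sequence_logprob(token_probs):
        # Score = sum of log P(token_i | tokens before it).
        # Log space avoids multiplying tiny numbers together.
        return sum(math.log(p) for p in token_probs)

    # Hypothetical per-token probabilities for two continuations.
    plausible_but_wrong = [0.9, 0.8, 0.85]
    correct_but_awkward = [0.6, 0.5, 0.7]

    # The fluent-but-false continuation wins on likelihood alone.
    print(sequence_logprob(plausible_but_wrong) >
          sequence_logprob(correct_but_awkward))  # True

The decoder picks whatever scores highest under that objective; "convincing" and "correct" only coincide when the training data made them coincide.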
throwawayoldie | 7 months ago
> In computer science, the ELIZA effect is a tendency to project human traits — such as experience, semantic comprehension or empathy — onto rudimentary computer programs having a textual interface. ELIZA was a symbolic AI chatbot developed in 1966 by Joseph Weizenbaum and imitating a psychotherapist. Many early users were convinced of ELIZA's intelligence and understanding, despite its basic text-processing approach and the explanations of its limitations.
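The whole trick really was that thin. A toy reconstruction of the pattern-and-reflect idea in a few lines (not Weizenbaum's actual script, which used a more elaborate keyword-ranking scheme):

    import re

    # Each rule: a pattern to match, and a template that reflects it back.
    RULES = [
        (r"I need (.*)", "Why do you need {0}?"),
        (r"I am (.*)", "How long have you been {0}?"),
        (r"(.*)\bmother\b(.*)", "Tell me more about your family."),
    ]

    def respond(text):
        for pattern, template in RULES:
            match = re.match(pattern, text, re.IGNORECASE)
            if match:
                return template.format(*match.groups())
        return "Please go on."  # stock deflection when nothing matches

    print(respond("I am feeling anxious"))
    # -> How long have you been feeling anxious?

No comprehension anywhere in that loop, yet users attributed understanding to it anyway.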
throwawayoldie | 7 months ago
(To be fair, in many cases, I'm not terribly interested in learning the details of their field.)
emporas | 7 months ago
There are lies, statistics and goddamn hallucinations.
jasonjayr | 7 months ago
We've barely begun to grapple with modern media literacy, and now we have machines that talk like 'trusted' face-to-face humans and can be "tuned" to suggest specific products or adopt whatever tone the owner/operator of the system wants.
dsjoerg | 7 months ago
Highly educated professionals, in my experience, are often very bad at applied epistemology -- they have no idea what they do and don't know.