The disclaimer really should be much tougher: "Every LLM consistently makes mistakes. The mistakes will often look very plausible. NEVER TRUST ANY LLM OUTPUT."
That doesn't sound like a helpful attitude. Everything you read might be wrong, LLM or not - it's just a numbers game. With GPT-3 I'll trust the output a certain amount; it's still useful for some tasks, but not that many. With GPT-4 I'll trust the output more.
LLMs are impressively good at confidently stating false information as fact, though. They use niche terminology from a field, cite made-up sources and events, and come across to a layman as convincingly knowledgeable on a subject as someone who's actually an expert.
People are trusting LLM output more than they should be. And search engines that people have historically used to find information are trying to replace results with LLM output. Most people don't know how LLMs work, or how their search engine is getting the information it's telling them. Many people won't be able to tell the difference between the scraped web snippets Google has shown for years versus a response from an LLM.
It's not even an occasional bug with LLMs; it's practically the rule. They don't know anything, so they'll never say "I don't know" or give any indication of whether something they say is trustworthy or not.
But it’s correct. Without independent verification, you can never, ever trust anything that the magic robot tells you. Of course this may not matter so much for very low-stakes applications, but it is still the case.