Okawari|10 months ago
I don't like LLMs for two reasons:
* I can't really get a feel for the veracity of the information without double checking it. A lot of the context I get from reading results in a traditional search engine is lost when I get an answer from an LLM. I find it uncomfortable to just accept the answer, and if I have to double check it anyway, the LLM's answer is kind of meaningless and I might as well have used a traditional search engine.
* I'm missing out on learning opportunities I would otherwise get by reading or skimming a larger document to find the answer. I regularly skim a lot of documentation, and I can often recall things I just happened to read while looking for a solution to a different problem. I would hate it if an LLM dropped random tidbits of information on me when I wanted a concrete answer, but as a side effect of my own information gathering, I like it.
If I were to use an AI assistant, I would want one that helps me search and curate the results rather than trying to answer my question directly, hopefully in a sleeker way than Perplexity does with its sources feature. Something like the sketch below.
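A rough sketch of what I mean, with web_search as a hypothetical stand-in for a real search backend (none of the names here are a real API):

```python
# The assistant ranks and returns sources instead of synthesizing an
# answer, so the reader keeps the context of who said what.
from dataclasses import dataclass

@dataclass
class Source:
    title: str
    url: str
    snippet: str

def web_search(query: str) -> list[Source]:
    """Hypothetical stand-in; swap in a real search API client."""
    # Placeholder results so the sketch runs end to end.
    return [
        Source("Official docs", "https://example.org/docs",
               "authoritative reference for the thing you searched"),
        Source("Some blog", "https://example.com/post",
               "a secondhand summary of the same thing"),
    ]

def curate(query: str, limit: int = 5) -> list[Source]:
    """Return ranked sources for the user to read, not an answer."""
    results = web_search(query)
    terms = query.lower().split()
    # Crude keyword ranking; a real assistant could use an LLM as the
    # ranker while still surfacing only the underlying documents.
    results.sort(key=lambda s: -sum(t in s.snippet.lower() for t in terms))
    return results[:limit]

if __name__ == "__main__":
    for s in curate("official reference"):
        print(s.url, "-", s.title)
```

The point of the design is that the model never gets to be the last word; it only decides which primary sources are worth my time.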
SoftTalker|10 months ago
At least that has been my experience. I admit I don't use LLMs very much.
graemep|10 months ago
This is my main reason for not using LLMs as a replacement for search. I want an accurate answer. I quite often search for legal or regulatory issues, health and scientific questions, and specific facts about all sorts of things, and I want authoritative sources.
da_chicken|10 months ago
You check the information you decide should be verified.
ryandrake|10 months ago
An LLM response without explicit mention of its provenance leaves no way to even guess whether it is authoritative.
theamk|10 months ago
What do you even use for double-checking? Some random low-quality content farm? A glitchy LLM? A dodgy mirror of the official docs full of ads? Or do you actually dig into the source code for this?
And do you keep double-checking all the other information on the page? "A TOMLDecodeError will be raised on an invalid TOML document." Are you going to start an interactive session and check which error actually gets raised?
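For that one sentence, the check happens to be cheap; a minimal session, assuming Python 3.11+ (tomllib ships in the standard library from 3.11 on):

```python
import tomllib

# A valid document parses fine.
print(tomllib.loads('name = "example"'))  # {'name': 'example'}

# An invalid document: key with no value.
try:
    tomllib.loads("name = ")
except tomllib.TOMLDecodeError as exc:
    print("raised TOMLDecodeError, as documented:", exc)
```

But that is one claim out of dozens on a typical page, and most are nowhere near this cheap to test.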
npoc|10 months ago
Just because you can find multiple independent sources saying the same thing doesn't mean it's correct.
worik|10 months ago
"What I tell you three times is true"
__d|10 months ago
Part of why I prefer a search engine is that I can see who is saying it and in what context. It might be Wikipedia, but also the CIA World Factbook. Or some blog, but also python.org.
Or (lately) it might be AI SEO slop, reworded across ten sites with nothing definitive, which means I need to change my search strategy.
I find it easier (and quicker) to get to a believable result via a search engine than going via ChatGPT and then having to check what it claims.
leptons|10 months ago
And this is how LLMs perform when LLM-rot hasn't even become widely pervasive yet. As time goes on and LLMs regurgitate their own output back into themselves, they will become even less trustworthy. I really can't trust what an LLM says, especially when it matters, and the more it lies, the less I can trust it.
bluGill|10 months ago