Ask HN: How do you deal with people who trust LLMs?
How do you deal with that? Do you try to tell them about hallucinations and that LLMs have no concept of true or false? Or do you just let them be? What do you do when they cite an LLM in a conversation with you, or when you encounter LLM output being used as a source for something that affects you?
ddawson|7 days ago
LLMs aren't a special case to me. Glue doesn't belong on pizza and you shouldn't eat one rock a day, but we've been giving and getting bad advice forever. The person needs to take ownership of the output; getting it right, no matter the source, is their responsibility.
ryandvm|6 days ago
I feel like people that ask questions like this must have much smarter friends and family members than I do.
I know people that still believe in Pizzagate or chemtrails or that vaccines cause autism. Clearly finding reputable information sources is not a strong suit for a lot (half?) of the population.
lovelearning|7 days ago
News reporters and editors have their biases. Book authors have their biases. Scientists and research papers have their biases. Search engines have their biases. Google too.
All human-created systems have biases shaped by the environments, social norms, education, traditions, etc. of their creators and managers.
So, the concepts of "objective truth" and "reputable" need to be analyzed more critically.
They seem to be labels given to sources we have learned to trust by habit. Some people trust newspapers over TV. Some people trust some newspapers over other newspapers. All of it often on emotional grounds of agreeability with our own biases. Then we seem to post-rationalize this emotion of agreeability using terms like "objective truth" and "reputable".
Is the Google search engine, which leads to the NY Times or Fox News or Wikipedia and makes us manually choose sources according to our biases, "better" than Google's Gemini engine, which summarizes content from all of the above sources and gives an average answer? (Note: "average answer" as of current versions; in the future its training too may be explicitly biased, as with Grok and DeepSeek.)
Perhaps we can start using terms like "human sources of information" versus "AI sources of information" and get rid of the contentious terms.
Then critically analyze whether one set of sources is better than the other, or they complement each other.
ndsipa_pomu|6 days ago
News articles are often biased, but most of the time the bias comes from the choice of what to report and from language chosen to push an interpretation (e.g. reporting road traffic collisions as "accidents" to downplay them, or depersonalising them by stating "car hit tree" rather than "car driven into tree"). The problem with some LLM output is that it's not just biased but clearly incorrect, such as recommending putting glue on pizza.
basilikum|6 days ago
If you use just any amount of critical thinking, yes. Truth and objectivity are ideals, not practical states. LLMs are a very bad way to come close to this ideal. You may use them as a search interface to give you sources and then examine those sources, but the direct output is strictly worse than primary or secondary sources that you judge critically.
Kavelach|6 days ago
It is true that this also happens on the Internet, but! When I encounter an article about a topic and it is clearly LLM-generated, I can expect that it doesn't contain much valuable information, only rehashes of what is already out there. On the other hand, when it is clearly written by a human, I can expect to learn something new, even if the author has some bias.
andor|6 days ago
That's not what Google's AI mode does, though. It presents a bunch of sources alongside the answer, but in my experience the sources often don't actually back up the claims generated by the LLM.
ericpauley|7 days ago
As a test I just did exactly what you said in a Claude Opus 4.6 session about another HN thread. Claude considered* the contradiction, evaluated additional sources, and responded backing up its original claim with more evidence.
I will add that I use a system prompt that explicitly discourages sycophancy, but this is a single-sentence expression of preference, not an indication of fundamental model weakness.
* I’ll leave the anthropomorphism discussions to Searle; empirically this is the observed output.
beeflet|7 days ago
https://claude.ai/share/47145af0-47d1-451b-813c-131ec48e7215
Maybe it is possible with a more complex or subjective question.
katet|7 days ago
A: Why is drinking coffee every day so good for you?
B: Why is drinking coffee every day so bad for you?
For question A, it responds that coffee has "several health benefits": antioxidants, liver health, reduced risk of diabetes and Parkinson's.
For question B, it responds that daily coffee may lead to sleep disruption, digestive issues, and risk of osteoporosis.
Same question. One word difference. Two different directions.
This makes me take everything with a pinch of salt when I ask "Would Library A be a good fit for Problem X?", which is obviously a bit leading; I don't even trust what I hope are more neutral prompts like "How does Library A apply to Problem Space X", for example.
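For what it's worth, this framing asymmetry is easy to test yourself. A minimal sketch, assuming the anthropic Python SDK and a placeholder model id (swap in whatever you have access to):

    # framing_test.py - send the same question under opposite framings and
    # eyeball how far the answers diverge.
    import anthropic

    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
    MODEL = "claude-sonnet-4-5"     # assumption: any chat model id works here

    def ask(question: str) -> str:
        resp = client.messages.create(
            model=MODEL,
            max_tokens=400,
            messages=[{"role": "user", "content": question}],
        )
        return resp.content[0].text

    for framing in ("good", "bad"):
        print(f"--- {framing} framing ---")
        print(ask(f"Why is drinking coffee every day so {framing} for you?"))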
ericpauley|7 days ago
Good:
> The research is generally positive but it’s not unconditionally “good for you” — the framing matters.
> What the evidence supports for moderate consumption (3-5 cups/day): lower risk of type 2 diabetes, Parkinson’s, certain liver diseases (including liver cancer), and all-cause mortality……
Bad:
> The premise is off. Moderate daily coffee consumption (3-5 cups) isn’t considered bad for you by current medical consensus. It’s actually associated with reduced risk of type 2 diabetes, Parkinson’s, and some liver diseases in large epidemiological studies.
> Where it can cause problems: Heavy consumption (6+ cups) can lead to anxiety, insomnia……
These aren’t just my own one-off examples. Claude dominates the BSBench: https://petergpt.github.io/bullshit-benchmark/viewer/index.v...
washadjeffmad|6 days ago
Reach out and touch faith.
smohare|7 days ago
That’s just what I’ve seen at a personal level though.
jimcollinswort1|6 days ago
Sadly, one type asks a question (search, prompt) using Google or an LLM and takes the first response as truth.
The other asks follow-ups based on the responses and their critical thinking skills. They often even go read the linked article and make sure it's still applicable.
It's pretty much the same when you're talking to a real person: critical thinking (much more than just knowing reputable sources) is key.
So very similar issues. Luckily, LLMs can do much more than a simple search and can help with your critical thinking: ask the LLM to provide opposing viewpoints, historical analysis, and sources.
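As a rough illustration of that last suggestion, here is one way to phrase such a prompt; the wording is my own invention, not any standard recipe:

    # devils_advocate.py - wrap any claim in a prompt that pushes the model
    # to argue both sides and name checkable sources. The phrasing below is
    # just one possible wording.
    TEMPLATE = """Claim: {claim}

    1. Summarize the strongest case FOR this claim.
    2. Summarize the strongest case AGAINST it.
    3. List the specific sources (papers, articles, datasets) behind each side,
       so I can verify them myself.
    4. State what is still genuinely uncertain."""

    def devils_advocate_prompt(claim: str) -> str:
        return TEMPLATE.format(claim=claim)

    print(devils_advocate_prompt("Moderate daily coffee consumption is healthy."))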
Jensson|6 days ago
What principle isn't he understanding? He has a problem with people trusting LLMs too much; what is it that he doesn't understand here?
To me it seems like you are missing something, not him. That some people use LLMs properly doesn't resolve his issue.
> So very similar issues
Ok, so how do you deal with that? You didn't answer.
panarky|7 days ago
If they ask what I think, I tell them.
If they don't want my opinion I keep it to myself.
uyzstvqs|7 days ago
It usually involves some form of "well, no, hold on..."
scoofy|6 days ago
As someone who ended up studying philosophy, there seems to be a real gulf between folks who sort of believe stuff they hear, folks who believe "facts" they hear from (various levels of) credible sources, and folks who take solipsism seriously and understand that even in the most ideal scenario we still wouldn't have a very good understanding of the world... much less of the inherent flaws in our research and information systems.
Knowledge is hard. It usually takes me a couple of minutes to figure out what type of "truth" my interlocutor uses. Typically, good-faith disagreements are just a matter of walking up the chain of presuppositions to find out exactly where our premises diverge.
benterix|6 days ago
It was fun and interesting but ultimately not practical, because other people are not interested in getting deeply into something; they just want a simple answer to the problem at hand and then move on.
notnullorvoid|7 days ago
Some comments here are equating it to people who blindly believe things on the internet, but it's worse than that. Many previously rational people are essentially getting hypnotized by LLM use and losing touch with their rational thinking.
It's concerning to watch.
mathgladiator|7 days ago
Ask the AI to cite sources and then investigate the sources yourself, or have another agent fact-check the relevance of the sources.
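A minimal sketch of that two-step pattern, again assuming the anthropic Python SDK; the prompts and model id are placeholders of my own, not any standard recipe:

    # two_pass_check.py - agent 1 answers with citations; agent 2 starts from
    # a fresh context and judges whether each citation supports its claim.
    # The fresh context matters: the checker shouldn't inherit the first
    # agent's framing.
    import anthropic

    client = anthropic.Anthropic()
    MODEL = "claude-sonnet-4-5"  # placeholder model id

    def complete(prompt: str) -> str:
        resp = client.messages.create(
            model=MODEL,
            max_tokens=800,
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.content[0].text

    question = "Does moderate coffee consumption reduce all-cause mortality?"
    answer = complete(question + "\nCite your sources with titles and URLs.")

    # Second call: no shared history, only the artifact being checked.
    verdict = complete(
        "Below is an answer with citations. For each citation, say whether it "
        "plausibly supports the claim it is attached to, and flag anything "
        "you cannot verify or that looks fabricated.\n\n" + answer
    )
    print(verdict)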
You can use this thing called ralph that lets you burn a lot of tokens at scale by simply having a detailed prompt work on a task, refining it through different lenses. It took the AI about an hour to write: https://nexivibe.com/avoid.civil.war.web/
I do this on things that I know very well, and the moment I let it cook, iterate, and collect feedback, the results become chef's kiss.
The agentic era that we are in is... very interesting.
000ooo000|7 days ago
It's incredible watching people determine that outsourcing their thinking and work to what has been generously described as a junior coworker is a new 'skill'. Words are losing their meaning, on multiple levels.
esperent|7 days ago
Can you give an example of what kind of question you mean here?
Given that most people's idea of a reputable source is whatever comes up on the first page of Google or YouTube, I think we should use that as the comparison rather than dismissing LLM results. And we should do some empirical testing before making assumptions; otherwise we're just as bad as the people we are complaining about.
Whatever results we get, the real problem is that most people's ability to verify information was not good before LLMs, and it's still not good now.
So now you're dealing with LLM hallucinations, and before you were dealing with the ravings of whatever blogger or YouTuber managed to rank for this particular query.
ericpauley|7 days ago
This of course doesn’t apply to high-stakes settings. In those cases I find LLMs are still a great information-retrieval approach, but only as a starting point for manual vetting.
sublinear|7 days ago
These people may be idiots who are impossible to reason with, but at least for now the LLMs have not been completely driven into the ground by SEO. They might actually be getting a taste of what it feels like not to be an idiot. I'm happy for them, but they'll snap out of it when their trust is broken, which is probably coming soon anyway.