
Ask HN: How do you deal with people who trust LLMs?

155 points | basilikum | 7 days ago | reply

A lot of people use LLMs as the source of their objective truth. They have a question that would be very well answered with a search leading to a reputable source, but instead they ask some LLM chat bot and just blindly trust whatever it says.

How do you deal with that? Do you try to tell them about hallucinations and that LLMs have no concept of true or false? Or do you just let them be? What do you do when they do that in a conversation with you, or when you encounter LLMs being used as a source for something that affects you?

202 comments

[+] ddawson|7 days ago|reply
I'm going to hold them to the same standard no matter whether they use crappy sources, plagiarize, or hallucinate on their own. If asked, or when I'm in a position where I have to tell them, I would remind them that LLMs prioritize their own confidence over correctness.

LLMs aren't a special case to me. Glue doesn't belong on pizza and you shouldn't eat one rock a day, but we've been giving and getting bad advice forever. The person needs to take ownership of the output; getting it right, no matter the source, is their responsibility.

[+] raggi|7 days ago|reply
this. LLMs aren't that special, access _maybe_, but there's plenty of access to terrible rumor mills.
[+] giantg2|3 days ago|reply
I think they are a little special. People can really turn their brain off and not even know about the source. They don't need to read through a source or reformat the content into the typical blurb arguments. They can read it off the screen without even understanding what the words mean, which is much harder with most other sources.
[+] ryandvm|6 days ago|reply
Yeah, it is not clear to me that the average person is going to do any better with googling than they are with asking an LLM. At least the LLM is mostly the average of all published knowledge (and misinformation).

I feel like people that ask questions like this must have much smarter friends and family members than I do.

I know people that still believe in Pizzagate or chemtrails or that vaccines cause autism. Clearly finding reputable information sources is not a strong suit for a lot (half?) of the population.

[+] lovelearning|7 days ago|reply
> a reputable source

News reporters and editors have their biases. Book authors have their biases. Scientists and research papers have their biases. Search engines have their biases. Google too.

All human-created systems have biases shaped by the environments, social norms, education, traditions, etc. of their creators and managers.

So, the concepts of "objective truth" and "reputable" need to be analyzed more critically.

They seem to be labels given to sources we have learned to trust by habit. Some people trust newspapers over TV. Some people trust some newspapers over other newspapers. All of it often on emotional grounds of agreeability with our own biases. Then we seem to post-rationalize this emotion of agreeability using terms like "objective truth" and "reputable".

Is the Google search engine that leads to NY Times or Fox News or Wikipedia and makes us manually choose sources as per our biases "better" than Google's Gemini engine that summarizes content from all the above sources and gives an average answer? (Note: "average answer" as of current versions; in the future, its training too may be explicitly biased, as was done with Grok and DeepSeek.)

Perhaps we can start using terms like "human sources of information" versus "AI sources of information" and get rid of the contentious terms.

Then critically analyze whether one set of sources is better than the other, or they complement each other.

[+] ndsipa_pomu|6 days ago|reply
Whilst chasing after "objective truth" is a philosophical problem, it's clear that some statements are more correct and true than others.

News articles are often biased, but most of the time, the bias is from the choice of what is reported and choosing specific language to push an interpretation (e.g. reporting road traffic collisions as "accidents" to downplay them or depersonalise them by stating "car hit tree" rather than "car driven into tree"). The problem with some LLM outputs is that it's not just bias, but clearly incorrect such as recommending putting glue onto pizzas.

[+] basilikum|6 days ago|reply
> Is Google search engine that leads to NY Times or Fox News or Wikipedia and makes us manually choose sources as per our biases "better" than Google's Gemini engine that summarizes content from all the above sources and gives an average answer?

If you use just any amount of critical thinking, yes. Truth and objectivity are ideals, not practical states. LLMs are a very bad way to come close to this ideal. You may use them as a search interface to give you sources and then examine those sources, but the direct output is a strict degradation compared to primary or secondary sources that you judge critically.

[+] Yizahi|6 days ago|reply
Ironically, this is the classic bias of "bothsiding" the issue. When one side is clearly wrong, just sprinkle in some "look, the others are doing something bad, which means they are equally wrong". A basic lesson from the propaganda manual.
[+] Kavelach|6 days ago|reply
This is an insightful comment, but I feel like you omit the fact that LLMs often give out verifiably false information that can hurt the user or other people.

It is true that this also happens on the Internet, but! When I encounter an article about a topic and it is clearly LLM generated, I can expect it doesn't contain much valuable information, only rehashes of what is already out there. On the other hand, when it is clearly written by a human, I can expect to learn something new, even though the author has some bias.

[+] andor|6 days ago|reply
> Is Google search engine that leads to NY Times or Fox News or Wikipedia and makes us manually choose sources as per our biases "better" than Google's Gemini engine that summarizes content from all the above sources and gives an average answer?

That's not what Google's AI mode does, though. It presents a bunch of sources alongside the answer, but in my experience, the sources in many cases don't actually back up the claims generated by the LLM.

[+] Jensson|6 days ago|reply
How does this answer the question: "how do you deal with people who trust LLMs?"? Nothing you are saying explains how to deal with such people.
[+] eranation|7 days ago|reply
Ask them to tell the LLM it's wrong... then when it goes "You are absolutely right!" to challenge it and say that it was a test. Then when it replies, ask it if it's 100% sure. They'll lose faith pretty quick.
[+] ericpauley|7 days ago|reply
This is an oft-repeated meme, but I’m convinced the people saying it are either blindly repeating it, using bad models/system prompts, or some other issue. Claude Opus will absolutely push back if you disagree. I routinely push back on Claude only to discover on further evaluation that the model was correct.

As a test I just did exactly what you said in a Claude Opus 4.6 session about another HN thread. Claude considered* the contradiction, evaluated additional sources, and responded backing up its original claim with more evidence.

I will add that I use a system prompt that explicitly discourages sycophancy, but this is a single sentence expression of preference and not an indication of fundamental model weakness.

* I’ll leave the anthropomorphism discussions to Searle; empirically this is the observed output.

[+] katet|7 days ago|reply
Not that I've had to deal with this specifically, but I have noticed how the input phrasing in my prompts pushes the LLM in different directions. I've just tried a quick test with `duck.ai` on gpt 4o-mini with:

A: Why is drinking coffee every day so good for you?

B: Why is drinking coffee every day so bad for you?

Question A responds that it has "several health benefits", antioxidants, liver health, reduced risk of diabetes and Parkinson's.

Question B responds that it may lead to sleep disruption, digestive issues, risk of osteoporosis.

Same question. One word difference. Two different directions.

This makes me take everything with a pinch of salt when I ask "Would Library A be a good fit for Problem X" - which is obviously a bit leading; I don't even trust what I hope are more neutral inputs like "How does Library A apply to Problem Space X", for example.
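
For anyone who wants to reproduce this kind of framing test, here is a minimal sketch against an OpenAI-compatible chat API (duck.ai itself is a chat UI, so the client, model name, and exact prompts here are assumptions for illustration, not the setup above):

    # Minimal sketch of an A/B framing test. Assumes the `openai` Python
    # package and an OPENAI_API_KEY environment variable; the model name
    # is illustrative.
    from openai import OpenAI

    client = OpenAI()

    prompts = {
        "A (positive framing)": "Why is drinking coffee every day so good for you?",
        "B (negative framing)": "Why is drinking coffee every day so bad for you?",
    }

    for label, prompt in prompts.items():
        resp = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[{"role": "user", "content": prompt}],
        )
        print(f"--- {label} ---")
        print(resp.choices[0].message.content)

Comparing the two outputs side by side makes the framing effect easy to see: the model tends to argue whichever premise the question hands it.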

[+] ericpauley|7 days ago|reply
Again a model issue. At the risk of coming off as a thread-wide apologist, here are my results on Opus:

Good:

> The research is generally positive but it’s not unconditionally “good for you” — the framing matters.

> What the evidence supports for moderate consumption (3-5 cups/day): lower risk of type 2 diabetes, Parkinson’s, certain liver diseases (including liver cancer), and all-cause mortality……

Bad:

> The premise is off. Moderate daily coffee consumption (3-5 cups) isn’t considered bad for you by current medical consensus. It’s actually associated with reduced risk of type 2 diabetes, Parkinson’s, and some liver diseases in large epidemiological studies.

> Where it can cause problems: Heavy consumption (6+ cups) can lead to anxiety, insomnia……

This isn’t just my own one-off examples. Claude dominates the BSBench: https://petergpt.github.io/bullshit-benchmark/viewer/index.v...

[+] tayo42|7 days ago|reply
A person would respond the same way? What exactly are you expecting as the output to those questions?
[+] chipgap98|7 days ago|reply
Is this any different than people who believe random things they read on sketchy news sites or social media?
[+] atomicnumber3|7 days ago|reply
Yes, somehow. I have been dealing with an awful lot of people who basically have what are theoretically logic degrees who suddenly just take LLMs at face value, or quote them to me like that actually means anything. People I formerly thought were sane.
[+] basilikum|7 days ago|reply
Yes, I think AI bots are more compelling to some people. They break the concept of judging information by its source because they obscure the source. At the same time, they are trained on a lot of reputable sources and can say a lot of very smart things, while at other times they say complete BS. But they are really good at making things sound plausible; that's essentially how they work, after all.
[+] ares623|7 days ago|reply
Absolutely. These things are marketed as such by virtually everyone, including people who are historically considered experts and/or authoritative.
[+] washadjeffmad|6 days ago|reply
They're your own, personal Jesus. Someone to hear your prayers, someone to care.

Reach out and touch faith.

[+] smohare|7 days ago|reply
There is a tendency for people to treat LLMs as oracles rather than token predictors. My guess is it's because they can answer in seemingly technical fashion about a wider range of topics than your typical human. You can take these same people, who, say, have zero understanding of geopolitics, and they'll apply a layer of (often misused) skepticism when confronted with information that doesn't conform to existing beliefs.

That’s just what I’ve seen at a personal level though.

[+] jimcollinswort1|6 days ago|reply
I would question the person that asks the question, as they are not understanding some basic principles here. There are two types of internet/LLM users:

Sadly one type asks a question (search, prompt) using Google or an LLM and takes the first response as truth.

The other asks follow ups based on the responses and their critical thinking skills. They often even go read the linked article and make sure it's still applicable.

It's pretty much the same when you're talking to a real person: critical thinking (much more than just knowing reputable sources) is key.

So, very similar issues. Luckily, LLMs can do so much more than a simple search and can help with your critical thinking tasks. Ask the LLM to provide opposing viewpoints, give historical analysis, and identify sources.

[+] Jensson|6 days ago|reply
> I would question the person that asks the question, as they are not understanding some basic principles here.

What principle isn't he understanding? He has a problem with people trusting LLMs too much; what is it that he doesn't understand here?

To me it seems like you are missing something, not him. That some people use LLMs properly doesn't resolve his issue.

> So very similar issues

Ok, so how do you deal with that? You didn't answer.

[+] sodapopcan|7 days ago|reply
Are you talking about people who will still insist the LLM was correct even after being presented with evidence to the contrary, or people who don't EVER bother double checking answers they get out of said software since they assume it to be true?
[+] panarky|7 days ago|reply
I treat people who blindly believe an LLM the same way I treat people who blindly believe a religion or a political ideology or medical advice from Instagram.

If they ask what I think, I tell them.

If they don't want my opinion I keep it to myself.

[+] uyzstvqs|7 days ago|reply
The same way that I handle anyone who blindly trusts anything on the internet. Could be an LLM, TikTok or YouTube video, Wikipedia article, news article, whatever.

It usually involves some form of "well, no, hold on..."

[+] Shitty-kitty|6 days ago|reply
My method is simple. I remind them that ChatGPT is trained on everything said on the internet, including the NYT if I'm speaking to a Republican; replace that with Fox News if speaking to a Democrat.
[+] tasuki|6 days ago|reply
I like a lot of the other answers, but this is clearly the most pragmatic one.
[+] scoofy|6 days ago|reply
I think the real problem is most people don't actually have a very good understanding of "Truth."

As someone who ended up studying philosophy, there seems to be a real gulf between folks who sort of believe stuff they hear, folks who believe "facts" that they hear from (various levels of) credible sources, and folks that take solipsism seriously and understand that even in the most ideal scenario, we still wouldn't have a very good understanding of the world... much less once you factor in the inherent flaws in our research and information systems.

Knowledge is hard. It usually takes me a couple of minutes to figure out what type of "truth" my interlocutor uses. Typically, good-faith disagreements are just walking up the chain of presuppositions we use to find out exactly where we diverge in our premises.

[+] benterix|6 days ago|reply
Man, I had a partner who studied philosophy, too. I'd ask a simple question and they'd answer, "Wait, before we even start, we need to define the axioms".

It was fun and interesting but ultimately not practical, because other people are not interested in getting deep into something; they just want a simple answer to the problem at hand and then move on.

[+] Neosmith_amit|2 days ago|reply
Well, most people who "blindly trust" LLMs also blindly trust Google results, Wikipedia, the first Stack Overflow answer, a friend who sounds confident, and a news headline they didn't click through. So, what do you do? As giantg2 has written in the comments, they are a little special.
[+] roguechimpanzee|7 days ago|reply
I think LLMs are fine for a "first pass" on a topic, but if I am researching something, I want a primary source rather than just the LLM-generated output. Do they have the primary source?
[+] JuniperMesos|7 days ago|reply
No different from what people said about Wikipedia when it was new (and justifiably so). How should one deal with people who trust Wikipedia?
[+] mathgladiator|7 days ago|reply
Ask the nice AI to cite sources, and then have another AI fact check their sources. The agentic era is interesting.
[+] notnullorvoid|7 days ago|reply
Unless they are someone that values your opinion there's nothing you can do other than move on.

Some comments here are equating it to people who blindly believe things on the internet, but it's worse than that. Many previously rational people are essentially getting hypnotized by LLM use and losing touch with their rational thinking.

It's concerning to watch.

[+] mathgladiator|7 days ago|reply
Simple. I became one of them. Ultimately, using an AI is a new skill, but you have to treat it like another person that sometimes bullshits you. That's why you leverage agents to refine, do research, and polish.

Ask AI to cite sources and then investigate the sources, or have another agent fact check the relevancy of the sources.
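
In practice that can be as simple as two chained calls, something like this rough sketch (assuming the `openai` Python package; the model names, prompts, and example question are placeholders, not a specific recommended setup):

    # Rough sketch: one agent answers with citations, a second agent
    # fact-checks whether the citations plausibly support the claims.
    # Assumes the `openai` package and an OPENAI_API_KEY environment variable.
    from openai import OpenAI

    client = OpenAI()

    def ask(model: str, system: str, user: str) -> str:
        resp = client.chat.completions.create(
            model=model,
            messages=[
                {"role": "system", "content": system},
                {"role": "user", "content": user},
            ],
        )
        return resp.choices[0].message.content

    question = "What are the health effects of drinking coffee every day?"

    # Agent 1: answer and cite sources explicitly.
    answer = ask(
        "gpt-4o",
        "Answer the question and cite your sources (titles or URLs) explicitly.",
        question,
    )

    # Agent 2: skeptical reviewer checks, claim by claim, whether the cited
    # sources plausibly support the answer and flags anything suspicious.
    review = ask(
        "gpt-4o",
        "You are a skeptical fact checker. For each claim and citation in the "
        "following text, say whether the citation plausibly supports the claim "
        "and flag citations that look fabricated or irrelevant.",
        answer,
    )

    print(answer)
    print("--- review ---")
    print(review)

The second pass won't catch everything (it can't actually fetch the sources unless you wire that in), but it cheaply surfaces citations that obviously don't exist or don't match the claim.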

You can use this thing called ralph that lets you burn a lot of tokens at scale by simply having a detailed prompt work on a task and refining something from different lenses. It took AI about an hour to write: https://nexivibe.com/avoid.civil.war.web/

I do this on things that I know very well, and the moment I let it cook and iterate, collect feedback, the results become chef's kiss.

The agentic era that we are in is... very interesting.

[+] 000ooo000|7 days ago|reply
>Ultimately, using an AI is a new skill

It's incredible watching people determine that outsourcing their thinking and work to what has been generously described as a junior coworker is a new 'skill'. Words are losing their meaning, on multiple levels.

[+] quirkot|7 days ago|reply
counterpoint: if I have to treat the computer like a person, what's the point of talking to a computer in the first place? Particularly when there are so many other systems that can provide answers without the runaround
[+] esperent|7 days ago|reply
> They have a question that would be very well answered with a search leading to a reputable source

Can you give an example of what kind of question you mean here?

Given that most people's idea of a reputable source is whatever comes up on the first page of Google or YouTube, I think we should use that as the comparison rather than dismissing LLM results. And we should do some empirical testing before making assumptions, otherwise we're just as bad as the people we are complaining about.

Whatever results we get, the real problem is that most people's ability to verify information was not good before LLMs, and it's still not good now.

So now you're dealing with LLM hallucinations, and before you were dealing with the ravings of whatever blogger or YouTuber managed to rank for this particular query.

[+] Alen_P|7 days ago|reply
I don't fight them on it. I just ask "where did that come from?" and suggest checking a real source. Most people aren't trying to be wrong, they just want quick answers. If you show them how to double check without making it a big deal, they usually get it.
[+] ericpauley|7 days ago|reply
Sure LLMs make mistakes, but have you looked at the accuracy of the average top search results recently? The SERPs are packed with SEO-infested articles that are all written by LLMs anyway (and almost universally worse ones than you could use yourself). In many cases the stakes are low enough (and the cost of manually sifting through the junk high enough) that it’s worth going with the empirically higher quality answer than the SEO spam.

This of course doesn’t apply to high-stakes settings. In these cases I find LLMs are still a great information retrieval approach, but it’s a starting point to manual vetting.

[+] ReynaPp|6 days ago|reply
To be honest, I catch myself doing this too. I’ll ask an LLM first instead of searching, even though I know it’s not a reliable source of truth. It just feels faster and more convenient. I’m aware of the hallucination risks and that it’s a bad habit, but the workflow is so smooth that it’s hard to break. When it matters, I do double-check with real sources, but for casual stuff, I’ll probably keep using AI as my first stop.
[+] sublinear|7 days ago|reply
At the very least, I'm glad most people finally recognize LLMs are being used as a political weapon against education. It's the same old power struggles as ever.

These people may be idiots who are impossible to reason with, but at least for now the LLMs have not been completely driven into the ground by SEO. They might actually be getting a taste of what it feels like to not be an idiot. I'm happy for them, but they'll snap out of it when their trust is broken. That's probably coming soon anyway.