A counterexample to this: I asked it about NovaMin® 5 minutes ago and it essentially told me not to bother and to buy whatever toothpaste has >1450 ppm fluoride.
Such is the nature of probabilistic systems. Generally speaking, LLMs read the top N search results on the topic in question and uncritically summarize them in their answer. Emphasis on uncritically: the quality of LLM answers is therefore strongly correlated with the quality of the top search results.
This is why I am so excited about the way GPT-5 uses its search tool.
GPT-4o and most other AI-assisted search systems in the past worked how you describe: they took the top 10 search results and answered uncritically based on those. If the results were junk the answer was too.
GPT-5 Thinking doesn't do that. Take a look at the thinking trace examples I linked to - in many of them it runs a few searches, evaluates the results, finds that they're not credible enough to generate an answer and so continues browsing and searching.
That's why many of the answers take 1-2 minutes to return!
I frequently see it dismiss information from social media and prefer to go to a source with a good reputation for fact-checking (like a credible newspaper) instead.
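The search-evaluate-retry behavior described above can be sketched as a simple loop. Everything here is a hypothetical illustration — the credibility heuristic, the domain lists, and the `search_fn` interface are invented for the sketch and say nothing about how GPT-5 actually works internally:

```python
# Hypothetical sketch of an agentic "search, evaluate, search again" loop.
# The domains and scores below are illustrative assumptions only.

LOW_CREDIBILITY_DOMAINS = {"social.example.com", "seoblog.example.com"}

def credibility(url: str) -> float:
    """Toy heuristic: distrust social media / SEO farms, trust everything else."""
    domain = url.split("/")[2]
    return 0.2 if domain in LOW_CREDIBILITY_DOMAINS else 0.9

def agentic_search(query: str, search_fn, threshold: float = 0.8,
                   max_rounds: int = 3) -> list[str]:
    """Keep searching until enough credible sources are found, else give up."""
    for round_num in range(max_rounds):
        results = search_fn(query, round_num)
        credible = [url for url in results if credibility(url) >= threshold]
        if len(credible) >= 2:
            # Enough trustworthy sources to answer from.
            return credible
        # Not credible enough: continue browsing with another round of search.
    return []  # never found credible sources; decline to answer from junk
```

A system like the older top-10-summarization approach would be the degenerate case `max_rounds = 1` with no credibility filter — which is exactly why its answers were only as good as the first page of results.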
A year ago I asked it to do deep research on BioMin F, including a comparison to NovaMin and fluoride. It gave a comprehensive answer detailing the benefits of BioMin and NovaMin over regular fluoride.
What's incredible about that is that you're presenting it as a success story, when it's a nuanced topic and the model swallowed all the nuance and convinced you.
You're now here telling us how it gave you the right answer, which seems to mostly be due to it confirming your bias.
dns_snek|5 months ago
Relevant blog post: https://housefresh.com/beware-of-the-google-ai-salesman/
simonw|5 months ago
the_pwner224|5 months ago
yeasku|5 months ago
How do you know it didn't make it up? Are you an expert in the field?
typpilol|5 months ago
therein|5 months ago