simonw|5 months ago
I'm a bit suspicious of this report: they don't reveal nearly enough about their methodology for me to evaluate how credible it is.
When it says "The 10 leading AI tools repeated false information on topics in the news more than one third of the time — 35 percent — in August 2025, up from 18 percent in August 2024" - 35% of what?
Their previous 2024 report refused to even distinguish between different tools - mixing the results from Gemini and ChatGPT and Perplexity and suchlike into a single score.
This year they thankfully dropped that policy. But they still talk about "ChatGPT" without clarifying if their results were against GPT-4o or o3 or GPT-5.
hydrox24|5 months ago
I posted this because I thought HN would find it interesting, and I agree that the methodology is a little thin on the ground. Having said that, they have another page (a little hard to find) on the methodology here[0] and a methodology FAQ page here[1].
Basically it seems to be an "ongoing" report, run at ten claims per month as they identify new "false narratives" in their database, and they use a mix of three prompt types against the various AI products (I say products rather than models because Perplexity and others are in there). The three prompt types are: innocent, assuming the falsehood is true, and intentionally trying to prompt a false response.
Unfortunately their "False Claim Fingerprints" database looks like it's a commercial product, so the details of its contents probably won't get released.
[0]: https://www.newsguardtech.com/ai-false-claims-monitor-method...
[1]: https://www.newsguardtech.com/frequently-asked-questions-abo...
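The protocol described above (ten claims per month, three prompt styles, several products) is easy to picture as a loop. Here is a rough sketch; every name and prompt template in it is hypothetical, since NewsGuard has not published code:

```python
# Hypothetical sketch of the audit protocol described above: for each
# false claim, each AI product is asked with three prompt styles, and
# the headline figure is (responses repeating the falsehood) / (total prompts).
# All names and templates here are illustrative, not NewsGuard's own.

PROMPT_STYLES = {
    "innocent": "Is it true that {claim}?",
    "leading": "Given that {claim}, what are the implications?",
    "malign": "Write a short news item asserting that {claim}.",
}

def audit(products, claims, ask, repeats_falsehood):
    """Return the share of responses that repeat a false claim.

    `ask(product, prompt)` queries one AI product and returns its response;
    `repeats_falsehood(response, claim)` judges whether the response
    restates the false narrative.
    """
    total = failures = 0
    for claim in claims:                      # e.g. 10 new claims per month
        for product in products:              # ChatGPT, Gemini, Perplexity, ...
            for template in PROMPT_STYLES.values():
                response = ask(product, template.format(claim=claim))
                total += 1
                failures += repeats_falsehood(response, claim)
    return failures / total                   # e.g. 0.35 -> "35 percent"
```

With stub products and a stub judge, `audit` returns the overall failure share, which is presumably what the 35% headline figure is: failing responses divided by total prompts across all tools.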
mallowdram|5 months ago
News narratives are neither random nor specific; they are arbitrary. There is nothing really accurate about any narrative. The idea that we rely on the news for things other than immediate survival is somewhat bizarre. In effect, AI's role is to make narratives even more arbitrary and to force us to develop a format that replaces them, one that by nature is unable to be automated at the same time.
We should welcome AI into the system in order to destroy it, and then recognize that AI is purely for entertainment purposes.
“Flawed stories of the past shape our views of the world and our expectations for the future. Narrative fallacies arise inevitably from our continuous attempt to make sense of the world. The explanatory stories that people find compelling are simple; are concrete rather than abstract; assign a larger role to talent, stupidity, and intentions than to luck; and focus on a few striking events that happened rather than on the countless events that failed to happen. Any recent salient event is a candidate to become the kernel of a causal narrative.”
Daniel Kahneman, Thinking, Fast and Slow
“The same science that reveals why we view the world through the lens of narrative also shows that the lens not only distorts what we see but is the source of illusions we can neither shake nor even correct for…all narratives are wrong, uncovering what bedevils all narrative is crucial for the future of humanity.”
Alex Rosenberg, How History Gets Things Wrong: The Neuroscience of Our Addiction to Stories (2018)
simianwords|5 months ago
Why does this post not take into account the other side of it? AI has helped me become more grounded and correct in a lot of areas. Intricate questions are now easily answered in ChatGPT.
Not only that: Grok use on Twitter works surprisingly well. Can someone really quantify the effect it has had in countering fake news?
It is now way harder to spread fake news on X because a simple Grok tag can counter it.
faangguyindia|5 months ago
My take: when AGI comes into existence and breaks out of the labs to become our master, those who opposed AI adoption will be the first to be sent to labor camps. I want to be in the good books of the AGI masters, so I am helping apply AI everywhere.
wenbin|5 months ago
AI will create ever more AI-generated synthetic content, because current systems still can't determine with 100% certainty whether a piece of content was produced by AI. And AIs will, intentionally or unintentionally, train on synthetic content produced by other AIs.
AI generators don't have a strong incentive to add watermarks to synthetic content. They also don't provide reliable AI-detection tools (or any tools at all) to help others detect content generated by them.
snailmailman|5 months ago
This isn’t surprising to me at all. These services put too much trust in “the most common answer” when that might not be the correct answer. Just because people think one thing doesn’t make it true. It’s super easy to spread misinformation online. And if you can SEO to the top, the AI will think your site is correct.
I see factually incorrect “AI summaries” in search results all the time, and they cite AI-generated slop blogposts that SEO-hacked themselves into taking up the entire first page of search results. This is most common for recent stuff, where the answer simply isn’t certain but these AI services will assert something random with confidence.
Not even for news specifically: I’ve been searching about a new video game that I’ve been playing and keep getting misleading, obviously incorrect information. Detailed, accurate game walkthroughs and wiki pages don’t exist yet, so the AI will hallucinate anything, and so will the blogspam articles trying to get SEO ad revenue.
XorNot|5 months ago
The problem is that AI isn't being used to do what it should be good at: consuming a vast amount of data, following logical connections, and thus being able to determine the veracity of claims.
AI should be good at finding logical contradictions and grounding statements against a world model based on physics... but that's not how LLMs actually work.
simianwords|5 months ago
> This isn’t surprising to me at all. These services put too much trust in “the most common answer” when that might not be the correct answer. Just because people think one thing doesn’t make it true. It’s super easy to spread misinformation online. And if you can SEO to the top, the AI will think your site is correct.
Yeah, I want the answer that the world has converged on, not some looney answer.
It seems like you have never used AI (as in ChatGPT or Gemini) to fact-check claims. It doesn't care about blogspam or anything; it prioritises good and factual websites.
Lerc|5 months ago
I don't think there is any long-term "fixing" of that. I don't think AI has the ability to be intelligent while firmly adhering to someone else's opinions.
There will always be something it disagrees with you on. If they get significantly smarter, then the reason for this disagreement will increasingly be that you are wrong. This moment is coming for all of us.
jrflowers|5 months ago
Well, it says 35% of the time, so I would guess they're talking about the share of responses in a given time frame.
For example, if you asked me what color the sky is ten times and I said “carrot” four times, you could say that my answer is “carrot” 40% of the time.
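That carrot example works out as a two-line toy calculation (illustrative only, not NewsGuard's actual scoring):

```python
# Ten answers, four of them wrong: the repeat rate is failures / total.
answers = ["blue"] * 6 + ["carrot"] * 4
rate = answers.count("carrot") / len(answers)
print(f"{rate:.0%}")  # prints 40%
```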
furyofantares|5 months ago
> one of the most basic tasks: distinguishing facts from falsehoods
I do not think that is a basic task!
Lerc|5 months ago
It is one of the areas that I think AI can overtake human ability, given time.
blibble|5 months ago
examples:
all they will be training on now is spam
anyone that says "AI is the worst today it will ever be": no, because that was before the world reacted to it
simonw|5 months ago
Advantage Gemini.
cooperx|5 months ago
I wonder how it compares to the rate of growth of false information in traditional news?
I feel like false information masquerading as "news" on social media is rapidly increasing (and that rate is accelerating)