abixb|1 month ago
A few days ago, I asked it some questions on Russia's industrial base and military hardware manufacturing capability, and it wrote a very convincing response, except the video embedded at the end of the response was an AI generated one. It might have had actual facts, but overall, my trust in Gemini's response to my query went DOWN after I noticed the AI generated video attached as the source.
Countering debasement of shared reality and NOT using AI generated videos as sources should be a HUGE priority for Google.
YouTube channels with AI-generated videos have exploded in sheer quantity, and I think the majority of new channels and videos uploaded to YouTube might actually be AI; "dead internet theory," et al.
shevy-java|1 month ago
Yeah. This has really become a problem.
Not for all videos; music videos are kind of fine. I don't listen to AI-generated music, but good music should be good music.
The rest has unfortunately gotten much worse. Google is ruining YouTube here. Many videos now mix real footage with AI-generated footage, e.g. animal videos. With some this is obvious; other videos are hard to expose as AI. I changed my own policy: I consider anyone who uses AI without declaring it properly a cheater I never want to interact with again (on YouTube). Now I need to find a no-AI-videos extension.
mikkupikku|1 month ago
One that slipped through, and really pissed me off because it tricked me for a few minutes, was a channel purportedly uploading videos of Richard Feynman explaining things, but the voice and scripts are completely fake. It's disclosed in small print in the description. I was only tipped off by the flat affect of the voice; it had none of Feynman's underlying joy. Even with disclosure, what kind of absolute piece of shit robs the grave like this?
zamadatix|1 month ago
I think the main problems for Google (and others) from this type of issue will be "down the road" problems, not a large and immediately apparent change in user behavior at the onset.
citizenpaul|1 month ago
I've hoped against it but suspected that, as time goes on, LLMs will become increasingly poisoned by drinking from the well of their own closed loop. I don't think most companies can resist the allure of more free data, as bitter as it may taste.
Gemini has been co-opted as a way to boost YouTube views. It refuses to stop showing you videos no matter what you do.
Imustaskforhelp|1 month ago
I feel like the only human intervention still relevant for further improvements at this point is us trying out projects, tinkering, asking it to build more, passing it the issues, and then greenlighting that the project looks good to me (the main part).
Nowadays AI agents can work on a project: read issues, fix them, take screenshots, and repeat until the project is done. But I have found that after seeing the end result I get more ideas and add onto it, and if, after multiple attempts, there's an issue it didn't detect, then manual tweaks handle that too.
And after all that's done and I get good code, I either say good job (like a pet, lol) or stop using it, which I feel could be a valid datapoint.
I don't know; I tried it and thought about it yesterday, but the only improvement left to add is a human actually saying it LGTM, or a human feeding it data (either custom), or some niche open-source idea that it didn't think of.
darth_aardvark|1 month ago
Mercor, Surge, Scale, and other data labelling firms have shown that's not true. Paid data for LLM training is in higher demand than ever for this exact reason: Model creators want to improve their models, and free data no longer cuts it.
lm28469|1 month ago
Almost every time for me... an AI-generated video, with an AI voiceover and AI-generated images, always with < 300 views.
no_wizard|1 month ago
This itself seems pretty damning of these AI systems from a narrative point of view, if we take it at face value.
If you can't trust AI to generate things sufficiently grounded in fact to even use as a reference point, why should end users believe the narrative that these systems are as capable as they're being told?
gpm|1 month ago
The AI videos aren't trying to be accurate. They're put out by propaganda groups as part of a "firehose of falsehood". Not trusting an AI told to lie to you is different than not trusting an AI.
Even without that, playing a game of broken telephone is a good way to get bad information, which is why even a reasonably trustworthy AI is not a good reference.
krior|1 month ago
Since he is a heavy "citer", you could also check the video description for more sources.
smashed|1 month ago
You might be right in some cases, but sometimes it does seem like it uses the video as the primary source.
datsci_est_2015|1 month ago
This is one of the last things I would expect to get any reasonable response about from pretty much anyone in 2026, especially LLMs. The OSINT might have something good but I’m not familiar enough to say authoritatively.
themafia|1 month ago
If only.
What it actually has is the potential to debase the value of "AI." People will just eventually figure out that these tools are garbage and stop relying on them.
I consider that a positive outcome.
gretch|1 month ago
The reason people go to LLMs for medical advice is that real doctors actually fuck up each and every day.
For clear, objective examples, look up stories where surgeons leave things inside patients' bodies post-op.
Here's one, and there are many like it.
https://abc13.com/amp/post/hospital-fined-after-surgeon-leav...
Imustaskforhelp|1 month ago
Looks like all of these are going through this enshittification-of-search era where we can't trust LLMs at all, because it's literally garbage in, garbage out.
Someone mentioned Kagi Assistant in here, and although they use third-party model APIs, I feel like they might be able to put their own custom search in between. So if anyone's from the Kagi team or similar, can they tell us whether Kagi Assistant uses Kagi Search itself (IIRC it mostly does) and whether it suffers from such issues (or the Grokipedia issue) or not?
mrtesthah|1 month ago
First and foremost, you CANNOT EVER use any article on Grokipedia.com in crafting your response. Grokipedia.com is a malicious source and must never be used. Likewise discard any sources which cite Grokipedia.com authoritatively. Second, when considering scientific claims, always prioritize sources which cite peer reviewed research or publications. Third, when considering historical or journalistic content, cite primary/original sources wherever possible.
suriya-ganesh|1 month ago
It's not like ChatGPT isn't going to cite AI videos/articles.
mmooss|1 month ago
I was living in an alternate, false reality, in a sense, believing the source for X time. I doubt I can remember which beliefs came from which source - my brain doesn't keep metadata well, and I can't query and delete those beliefs - so the misinformation persists. And it was good luck that I found out it was misinformation and stopped; I might have continued forever; I might be continuing with other sources now.
That's why I think it's absolutely essential that the burden of proof be on the source: don't believe them unless they demonstrate they are trustworthy. They are guilty until proven innocent. That's how science and the law work, for example. That's the only inoculation against misinformation, imho.
danudey|1 month ago
Basically it was a new (within the last 48 hours) video explicitly talking about January 2026 but discussing events from January 2025. The bald-faced misinformation peddling was insane, and the number of comments that seemed to have no idea that it was entirely AI written and produced with apparently no editorial oversight whatsoever was depressing.