My parents were tricked the other day by a fake YouTube video of a "racist cop" doing something bad, and got outraged by it. I watched part of the video, and even though it felt off I couldn't immediately tell for sure whether it was fake. Nevertheless, I googled the names and details and found nothing but repostings of the video. Then I looked at the YouTube channel info, and it said the channel uses AI for "some" of its videos to recreate "real" events. I really doubt that.. it all looks fake. I'm just worried about how much divisiveness this kind of stuff will create, all so someone can profit off YouTube ads.. it's sad.
spicyusername|1 month ago
Everyone I know has strong opinions on every little thing, based exclusively on their emotional reactions and feed consumption. Basically no one has expertise commensurate with their conviction, but being informed is not required to be opinionated or exasperated.
And who can blame them (us). It is almost impossible to escape the constant barrage of takes and news headlines these days without being a total luddite. And each little snippet worms its way into your brain (and well being) one way or the other.
It's just been too much for too long and you can tell.
_heimdall|1 month ago
It's odd to me to still use "luddite" disparagingly while implying that avoiding certain tech would actually have high-impact benefits. At that point I can't help but think the only real issue with being a luddite is not following the crowd and fitting in.
gooob|1 month ago
Ntrails|1 month ago
It really isn't that hard, if I'm looking at my experience. Maybe a little stuff on here counts. I get my news from the FT, it's relatively benign by all accounts. I'm not sure that opting out of classical social media is particularly luddite-y, I suspect it's closer to becoming vogue than not?
Being led around by the nose is a choice still, for now at least.
b00ty4breakfast|1 month ago
Simulacra and Simulation came out in '81, for an example of how long this has been a recognized phenomenon
bombcar|1 month ago
So we emotionally convince ourselves that we have solved the problem so we can act appropriately and continue doing things that are important to us.
The founders recognized this problem and attempted to set up a Republic as an answer to it, so that each voter wasn't asked "do I know everything about everything so I can select the best person?" but instead "of this finite, smaller group, who do I think is best to represent me at the next level?" We've basically bypassed that; every voter knows who ran for President last election, but hardly anyone can identify their local representative within their own party (which is where candidates are selected, after all).
baubino|1 month ago
It’s quite easy actually. Like the OP, I have no social media accounts other than HN (which he rightfully asserts isn’t social media but is the inheritor of the old school internet forum). I don’t see the mess everyone complains about because I choose to remove myself from it. At the same time, I still write code every day, I spend way too much time in front of a screen, and I manage to stay abreast of what’s new in tech and in the world in general.
Too many people conflate social media with technology more broadly, and thus make the mistake of thinking that turning away from social media means becoming a luddite. You can escape the barrage of trolls and hot takes by turning off social media while still participating in the much smaller but saner tech landscape that remains.
vitaflo|1 month ago
sdoering|1 month ago
Then I am very proudly one. I don't do TikTok, FB, IG, LinkedIn or any of this crap. I do a bit of HN here and there. I follow a curated list of RSS feeds. And twice a day I look at a curated, grouped list of headlines from around the world, built from a multitude of sources.
Whenever I see a yellow press headline from the German bullshit print medium "BILD" when paying for gas or out shopping, I can't help but smile. That people pay money for that shit is - nowadays - beyond me.
To be fair, this was a long process, and I still regress sometimes. I started my working life on the editorial team of an email portal. Our job was to generate headlines that would distract people who logged in to read their mail and get them to read our crap instead - because ads embedded within content paid far better than ads around emails.
So I actually learned the trade. And learned that outrage (or sex) sells. This was some 18 or so years ago - the world changed since then. It became even more flammable. And more people seem to be playing with their matches. I changed - and changed jobs and industries a few times.
So over time I reduced my news intake. And during the pandemic I learned to drastically reduce my social media usage - it is just not healthy for my state of mind, because I am way too easily dopamine-addicted and triggerable. I am a classic xkcd.com/386 case.
bigmeme|1 month ago
Case in point: if you ask for expertise verification on HN you get downvoted. People would rather argue their point, regardless of validity. This site’s culture is part of the problem and it predates AI.
b3lvedere|1 month ago
Customer asked if reporting these kinds of illegal ads would be the best course. Nope, not by a long shot. As long as Google gets its money, they will not care. Ads have become a cancer of the internet.
Maybe i should setup a Pi-Hole business...
bryanrasmussen|1 month ago
Also, can you set Windows not to allow ad notifications through to the notification bar? If not, that should also be a point of the law.
Now I bet somebody is going to come along and scold me for trying to solve social problems by suggesting laws be made.
ericmcer|1 month ago
If you have that you will never get scared by a popup in Chrome.
lrvick|1 month ago
On the actual open decentralized internet, which still exists, mastodon, IRC, matrix... bots are rare.
tomaskafka|1 month ago
Any platform that wants to resist bots needs to:
- tie personas to real or expensive identities
- force people to add an AI flag to AI content
- let readers filter out content marked as AI
- be absolutely ruthless in permabanning anyone who posts AI content unmarked: one strike and you are dead forever
The issue then becomes that marking someone as “posts unmarked AI content” becomes a weapon. No idea about how to handle it.
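As a thought experiment, the policy sketched above is simple enough to express in a few lines. This is a hypothetical sketch (the `Post`/`Platform` names and the `marked_as_ai` flag are invented for illustration), not any real platform's API:

```python
from dataclasses import dataclass, field

@dataclass
class Post:
    author: str
    text: str
    marked_as_ai: bool  # author-declared AI flag

@dataclass
class Platform:
    banned: set = field(default_factory=set)
    feed: list = field(default_factory=list)

    def submit(self, post: Post) -> bool:
        # Banned personas can never post again.
        if post.author in self.banned:
            return False
        self.feed.append(post)
        return True

    def report_unmarked_ai(self, post: Post) -> None:
        # One strike: posting AI content without the flag is a permaban.
        if not post.marked_as_ai:
            self.banned.add(post.author)

    def view(self, hide_ai: bool = True) -> list:
        # Readers can filter out content marked as AI,
        # and banned authors' posts disappear from the feed.
        return [p for p in self.feed
                if not (hide_ai and p.marked_as_ai)
                and p.author not in self.banned]
```

The hard part, as the comment above notes, isn't the mechanism but the incentive it creates: `report_unmarked_ai` takes an accusation at face value, which is exactly how the flag becomes a weapon.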
iso1631|1 month ago
People left and never came back.
But those bots were certainly around in the 90s
direwolf20|1 month ago
ryanjshaw|1 month ago
raincole|1 month ago
You're training yourself with a very unreliable source of truth.
lukan|1 month ago
Some people, quite some time ago, also came to that conclusion. (And they did not even have AI to blame.)
https://en.wikipedia.org/wiki/I_know_that_I_know_nothing
lesam|1 month ago
Now that photos and videos can be faked, we'll have to go back to the older system.
djeastm|1 month ago
bradgessler|1 month ago
sheept|1 month ago
eru|1 month ago
Why not? Surely you can ask your friendly neighbourhood AI to run a consistent channel for you?
cortesoft|1 month ago
vitaflo|1 month ago
zahlman|1 month ago
phire|1 month ago
Even if I'm 100% certain it's not AI slop, it's still a very strong indicator that the videos are some kind of slop.
fallinditch|1 month ago
quantummagic|1 month ago
InsideOutSanta|1 month ago
In fact, your comment is part of the problem. You are one of the people who want to be outraged. In your case, outraged at people who think racism is a problem. So you attack one group of people, not realizing that you are making the issue worse by further escalating and blaming actual people, rather than realizing that the problem is systemic.
We have social networks like Facebook that require people to be angry, because anger generates engagement, and engagement generates views, and views generate ad impressions. We have outside actors who benefit from division, so they also fuel that fire by creating bot accounts that post inciting content. This has nothing to do with racism or people on one side. One second, these outside actors post a fake incident of a racist cop to fire up one side, and the next, they post a fake incident about schools with litter boxes for kids who identify as pets to fire up the other side.
Until you realize that this is the root of the problem, that the whole system is built to make people angry at each other, you are only contributing to the anger and division.
neilv|1 month ago
Many people seek being outraged. Many people seek to have awareness of truth. Many people seek getting help for problems. These are not mutually exclusive.
Just because someone fakes an incident of racism doesn't mean racism isn't still commonplace.
In various forms, with various levels of harm, and with various levels of evidence available.
(Example of low evidence: a paper trail isn't left when a black person doesn't get a job for "culture fit" gut feel reasons.)
Also, faked evidence can be done for a variety of reasons, including by someone who intends for the faking to be discovered, with the goal of discrediting the position that the fake initially seemed to support.
(Famous alleged example, in second paragraph: https://en.wikipedia.org/wiki/Killian_documents_controversy#... )
hn_throwaway_99|1 month ago
whattheheckheck|1 month ago
watwut|1 month ago
silisili|1 month ago
Not sure how I feel about that, to be honest. On one hand I admire the hustle for clicks. On the other, too many people fell for it and probably never knew it was a grift, making all recipients look bad. I only happened upon them researching a bit after my own mom called me raging about it and sent me the link.
Refreeze5224|1 month ago
[deleted]
blks|1 month ago
pjc50|1 month ago
actionfromafar|1 month ago
Fr0styMatt88|1 month ago
Which will eventually get worked around and can easily be masked by just having a backing track.
fsckboy|1 month ago
SilverSlash|1 month ago
zdc1|1 month ago
Of course there are still "trusted" mainstream sources, except they can inadvertently (or for other reasons) misstate facts as well. I believe it will get harder and harder to reason about what's real.
munificent|1 month ago
novok|1 month ago
charles_f|1 month ago
We truly live in wonderful times!
alex1138|1 month ago
Answer? Probably "of course not"
They're too busy demonetizing videos, aggressively copyright striking things, or promoting Shorts, presumably
hshdhdhj4444|1 month ago
ekianjo|1 month ago
josfredo|1 month ago
acatton|1 month ago
Yes. And I think this is what most tech-literate people fail to understand. The issue is scale.
It takes a lot of effort to find the right clip, cut it to remove its context, and even more effort to doctor a clip. Yes, you're still facing Brandolini's law[1], you can see that with the amount of effort Captain Disillusion[2] put in his videos to debunk crap.
But AI makes it 100× worse. First, generating an entirely convincing video only takes a bit of prompting and waiting; no skill is required. Second, you can do it at massive scale. You can easily make 2 AI videos a day. If you wanted to doctor videos "the old way" at this scale, you'd need a team of VFX artists.
I genuinely think that tech-literate folks, like myself and other hackernews posters, don't understand that significantly lowering the barrier to entry to X doesn't make X equivalent to what it was before. Scale changes everything.
[1] https://en.wikipedia.org/wiki/Brandolini%27s_law
[2] https://www.youtube.com/CaptainDisillusion
haxiomic|1 month ago
Nevermark|1 month ago
It is that it is increasingly becoming indistinguishable from not-slop.
There is a different bar of believability for each of us. None of us are always right when we make a judgement. But the cues to making good calls without digging are drying up.
And it won’t be long before every fake event has fake support for diggers to find. That will increase the time investment for anyone trying to figure things out.
This isn't things staying the same. Nothing has ever stayed the same; "staying the same" isn't a thing in nature, and it hasn't been the trend in human history.
jplusequalt|1 month ago
How many people were getting quote tweeted on Twitter with deep fake porn of them before Grok could remove the clothes off your person with a simple prompt?
unknown|1 month ago
[deleted]
BrtByte|1 month ago
atomtamadas|1 month ago
eudamoniac|1 month ago
"Great question! No, we have always been at war with Eurasia. Can I help with anything else?"
NoMoreNicksLeft|1 month ago
If I just feed it to 10 pandas, today, they're all dead.
And I suspect that humanity's position in this analogy is far closer to the latter than the former.
unknown|1 month ago
[deleted]
TiredOfLife|1 month ago
https://youtu.be/xiYZ__Ww02c
mlrtime|1 month ago
Then the comments are all usually not critical of the image but to portray the people supporting the [fake] image as being in a cult. It's wild!
theptip|1 month ago
As others have noted, it's a long-term trend, and I agree that, as you note, it'll get worse. The Russian psy-op campaigns run by the Internet Research Agency during Trump's first campaign are a notable entry: for example, they set up both fake far-left and far-right protest events on FB and used them as engagement bait for the right/left. (I'm sure the US is doing the same or worse to its adversaries too.)
Whatever fraction bots play overall, it has to be way higher for political content given the power dynamics.
phatfish|1 month ago
And yes, I know the argument that YouTube, being a platform, can be used for good and bad. But Google controls and creates the algorithm and decides what is pushed to people. Make it a dumb video hosting site like it used to be and I'll buy the "good and bad" angle.