top | item 38288977

YouTube cracks down on synthetic media with AI disclosure requirement

53 points | isaacfrond | 2 years ago | arstechnica.com

70 comments

everdrive | 2 years ago
I think some people on HN are worried about stifling creativity. The vast, VAST majority of the AI-driven content is going to be gamed product reviews / advertisements, as well as some political propaganda. It’s going to be a hellhole for usefulness.
andybak | 2 years ago
Yeah but that stuff was mostly pointless garbage before the advent of AI. Content farms, apps for endlessly generating variations of the same content, low wage and Mechanical Turk jobs etc etc.

Maybe AI will speed up the process of making this stuff either ignored or unacceptable. Things gotta get worse before they get better...

seydor | 2 years ago
So, like the "real" content. I think most people don't trust random videos, that's why they follow influencers who they consider reliable.

I think overall the distribution of quality of content will remain the same, just a lot more of it at all levels.

nathanaldensr | 2 years ago
I think there are more categories than just that--e.g., kids videos.
andy_ppp | 2 years ago
I predict a return to private communities on the Internet and bulletin boards where verified humans hang out, away from the rest of the garbage the Internet will become filled with.
idiotsecant | 2 years ago
Unlikely. Most people care about network effect more than they care about authenticity. Small enclaves of hobbyists like HN will always exist but they aren't and won't be the mainstream.
ActionHank | 2 years ago
I am expecting stamps on things that say "human made", "designed by humans". Handed out by a nonprofit called something cheesy like "The SOUL Foundation".
asylteltine | 2 years ago
One can only hope. I miss the old internet before a bunch of stupid children were on it
Waterluvian | 2 years ago
Been doing this via Discord for years and it’s great. I’ve got two hobby servers with maybe 500 people each. A local hobby server of 30 people and a friends server of maybe 15 people.

Verified? Meh. Not necessary. I can’t think of how that would improve my experience.

ajross | 2 years ago
Isn't that just Facebook? Facebook is doing just fine.
corobo | 2 years ago
Hopefully there's a search filter that comes with this disclosure requirement.

I'm all for people expressing creativity however they go about it, but I still have a preference for actual human content.

ape4 | 2 years ago
I've noticed a lot of AI-narrated videos lately.
ajsnigrutin | 2 years ago
A bunch of "review" videos are just a referral link to aliexpress, a slideshow of aliexpress photos, some AI thing that takes the item description and condenses the data to a few sentences and text to speech that reads the text...

Although, even with actual humans reviewing stuff, there is a very very high chance that the review is "influenced" (read: paid for) by the seller/manufacturer of the item... there are only a few youtubers left whose reviews I'd actually trust...

wincy | 2 years ago
I really hate when the top result for something on YouTube is an obviously AI narrated video. Even weirder when I realize it halfway through because it makes a mistake no human narrator would. I feel like someone cheated me out of my time. If you can’t spare the time to actually narrate something, why should I listen to it?
542458 | 2 years ago
Honestly that doesn’t bother me very much, so long as the script is human-written. YouTube has always had videos with TTS narration, and I’d rather be listening to that than an incomprehensible accent or $0.50 microphone.

There’s one clever channel that makes content about the Deus Ex games using voice synths trained on the games' characters - which, in addition to making the narration a bit more interesting, is very on-brand for a cyberpunk game.

Here’s a video. You can see that the transition from “real” JC Denton dialogue in the first few seconds to the fake AI voice is surprisingly smooth (helped by the admittedly weird voice acting in the game), although there are always a few spots in the video where it breaks down (which the author usually leaves in for laughs): https://youtu.be/jDYVx3nqgxw

techterrier | 2 years ago
The David Attenborough 40k lore channels were hilarious.
chartpath | 2 years ago
There is a lot of synthetic content about academic subjects on YT now, and it's very low quality. I used to search for lectures to listen to while walking or driving but now need to wade through tons of enshittified spam. Even if it's reading wikipedia or other long form articles, the voices and graphics are bad.

Actually I paid for Blinkist recently and really enjoyed it at first. They have a lot of "blinks" that state at the end that the voice was synthetic and I was legitimately surprised at the quality, having not even noticed until they told me.

This seems like a good move for YT to maintain a basic level of quality (which I'm amazed can actually get worse), but I suspect it's a pretext to avoid paying out to "illegitimate creators" for commercial reasons in a way that makes them look like they care about people.

seydor | 2 years ago
They should label misleading or fake content, not all "synthetic content"; otherwise it's discriminatory. There are lots of videos narrated by synthetic AI voices. Lots of users don't speak English well or don't have a good accent, and there's nothing wrong with that.
taneq | 2 years ago
Some people just want their Warhammer 40k lore deep-dives narrated by David Attenborough, and I don't see why that's bad.
lern_too_spel | 2 years ago
Second sentence of the article: "We’ll require creators to disclose when they've created altered or synthetic content that is realistic [emphasis added]."

From the second paragraph: "For example, this could be an AI-generated video that realistically depicts an event that never happened, or content showing someone saying or doing something they didn't actually do."

They are not requiring labeling of all synthetic content.

ajross | 2 years ago
Why would it be bad to have videos narrated by synthetic AI voices carry a label declaring that the voice is synthetic? This isn't a censorship proposal.
wepple | 2 years ago
I challenge you to define “misleading”

Synthetic is at least relatively straightforward to categorize; it’s not based on the content but on the production process & output.

I do see your concern about using synthesis to assist with language barriers, but we should be able to distinguish between an actual human video with synthetic narration and the floods of terrible synthetic voice over non-human slide shows & “borrowed” content.

IG_Semmelweiss | 2 years ago
this is the kind of thing that algorithms are really good at (determining provenance/AI traces, at scale).

search has been trying to fingerprint users forever, it's time to point their algorithmic guns at something far more useful for the actual end user.

it makes sense for search algorithms to spot these and then flag them accordingly if they have not been disclosed.

criley2 | 2 years ago
Algorithms are NOT good at detecting AI, and every attempt so far has been laughably terrible. For example, the internet is already filling up with students complaining about teachers failing their schoolwork over false positives from AI text detectors.

And there's the reality of survivorship bias when it comes to examples: the only AI art you notice is the AI art that didn't fool you, not the full set of all AI images you saw. Similarly, if the algorithm can find the easy ones but misses the better ones, is it really that useful? Or is it just training more realistic AI to evade both you and the algorithm's detection?

The idea that people will fundamentally be able to differentiate AI and human works makes no sense looking forward; it's an artifact of current AI quality that will hold for the next few years at most. If you're not preparing for fully indistinguishable AI text/image/video/audio, then you're not preparing for the future.

orbital-decay | 2 years ago
With an automatic system, what will actually happen is even more algorithmic bureaucracy, and real people being banned for false positives or for using any ML-assisted techniques at all. These algorithms are snake oil; there's simply no reliable way to detect this. YouTube is enough of a dumpster fire already, and its automated systems are being exploited left and right.

Make no mistake, this requirement is meant to cover YouTube's ass, not to do anything useful for you. How it will be enforced is entirely up to them. I suspect it will be much closer to "occasionally banning someone to appease the outraged crowd" than to "algorithmic banhammer".

beej71 | 2 years ago
Are we going to see this on every Hollywood blockbuster trailer?
gumballindie | 2 years ago
A sensible choice - procedurally generated content should be disclosed, not only by YouTube but also by Twitter, Reddit, and, most importantly, news websites.
michaelcampbell | 2 years ago
Well, that will sure work. About as well as the Craigslist ad poster who says, "No scammers."
cabirum | 2 years ago
Why would "AI" be treated in a special way among other tools in a CG artists toolbox?

Should I label a video where "AI" applied denoising or color grading?

I could hire an actor to professionally fake a voice of a celebrity vs I could generate a voice with "AI".

What even is the definition of "AI" vs. a "simple" ML or genetic model, or a sufficiently advanced algorithm?

Seems like another reason for arbitrary content removal because a video is suspected/highly likely to be made with "AI".

wepple | 2 years ago
All of the examples you’ve given prevent you from trivially scaling content creation to flood the world with nonsense.

I have no idea what the intention behind this is, but I suspect it might have more to do with preventing extremely low-quality content that doesn’t have a human in the loop?