I think some people on HN are worried about stifling creativity. The vast, VAST majority of the AI-driven content is going to be gamed product reviews / advertisements, as well as some political propaganda. It’s going to be a hellhole for usefulness.
Yeah but that stuff was mostly pointless garbage before the advent of AI. Content farms, apps for endlessly generating variations of the same content, low wage and Mechanical Turk jobs etc etc.
Maybe AI will speed up the process of making this stuff either ignored or unacceptable. Things gotta get worse before they get better...
I predict a return to private communities on the Internet and bulletin boards where verified humans hang out, away from the rest of the garbage the Internet will become filled with.
Unlikely. Most people care about network effect more than they care about authenticity. Small enclaves of hobbyists like HN will always exist but they aren't and won't be the mainstream.
I am expecting stamps on things that say "human made", "designed by humans". Handed out by a nonprofit called something cheesy like "The SOUL Foundation".
Been doing this via Discord for years and it’s great. I’ve got two hobby servers with maybe 500 people each. A local hobby server of 30 people and a friends server of maybe 15 people.
Verified? Meh. Not necessary. I can’t think of how that would improve my experience.
A bunch of "review" videos are just a referral link to AliExpress, a slideshow of AliExpress photos, some AI thing that condenses the item description into a few sentences, and text-to-speech that reads the result...
Although, even with actual humans reviewing stuff, there is a very high chance that the review is "influenced" (i.e. paid for) by the seller or manufacturer of the item... there are only a few youtubers left whose reviews I'd actually trust...
I really hate when the top result for something on YouTube is an obviously AI narrated video. Even weirder when I realize it halfway through because it makes a mistake no human narrator would. I feel like someone cheated me out of my time. If you can’t spare the time to actually narrate something, why should I listen to it?
Honestly that doesn’t bother me very much, so long as the script is human-written. YouTube has always had videos with TTS narration, and I’d rather be listening to that than an incomprehensible accent or $0.50 microphone.
There’s one clever channel that makes content about the Deus Ex games using voice synths trained on the games’ characters - which, in addition to making the narration a bit more interesting, is very on-brand for a cyberpunk game.
Here’s a video. You can see that the transition from “real” JC Denton dialogue in the first few seconds to the fake AI voice is surprisingly smooth (helped by the admittedly weird voice acting in the game), although there are always a few spots in the video where it breaks down (which the author usually leaves in for laughs): https://youtu.be/jDYVx3nqgxw
There is a lot of synthetic content about academic subjects on YT now, and it's very low quality. I used to search for lectures to listen to while walking or driving but now need to wade through tons of enshittified spam. Even if it's reading wikipedia or other long form articles, the voices and graphics are bad.
Actually I paid for Blinkist recently and really enjoyed it at first. They have a lot of "blinks" that state at the end that the voice was synthetic and I was legitimately surprised at the quality, having not even noticed until they told me.
This seems like a good move for YT to maintain a basic level of quality (which I'm amazed can actually get worse), but I suspect it's a pretext to avoid paying out to "illegitimate creators" for commercial reasons in a way that makes them look like they care about people.
They should label misleading or fake content, not all "synthetic content"; otherwise it's discriminatory. There are lots of videos narrated by synthetic AI voices. Lots of users don't speak English well or don't have a good accent - nothing wrong with that.
Second sentence of the article: "We’ll require creators to disclose when they've created altered or synthetic content that is realistic [emphasis added]."
From the second paragraph: "For example, this could be an AI-generated video that realistically depicts an event that never happened, or content showing someone saying or doing something they didn't actually do."
They are not requiring labeling of all synthetic content.
Why would it be bad to have videos narrated by synthetic AI voices carry a label declaring that the voice is synthetic? This isn't a censorship proposal.
Synthetic is at least relatively straightforward to categorize; it’s not based on the content but on the production process and output.
I do see your concern about using synthesis to help with language barriers, but we should be able to distinguish between an actual human video with synthetic narration and the floods of terrible synthetic voiceovers on non-human slideshows and “borrowed” content.
Algorithms are NOT good at detecting AI, and every attempt so far has been laughably terrible. For example, the internet is already filling up with students complaining about teachers failing their schoolwork over false positives from AI text detectors.
And there's the reality of survivorship bias when it comes to examples: the only times you notice AI art are the times it didn't fool you, not the full set of all AI images you saw. Similarly, if the algorithm can find the easy ones but misses the better ones, is it really that useful? Or is it just training more realistic AI to evade both you and the algorithm's detection?
The idea that people will fundamentally be able to differentiate AI and human works is nonsensical from a future perspective; it's an artifact of current AI quality that will last a few years at most. If you're not preparing for fully indistinguishable AI text/image/video/audio, then you're not preparing for the future.
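The false-positive problem described above can be sketched numerically. This is a toy model - the score distributions are invented for illustration, not taken from any real detector - but it shows why overlapping "AI-likeness" scores make every threshold a bad trade-off:

```python
import random

random.seed(0)

# Toy assumption: detector scores for human-written and AI-written text
# overlap (two Gaussians with different means). Not a real detector.
human_scores = [random.gauss(0.40, 0.15) for _ in range(10_000)]
ai_scores = [random.gauss(0.60, 0.15) for _ in range(10_000)]

def rates(threshold):
    """Fraction of human text wrongly flagged, and AI text correctly flagged."""
    fp = sum(s >= threshold for s in human_scores) / len(human_scores)
    tp = sum(s >= threshold for s in ai_scores) / len(ai_scores)
    return fp, tp

for t in (0.5, 0.7, 0.9):
    fp, tp = rates(t)
    print(f"threshold={t:.1f}  false-positive rate={fp:.1%}  catch rate={tp:.1%}")
```

In this toy setup a lenient threshold flags roughly a quarter of human-written text, while a strict one lets almost all the AI text through - which is the trade-off behind the complaints about students being falsely accused.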
With an automatic system, what will actually happen is even more algorithmic bureaucracy, and real people being banned over false positives or for using any ML-assisted techniques at all. These algorithms are snake oil; there's simply no reliable way to detect this. YouTube is enough of a dumpster fire already, and its automated systems are being exploited left and right.
Make no mistake, this requirement is meant to cover YouTube's ass, not to do anything useful for you. How it will be enforced is entirely up to them. I suspect it will be much closer to "occasionally banning someone to appease the outraged crowd" than to "algorithmic banhammer".
A sensible choice - procedurally generated content should be disclosed, not only by YouTube but also by Twitter, Reddit, and, most importantly, news websites.
All of the examples you’ve given prevent you from trivially scaling content creation to flood the world with nonsense.
I have no idea what the intention behind this is, but I suspect it might be more to do with preventing extremely low quality content that doesn’t have a human in the loop?
I think overall the distribution of quality of content will remain the same, just a lot more of it at all levels.
I'm all for people expressing creativity however they go about it, but I still have a preference for actual human content.
Search has been trying to fingerprint users forever; it's time to point their algorithmic guns at something far more useful for the actual end user.
It makes sense for search algorithms to spot these and flag them accordingly if they have not been disclosed.
Lots of discussion yesterday:
https://news.ycombinator.com/item?id=38269656
Should I label a video where "AI" applied denoising or color grading?
I could hire an actor to professionally fake a celebrity's voice, or I could generate the voice with "AI".
What even is the definition of "AI" versus a "simple" ML model, a genetic model, or a sufficiently advanced algorithm?
Seems like another excuse for arbitrary content removal because a video is suspected/highly likely to have been made with "AI".