Back in the day, before Wikipedia was even born, I made a small website called "Adalyn". The website pretended to be a very curious girl that you could teach stuff. Every time a new term was used it would ask you to explain that term, adding it to the list of definitions. And so on; it got quite clever within a week or so. Then the jerks discovered the site and within hours it was nothing but a huge perverted caricature of what it was intended to be.
The internet has - and will always have - a number of assholes whose only joy is to make others miserable in whatever way they can. So from that day on I have always designed whatever I make with abuse in mind from day one. It still doesn't always work, but for the most part that seems to be the only reasonable way to put stuff together for public consumption.
You might enjoy this video. It's part humor and part technical discussion of how his talking "Banana" bot was brigaded by users on Twitch in an effort to make him say racist words and phrases. After he got a strike on Twitch he patched the software, and then the patch got circumvented... a terrific little video about the battle between him and his viewers, which ultimately ends with his approach to sanitizing input so that the banana couldn't say anything harmful. I thought it was great.
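The place he ends up, allowlisting what the bot may say rather than blocklisting what it may not, can be sketched in a few lines (the video doesn't ship code, so this is only the shape of the idea; the word list is purely illustrative):

```python
# Blocklists lose the arms race to creative spellings ("ch4t", etc.).
# An allowlist inverts the problem: the bot only ever says words known safe.
SAFE_WORDS = {"hello", "banana", "chat", "thanks", "stream"}

def sanitize(message: str) -> str:
    """Keep only whole words found in the allowlist; drop everything else."""
    return " ".join(w for w in message.lower().split() if w in SAFE_WORDS)

print(sanitize("hello ch4t you absolute banana"))  # -> "hello banana"
```

The trade-off is obvious: the bot gets a lot more boring, but there is no input an attacker can craft that makes it say something outside the list.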
Add in the opportunity for profit and algorithms that can generate content, and you get a magic spiral of abusive and exploitative content that rewards the worst impulses.
That's going to be YouTube's downfall, since its priority seems to be to connect people to the worst of the site as firmly as possible. A year or two ago I became curious about the flat earthers and watched 2 videos to try and figure out if there was any sort of deeper meaning to their superficially ridiculous claims (spoiler: there isn't). YT kept shoving flat earth videos at me for weeks like some sort of ontological crack dealer.
>I became curious about the flat earthers and watched 2 videos to try and figure out if there was any sort of deeper meaning to their superficially ridiculous claims (spoiler: there isn't)
Whether by design or accident, the flat earth nonsense has the potential effect of unmooring people from the very notion of an accepted reality, and thus habituating them to the idea that what they inherently know about the world cannot be trusted. It literally pulls the world out from under their feet.
That is, if you can get people to believe, or even question, that something as fundamental as the shape of the earth is a lie they've been told, then you can effectively "clean-sheet" them. They become blank canvases upon which you can write the alternative reality of your choice. Not to mention that this type of nonsense generates mistrust in the status quo, as it raises the question among its victims: "who is this 'they' who lied to us about such fundamental and important matters?"
So if you then claim to be against the status quo, you have sympathizers who are now open to your message. It becomes much easier to write your story onto their now-blank canvases and serve as their champion.
About 2 months ago I discovered Don Rickles and thought he was hilarious. So of course, I spent a couple of hours going down the Don Rickles YouTube hole. The next weekend I did the same for an hour or so. And in between I would occasionally show my wife a clip that was particularly funny. Then I stopped. I’ve had my fill.
But now, all I see recommended is Don Rickles. Literally the whole sidebar except 1 or 2 videos is Don Rickles even if I’m watching a tech video. Drives me nuts.
Although this does nothing to solve the overall problem (which should not be understated!), you can personally not only delete specific videos in your view history but TURN OFF the logging of video views. See https://myactivity.google.com
If only we could get everyone to turn it off en masse.
I believe there is a technical solution to this that does not involve sanitizing the whole Internet to the level of broadcast TV (good luck with that).
A "child mode" flag would be set system-wide when the device is configured by the parent to be used by a child. This would enforce a top-down policy that only allows installing child-safe apps, or apps that propagate and enforce a similar policy. For example, Chrome would be installed but would only allow access to sites that respect the child-safe flag. YouTube would load but would only play videos and suggestions from a carefully curated set of publishers appropriate to the age. Et cetera.
Compared with previous approaches that failed due to lack of adoption (RSACi, ICRA), this system would provide a strong incentive for app and site owners to properly support the age flag; otherwise they are simply invisible in the children's market. A strong initial effort would be required from Google/Microsoft to bootstrap the initial whitelists.
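On the site side, honoring the flag could be as simple as a check before building the feed. A minimal sketch, where the "Child-Safe" header, the publisher field, and the whitelist contents are all hypothetical stand-ins for whatever the real propagation mechanism would be:

```python
# Hypothetical sketch: no "Child-Safe" header or whitelist API exists today;
# these stand in for however the OS would propagate the parent-set flag.
CURATED_PUBLISHERS = {"pbs-kids", "sesame-workshop"}  # bootstrapped whitelist

def select_videos(request_headers, catalog):
    """In child mode, return only videos from whitelisted publishers."""
    if request_headers.get("Child-Safe") != "1":
        return catalog
    return [v for v in catalog if v["publisher"] in CURATED_PUBLISHERS]

catalog = [
    {"title": "Alphabet Song", "publisher": "pbs-kids"},
    {"title": "Random upload #4471", "publisher": "unknown-uploader"},
]
print(select_videos({"Child-Safe": "1"}, catalog))
# -> [{'title': 'Alphabet Song', 'publisher': 'pbs-kids'}]
```

The important property is that unvetted uploads are invisible by default in child mode, rather than removed after the fact.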
The problem isn't technical at all. In fact, the problem stems from trying to use technology to replace humans. The problem arises because the thing your entire proposal rests on, curation, isn't being done.
> Youtube would load but only run videos and suggestions form a carefully curated set of publishers that respect the age.
Isn't YouTube's whole schtick that they don't want to do any curation, but instead want to just turn the algorithms loose on the firehose of data that's being uploaded and recommend whatever the algorithm spits out? Pivoting to human curation from there sounds like a really difficult task for them.
This could work; whitelisting is a far better and easier approach than trying to weed out "bad" stuff.
By far the best idea I've ever heard was the creation of a .kids TLD that would be essentially what you described. I believe there was a proposal for that that went nowhere.
That was the day we learned about "trolls" on the internet.
I wish this were a better world and that this was unnecessary but in retrospect it was a sweet bonding moment we still laugh about years later.
Kids are resilient as long as you are there to help them make sense of it. And you have to be. The internet and its trolls aren't going anywhere. In the end, goatse comes for us all...
Letting a child wander around the Internet without adult supervision is something akin to child neglect. Way too much clickbait, way too much malevolent content.
It's easy to blame the parents, but the problem is that some of these shock videos are designed to look like innocent kids' videos (e.g. a Peppa Pig clip), and it's not until the "shock" part has already happened that even an adult is aware of the true nature of the content.
Sure, there are some which are easily identified by the title or an obviously unofficial animation style, but if all of them were like that then I suspect YouTube would have had an easier job training their AI to detect and remove such content.
My kids got help to find kid-friendly things but were essentially free to do pretty much as they pleased online. They were given two tongue-in-cheek guidelines: No porn and no learning how to make bombs.
I think the key is to assume they are good kids who need some help finding what interests them, not bad kids who need a parent jail keeper preventing them from doing bad things.
Having said that, I was fortunate to be stationed in Germany when they were really little with access to only one American TV channel. I had lots of videos for them, not to control content but to give them entertainment. Still, content control was largely baked into the situation.
So I am somewhat self conscious that what worked back in the day may not work now and other disclaimers.
Edit: I will also add that when they were little, we had one computer in a public space. Individual computers in private spaces came much later.
Agreed. Except it is extremely common for parents to let their kids watch YouTube unsupervised (and when I mention the problems they don't want to hear it; "nah it's fine, they won't click weird stuff..."), so this raises the question of whether YouTube should be responsible too (I think they should).
That said, YouTube is banned in our house because of this and Netflix is way superior anyway.
This is odd to hear since that "free roam" idea of letting them wander for miles unsupervised is all the rage on here. I'd say that's much more reckless than letting them use the internet. Unsupervised computer use at a young age was nothing but a positive for me, yet being left unsupervised with an irresponsible neighbor's dumb kids left me with a crushed wrist and internal bruising from a bicycle accident and the beginning of lifelong weight problems from the subsequent aversion to such activity.
After about 2 months of endless “Johnny Johnny” and “Finger Family” videos which were bad enough, a video came up that used the MF word. My wife reacted appropriately and just deleted the YouTube app. Problem solved, no chance encounters with the bad content and more sanity for mom and dad. In retrospect I wish we would have done it sooner.
How are creepy cartoons (aka horror movies) anywhere near "child abuse"?
Not saying the cartoons are right or appropriate, but they're not child abuse unless actual children are molested while producing them (which I don't see how considering it's all computer-generated graphics).
The article did nothing to explain the motivation for making sick videos instead of bland knockoffs. I know my grandchildren will watch anything. Surely a sick video won't gather more views than a bland one.
Maybe the number that are sick is small and not typical.
Ford is creating vans for paedophiles to abuse kids in and do nothing about it!
Home Depot is selling construction materials for paedophiles to build basements to abuse kids!
The topic of grotesquely inappropriate kids' videos on YouTube (aka "Elsagate" videos) comes up on HN and offline every so often. I've noticed that people's opinions on it tend to fall into one of two categories: they either get self-righteous about how their child doesn't have to see them because of their supreme parenting skills, or they start waxing philosophical about the limits of AI or the importance of human curation in online communities or something like that.
What's missed in these discussions is the pure oddness of these videos. And I don't mean odd as in quirky and funny. I mean odd as in uncategorized - meaning that there's no conceivable way for some of these videos to emerge logically as a function of society. And the sheer effort put into them and endless variety seems to imply some sort of underground economy or industry behind them.
A lot of children's media is edgy and experimental in ways that adults deem inappropriate, and it's been that way for decades. Remember Ren and Stimpy? There's a lot in that that would make today's parents really uncomfortable. And nowadays, kids' movies (like Shrek) will usually have some inappropriate humor encoded in it to keep the adults happy. But it's all just entertainment and the themes are all familiar. You can kind of imagine how it played out. The writer of a kids' show had a slightly-edgier-than-usual idea for a gag and thought it would fly under the company's radar, and happened to be right that time.
Maybe I'm just naive about the state of video technology or the depravity of ordinary people, but I can't possibly imagine the type of mind that dreams this stuff up or the company that provides the resources and manpower for such a project. I get that they're done to farm clicks from 2-year-olds, but that doesn't explain the content of the videos. Stuff like Elsa becoming pregnant and receiving an abortion via Spiderman injecting her with a giant hypodermic needle, or Minnie Mouse blacking out after her friend spikes her cocktail with pills and waking up chained to a bed? The cruelty and inhumanity depicted in these videos using children's characters go way, way beyond the realm of goofy pranks and immature comedy. They indicate an intimate familiarity with the criminal world.
Elephant in the room: Peppa Pig is not copyright-free content.
There is no risk at all in putting your kid in front of a playlist of Peppa Pig episodes on Netflix or another legal video platform.
But YouTube can't efficiently curate or filter content like Peppa Pig, because the uploads aren't legal and the only action the law allows is removal.
And that's their main problem: they actually need these infringing kids' cartoons to be posted, because that's what people who put kids in front of YouTube are seeking. And Alphabet has paying customers asking to place their ads in front of kids...
One of the greatest successes of companies like Google is transferring all of the negative externalities of their systems onto the public using some fiction about technology “going awry” of its own accord.
As a parent who has been in the IT world for the past 15 years (with a background in networks, servers, etc.) I find it difficult to reliably block YouTube from my pre-teen kids. I can delete it from their devices, I can set parental controls, etc., but somehow they were always able to find some way to watch it. As a last resort I finally had to tie all smart TVs and devices to a local DNS server in my home and filter all in/out traffic. Most parents don't have the slightest clue how to do something like this, and I see my kids' friends with pretty much open access to the internet and YouTube, which is very disconcerting.
Plus I've seen some of the "kid friendly" crap on YouTube, like catching my kids watching adults giving product reviews of toys (basically marketing to them about why they would want the toy) that is marked as kid safe. It's hours and hours of "let's look at this toy, let's talk about what it does and why you 'need' it". It's utterly absurd and a form of predatory marketing.
I know there are some positive things about YouTube, like being able to look up a quick instructional video on how to do something, but on the whole I'd rather see YouTube completely shut down.
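For other parents going the same route, the DNS part can be as simple as a few lines of dnsmasq config on whatever box the devices use as their resolver (the domain list here is illustrative, not exhaustive; YouTube serves content from more domains than these):

```
# /etc/dnsmasq.conf -- answer YouTube lookups with an unroutable address
# so any device pointed at this resolver cannot load the site.
address=/youtube.com/0.0.0.0
address=/googlevideo.com/0.0.0.0
address=/ytimg.com/0.0.0.0
```

Blocking at the resolver also catches the apps baked into smart TVs, which per-device parental controls usually miss.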
Shutdown for who? I don’t think anyone wants to see YouTube actually go away entirely. I agree that better filtering of advertising to kids is needed though.
>Whether these videos are deliberately malicious, “merely” trolling, or the emergent effect of complex systems, isn’t the point.
That's an extraordinarily naive statement and it's baffling that the author even questions the obvious intent and, worse, reaches the conclusion that intent isn't important here. He then goes on to say the problem is with the platforms, algorithms, etc. While these have their issues, I think he overweights them by a good bit, and they are secondary to this particular subject. That is, if the algorithms weren't being abused, then they would be just fine for their intended purpose. The problem here is that the content they are working on is problematic to say the least.
Specifically, someone is clearly and deliberately targeting children with disturbing videos that promote extreme violence, associated with themes and characters the children believe to be benevolent and loving. He then completely acknowledges this in the following nugget:
>Previously happy and well-adjusted children became frightened of the dark, prone to fits of crying, or displayed violent behaviour and talked about self-harm – all classic symptoms of abuse.
So, the harm is specific and clear. How can he then dismiss this as possible happenstance of secondary importance? We also live in a culture where people are being radicalized online and children are now being murdered in schools en masse, frequently by other (slightly older) children. Does the author not see a thread here?
Now, broaden the context further to consider the armies of troll-bots, etc. on our platforms that we know are being weaponized against us to incite anger, division, etc. There can be no other conclusion except that we are under a sustained attack to divide and destroy our society, and now children are being included on the target list.
This is not some randomly generated content to game platform algorithms for ad revenue that just happens to be disturbing. It is specifically effective in damaging children and is rooted in a very particular line of psychological attack.
Decent job of pointing out that many similar problems exist in different areas. They could also have mentioned Microsoft's attempt at a self-teaching chatbot that went full fascist in a surprisingly short amount of time.
Much of it illustrates how surprisingly difficult it is to get algorithms to replace human judgement, and how easy it is to think you've done it when you haven't. We look at instances of humans making poor judgements and tend to think, "an algorithm could improve on that". It turns out human judgement does better than you'd think, compared to the task of automating it in an algorithm. Self-driving cars, I am looking at you.
Having watched a few of these videos, it's not that they have a message; they're a contentless stream of semi-dramatic events (many of them violent). Something that would only entertain a 2-year-old. But naturally, they are aimed at 2-year-olds (or thereabouts).
This is also a natural result of parents using video as a babysitter for kids too young to really have any judgment of their own. So we have parents who won't exercise judgment, kids who can't, and YouTube, which is expected to do so.
This goes naturally with the recent item about large corporations discriminating against pregnant women. If society isn't offering parents any extra time to take care of their kids, parents fall back on automated methods. We're finding downsides to such automation, but that doesn't stop it, given that society isn't offering any time for alternatives.
No, humans don't do better. What you probably mean is that you're personally good at applying your own judgement, which is arbitrarily based on your own culture and experiences, regardless of whether that's good or bad. But would you trust all other humans with that task? ISIS was composed of humans, but they taught schoolchildren to kill people. I grew up being read the Bible at school and had to hear stories about entrails being spilled on the ground, hands being cut off, and a father attempting to murder his son in a fit of mental illness. Humans told me those stories. These complaints are just parents being parents and hunting for anything to worry about for their children. The anecdotes about being afraid of the dark, crying and violence obviously cannot be attributed to some particular thing they watched. Even scientists can't work out the causes of child behavior with much reliability.
>Previously happy and well-adjusted children became frightened of the dark, prone to fits of crying, or displayed violent behaviour and talked about self-harm
I'm all for keeping kids away from mature content, and I think parents should monitor their children online... but I thought kids were more resilient than this. I don't have kids, but it's hard for me to imagine them displaying violence and self-harm because of some disturbing video.
You’re just underestimating how disturbing these are, even for adults. Violence, rape, cannibalism, gore, in the creepiest form possible, involving either your favourite cartoon characters or adults dressed in ragged Spider-Man costumes.
Lego Universe, from memory, had "content moderators" constantly looking at the worlds players created in order to remove inappropriate content (as LEGO is aimed at children), https://www.geek.com/games/penis-detection-derailed-the-lego... which in the end contributed to it being shut down.
jacquesm | 7 years ago | reply
It's frustrating, but it's reality.
sangnoir | 7 years ago | reply
It was a lesson Microsoft learnt the hard way, a decade-plus later, with its AI Twitter bot, Tay [1].
1. https://www.theverge.com/2016/3/24/11297050/tay-microsoft-ch...
hbosch | 7 years ago | reply
* https://youtu.be/bJ5ppf0po3k
p3llin0r3 | 7 years ago | reply
Belief in a flat earth is a dog-whistle for fundamentalist Christians.
So yeah, they don't care that we can observe the Earth is round, or about any other proof; their ignorance is proof of their faith.
prepend | 7 years ago | reply
Google just needs to give up and partner with PBS or something to make kids' videos a charity.
schappim | 7 years ago | reply
1. http://peppapig.wikia.com/wiki/Gabriella_Goat
sremani | 7 years ago | reply
Step 2: Install PBS Kids and Duck Duck Moose.
Do not let any un-curated automagic near kids. It's not worth the headache!
JetSpiegel | 7 years ago | reply
This works with many companies.
sireat | 7 years ago | reply
I remember there was a third-party website of human-curated YouTube videos for kids some 4-5 years ago: Kideos.
It worked really well at the time (for my first kid) but the site seems to be broken now.
ricardobeat | 7 years ago | reply
EDIT: a few samples here https://www.reddit.com/r/ElsaGate/comments/6o6baf/what_is_el...