top | item 39049799

Meta documents show 100k children sexually harassed daily on its platforms

171 points | hackernj | 2 years ago | theguardian.com | reply

145 comments

[+] cat_plus_plus|2 years ago|reply
Hey, nobody wants even a single child being harassed. There is lots of harassment in real life too, and it can happen on any street corner. But most people would agree that recording everything on every street corner 24/7 is not acceptable in a free society. As more and more of our lives have an online component, online services also cannot prevent 100% of crime and lesser unpleasantness and still remain powerful enough to fulfill user needs and enable the running of a free society.

There are both unique challenges and unique opportunities online. Child accounts can be more locked down. But children can lie and use parent's id/phone/etc to create adult accounts. Plus government regulation can either discourage a service from offering child accounts in the first place or make these too restrictive to be useful, forcing everyone to lie.

Ultimately parents and perhaps schools have to educate children on realistic dangers they can face both online and on street corners and supervise them until they are ready to responsibly use these spaces alone.

[+] raincole|2 years ago|reply
> But most people would agree that recording everything on every street corner 24/7 is not acceptable in a free society.

Do we?

No, this is a serious question. London seems to have 20,873 CCTV cameras on its streets[1]. If people don't protest against 20k CCTVs, do you really believe they'll protest against 30k, 50k or 70k? 70k is roughly the point at which, on average, every street has one CCTV. That's still not one per corner, but in practice it's more than enough to track everyone's path.

Slippery slope? Yes. Are we going to slide down? Probably.

[1]: https://clarionuk.com/resources/how-many-cctv-cameras-are-in...

[+] cj|2 years ago|reply
I dislike the "it can happen on any street corner" comparison. It's apples and oranges.

The cost of harassing someone online, especially anonymously (even on HN) is virtually zero.

In real life, in person, the cost of harassing someone is many times higher. It's way harder to do so anonymously in person, it's way harder to do it without other people noticing in person, and it has to happen synchronously in person compared to asynchronously online. I could go on... it's just not the same thing.

> Ultimately parents and perhaps schools have to educate children on realistic dangers they can face both online and on street corners and supervise them until they are ready to responsibly use these spaces alone.

For the reasons I mentioned above, neither parents nor schools (on average) are equipped to mitigate online harassment because it's a relatively new attack vector on children. The average parent of a teenager (or the average school administrator, age 40-60) didn't grow up experiencing online harassment in their teenage years.

The danger I see in your comment is assuming that online and offline harassment share the same dynamic. The simple fact is online vs. offline harassment is wildly different in nearly every way. If you were harassed in real life, that doesn't automatically mean you understand the dynamics of online harassment.

Edit: Also feel compelled to say I'm not biased in any way. I didn't suffer from much harassment online or offline, and I also don't have kids. Comparing the two is like saying a Zoom date and a coffee date are basically the same, or that "dinner with grandma" is the same in person as via FaceTime. It's apples and oranges; let's not pretend they're comparable.

[+] godelski|2 years ago|reply
I think what bugs me most is that I've yet to see convincing evidence that any of these surveillance systems even help prevent things like child abuse and terrorism. There's even pretty good evidence that these systems enable them. We like to think child abuse comes from strangers, but usually it's committed by someone the child knows, often a relative. How many parents monitor their child's every move? For short-term rewards you have ensured that you fail as a parent: teaching your child to be dependent and ensuring they are unable to navigate the dangers and complexities of the world without you. The road to hell is indeed paved with good intentions, and I completely understand the desire, but sometimes the kid has to burn their hand on the stove (hopefully only a little).

In addition to this, it seems that when we do use these tools to go after people, we end up just going after the low-hanging fruit. It's the same reason the drug war has been a failure: instead of going after manufacturers and distributors, we go after users. It's easier, and we create incentive structures and metrics that make these the best path to optimization. I am absolutely okay with introducing a little friction. I'm far less concerned with someone looking at child abuse material than I am about the people creating and distributing it. Both are bad, but clearly one is much worse and should be prioritized. If the worse group isn't prioritized, then it isn't security, it's theater.

[+] hammyhavoc|2 years ago|reply
I live in the UK, a "free society", and we have CCTV everywhere. It didn't stop me being sexually abused as a child; it's not there to catch pedophiles, because pedophiles generally aren't like muggers. I encountered pedophiles at all three schools I went to—you put far too much trust in authority figures like teachers. We got educated by the same people who sexually abused us.

Where there's people, there's problems. From fraud to racism to pedophilia, and everything in-between.

Instead of worrying about children lying to access services for adults, worry about the problem in a non-technological way. Maybe we need greater deterrents in terms of legal repercussions? Maybe we can even help these people figure out what makes them this way?

[+] thelock85|2 years ago|reply
> Ultimately parents and perhaps schools have to educate children on realistic dangers they can face both online and on street corners and supervise them until they are ready to responsibly use these spaces alone.

Agreed but I believe this is maybe 50% of the effective solution.

Ideologically I’d like to say 99% but the defense (educating children) is always reacting to the offense (predator tactics).

To affect the offense, the rules must change and that power rests with Meta (and to a lesser extent, one’s ability to ignore Meta products).

[+] lolinder|2 years ago|reply
> Child accounts can be more locked down. But children can lie and use parent's id/phone/etc to create adult accounts.

Teenagers can also use a fake ID to buy alcohol, but we don't use that as an excuse for not having laws preventing children from buying alcohol.

I'm reminded of patio11's The optimal amount of fraud is non-zero [0]. It is worthwhile to have laws that are designed to substantially reduce the amount of harm done. It is also worthwhile to be aware that there is a threshold beyond which those laws would do more harm than good.

However, just because we can imagine laws beyond that threshold does not mean that our current laws are optimal. There's a delicate balance to be struck, but from the evidence that I've seen so far I'd say our laws regarding social media and children are way too far on the permissive side.

[0] https://www.bitsaboutmoney.com/archive/optimal-amount-of-fra...

[+] pyuser583|2 years ago|reply
We need to seriously rethink how children’s safety is handled online.

The scariest moment in my parenting career was when my eldest daughter turned 13. Suddenly all of her kids' accounts became grownup accounts.

This happens overnight with no in-between phase.

My main concern isn't even controlling what she sees so much as knowing what she sees.

If I could set her browsing history to “non-private, no-delete” I’d be happy as a clam.

[+] InCityDreams|2 years ago|reply
>But most people would agree that recording everything on every street corner 24/7 is not acceptable in a free society.

Uk government joins the chat.

[+] matheusmoreira|2 years ago|reply
> But most people would agree that recording everything on every street corner 24/7 is not acceptable in a free society.

I admire your optimism. A huge number of cameras just popped up out of nowhere in my city last year. People are actually excited about it. When I tried to say something, they treated me like one of those crazies that escaped the mental hospital.

[+] karaterobot|2 years ago|reply
What's the base rate to compare this against? For other social networks, for other websites, for society as a whole? My general attitude about Meta is that it's cool to point out any flaw that might get people mad at them, but I just want to know if there is any reason to think this is a special problem they have. And even though it's a double standard, I do get my hackles up when they start testing the waters to see how effective it is to imply that end-to-end encryption causes child abuse.
[+] WesternWind|2 years ago|reply
I think the issue is that they could have taken steps to lower their rate, but didn't. There are steps even privacy-conscious companies can take.

It doesn't even have to involve E2E messaging. They could check people's birthdays, train an LLM to differentiate appropriate vs. inappropriate comments on children's Instagram posts, and use that to help surface problematic comments. They could show a popup, triggered by client-side JavaScript matching against a hashed list of words commonly used by groomers, asking the child whether to report the conversation. They could warn kids whenever they upload images to share directly in DMs rather than publicly on their Insta or FB.

In a perfect world, the rate of grooming would be zero, but at Meta's scale, just creating even a little more friction that makes it harder to groom kids should be seen as worthwhile.

[+] Animats|2 years ago|reply
What's the definition of "sexually harassed"? Does it include being spammed with some form of sexual content? That would inflate the numbers.
[+] robcohen|2 years ago|reply
You see the same problem with universities, the military, the peace corps, etc.

People have an agenda to push and they want to construct a simple narrative that supports that agenda. Simple as. Thanks for taking the time to point out that the narrative may be flawed, we need more people to do that.

[+] boomboomsubban|2 years ago|reply
It's bizarre to end this by denouncing encryption. "There's been this long lasting problem on Facebook, but a large part of it is something introduced last month."
[+] standardUser|2 years ago|reply
> Child safety experts, policymakers and law enforcement have argued encryption obstructs efforts to rescue child sex-trafficking victims and the prosecution of predators. Privacy advocates praised the decision for shielding users from surveillance by governments and law enforcement.

Seriously. They're practically mocking "privacy advocates" as criminals trying to evade law enforcement, whereas the other side of the argument is supported by a veritable gathering of paragons. I did not expect this kind of naked emotional appeal from The Guardian, but maybe that's my fault. And note that this is not an op-ed.

Edit: I meant op-ed, not editorial.

[+] matheusmoreira|2 years ago|reply
It's a propaganda piece using children as political weapons in an effort to weaken encryption worldwide. Would be bizarre if encryption denouncement was not present. If you see anyone using children as an argument, they're arguing in bad faith. It's worse than Godwin's law.
[+] everdrive|2 years ago|reply
If only people were just as serious about denouncing social media platforms. Connecting everyone has done much more harm than encrypted comms. But for some reason, only encryption falls under scrutiny.
[+] mortallywounded|2 years ago|reply
Let's face it-- there's a war going on against encryption by governments all around the world. Shifting public opinion is part of that war campaign.

"Think of the children!" is a classic argument when you have an agenda to push. This isn't anything new.

[+] lannisterstark|2 years ago|reply
This reads like a propaganda piece for anti-encryption mouthpieces.

"But think of the children" that ends with "oh noes the encryption."

[+] 2OEH8eoCRo0|2 years ago|reply
> an internal 2017 email describes executive opposition to scanning Facebook Messenger for “harmful content” because it would place the service “at a competitive disadvantage vs other apps who might offer more privacy”, the lawsuit states.
[+] renewiltord|2 years ago|reply
Amusing. Facebook trying to protect users' privacy. Newspapers killing them for supporting E2EE without CSAM scanning on the server. Boy if that isn't the collision of two hot-button issues. I wonder who will win.
[+] estebarb|2 years ago|reply
I don't understand why they claim that scanning messages erodes privacy: it can be done on the recipient's side, entirely client-side. It could be as simple as an image classifier: "hey, this photo may contain unwanted content (we may be wrong), do you want to see it? Yes, No, Report, Report & Block".
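That flow is simple to sketch. A hypothetical on-device classifier scores each incoming image, and anything above a threshold is hidden behind a prompt instead of being sent anywhere; `classifyImage` and the threshold are stand-ins, not a real API.

```javascript
// Probability threshold above which an image is hidden behind a prompt.
const THRESHOLD = 0.8;

// classifyImage is a stand-in for an on-device model returning a
// probability (0..1) that the image contains unwanted content.
function handleIncomingImage(image, classifyImage) {
  const score = classifyImage(image);
  if (score < THRESHOLD) return { action: "show" };
  // Nothing leaves the device; the recipient decides what happens next.
  return {
    action: "prompt",
    message: "This photo may contain unwanted content (we may be wrong). Do you want to see it?",
    options: ["Yes", "No", "Report", "Report & Block"],
  };
}
```

The point of the sketch is that classification and E2E encryption are compatible: the ciphertext is decrypted normally, and only the recipient's own client looks at the result.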

Unfortunately, Facebook does an awful job of blocking harmful content: I have reported posts containing obvious phishing, hate speech or even murder threats and... they do nothing. But if you write "tinta negra HP" (HP black ink) they ban you immediately.

[+] rijx|2 years ago|reply
How can anyone believe law enforcement would catch more criminals by having more data if they’re understaffed already?
[+] BLKNSLVR|2 years ago|reply
They're understaffed because they cut headcount assuming that more data will help them to catch more 'bad guys' with fewer resources.

Part of the problem is that they don't employ enough people smart enough to use the data in mathematically and statistically accurate ways.

They only look at 'bad', and don't seem to have any realisation that there could be an offset for 'good', or at least 'not bad'. I'm strongly biased in this opinion, having personally been on the receiving end of police misinterpretation of data.

I was specifically told the following (by a Lead Investigator, not just a plod):

- Use of Mega is suspicious

- Having virtual machines is suspicious

- Having tor on your computer (their wording) is suspicious

Top of their fucking class they must have been.

[+] hackerlight|2 years ago|reply
Because whoever works there can use that data to more effectively and efficiently carry out their work. How is this even a serious question? Not everything is about headcount.
[+] belorn|2 years ago|reply
On which platforms does Meta allow children to create accounts? To my understanding, they stopped allowing children on their platforms, since social networks can cause major harm to children through things like bullying, and since children can't consent to the data-collection model that Meta uses.
[+] doublepg23|2 years ago|reply
You can add barriers and policies, but I've been astonished at what parents will work around to get their children onto these platforms (I recall a family member using their ID to allow their 11yr old child on Discord).
[+] clipsy|2 years ago|reply
They have "Messenger Kids" targeted at the under-13 market with safeguards ostensibly in place, and as far as I know children 13 and over can create FB/IG accounts.
[+] cj|2 years ago|reply
> they stopped allowing children on their platforms

AFAIK, anyone can sign up for an account by simply changing their birth year during the sign up flow.

[+] bimguy|2 years ago|reply
It is mind-boggling that a high-level Apple employee would not use Apple's own parental controls to safeguard their child's online activity. The child isn't even supposed to be using the IG platform at twelve years old. I hope Meta would throw that back in Apple's face if they kicked up a stink.

Obviously this kind of harassment should not exist in the first place but parents who work in tech should be held more accountable for not protecting their kids imo.

[+] throwitaway222|2 years ago|reply
Perhaps this might be the first great use-case of AI - stopping people from doing inappropriate harassment..
[+] ivanjermakov|2 years ago|reply
You mean appropriate harassment exists?
[+] jimbob45|2 years ago|reply
A unique 100k each day or is there overlap? Also are these genuine messages or just wide-cast spam messages? Technically we all get sexually harassed daily if we include our GMail spam folders.
[+] dudeinjapan|2 years ago|reply
There should be a big banner at the top of every page "Surgeon General's Warning: This app contains a gratuitous amount of dicks." From that point forward it would be caveat emptor.
[+] Dylan16807|2 years ago|reply
So what percentage of the sexual harassment involves adults, and what percentage is grooming?

Tossing everything into a single number isn't very helpful.

[+] RecycledEle|2 years ago|reply
Very true.

I was teaching a 9th grade class a few years ago and realized that men from a certain demographic showing pics of their reproductive organs had traumatized the young females in my class to the point that most of them showed signs of PTSD. They gasped, closed their eyes, and/or recoiled at any image suddenly popping up on their computer screens or on the projector.

I pointed out that this was felony child sexual abuse, and that the perpetrators were easy to find. Law enforcement did not care.

[+] null0pointer|2 years ago|reply
Now do Discord.
[+] GaryNumanVevo|2 years ago|reply
As a parent, Discord is the number 1 threat to my children. I gave them the whole "stranger danger" talk, and I regularly check in to see if anyone is harassing them. My kids are honest, and my 14 year old has gotten multiple people asking for inappropriate photos in one of the Fortnite servers (not sure if it's an official one?).

I don't want to ostracize them, since they use it for playing games with their friends in private servers. Another parent I know specifically admins his own kids' server and only lets them communicate with their friends via that server.

[+] wredue|2 years ago|reply
Seriously though. People always comment about how Call of Duty voice chat was braving the depths of degeneracy, but those people have obviously never done random pick-up groups in gaming Discords.
[+] meepmorp|2 years ago|reply
I'm perhaps alone in this, but that's fewer than I would have expected.
[+] spencerchubb|2 years ago|reply
Me too, I expected more. Given 2 billion daily active users, that would mean 0.005% of users are children being sexually harassed each day.
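The arithmetic behind that percentage, assuming the round 2 billion DAU figure from the comment:

```javascript
// 100k harassed children per day against ~2 billion daily active users.
const harassedPerDay = 100_000;
const dailyActiveUsers = 2_000_000_000;
const pct = (harassedPerDay / dailyActiveUsers) * 100; // ≈ 0.005%
```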
[+] pokstad|2 years ago|reply
The real conspiracies are always in plain sight