The situation might not be much better on other services of that scale (YouTube, image hosters, ...), but what really makes the difference with FB for me is that it reminded me of a tweet from a FB employee who claimed that the amount of good that Facebook does (might have been 'can do'?) is unlimited. Unlimited, ffs? I can only wonder how self-perception and reality can diverge so enormously.
(My eternal gratitude to the one who can dig up the tweet. All my efforts were futile so far.)
> They also said others were pushed towards the far right by the amount of hate speech and fake news they read every day.
There's something wrong with mainstream reporting if mere exposure to social media turns people far right. It really strikes me that people are trapped in some pretty strong filter bubbles, to the point that mere exposure is enough to change political belief.
Spend a week on a far-right community and you'll be shown more stats that point to a far-right conclusion than you can critically evaluate. In any internet discussion of police racism, for instance, FBI crime stats will be mentioned in a heartbeat, but I don't think I've seen a mainstream journo bring them up once. Social media and mainstream media fundamentally follow different schemas of information, simply because even bringing up certain data can cause a mainstream journo reputational damage.
This is also causing an inverse filter bubble, where hateful ideas which actually have refutations don't get refuted because people refuse to discuss the ideas on principle. Much of the data cited is crap and many of the interpretations are crap, but they're not meaningfully contested.
> There's something wrong with mainstream reporting if mere exposure to social media turns people far right. It really strikes me that people are trapped in some pretty strong filter bubbles, to the point that mere exposure is enough to change political belief.
A different conclusion to draw from this is that far-right interests are responsible for the majority of the objectionable content on social media. One might further suggest that said content is deliberate propaganda, designed to push people to the right, and that this is a central pillar of their strategy that isn't shared to the same degree or extreme by other political factions.
This isn't "mere exposure". I haven't read the article so please correct me if I'm wrong but this is a job, a place they go to sit every day to be bombarded with this crap. To some extent they have to sit and let it wash over them—I don't imagine the people doing these jobs have much career mobility. IMO it's not realistic to suggest that if they were just better-informed, they wouldn't suffer these effects. The mind is not an inviolable fortress—no matter how strong you think your defenses are, they can be worn down.
> In a 2015 study, researchers discovered that familiarity can overpower rationality and that repetitively hearing that a certain fact is wrong can affect the hearer's beliefs.
I think it's more that there are personality types that really want a community to feel like they belong to, and are thus easily persuadable. Social media was a huge factor in radicalizing people to Islamic terrorism, and that ended when the networks started censoring that stuff.
I honestly would be surprised if any propaganda (no matter the subject matter) left no impression when applied with that intensity.
I see what you're saying - when mainstream outlets consistently hide swaths of current events, statistics, and long-form analysis from their front pages or even the back pages out of fear of giving airtime to "Republican talking points", the result is that independents and liberals who are ignorant of that information are susceptible to the spin of the first person who presents it - whether that person is a moderate or an extremist.
I recently met a guy in Berlin who is employed by a company that is a FB sub-contractor for content screening. When I met him, he had been on sick leave for 2 weeks, because of the awful working conditions and the stuff he had to go through daily.
He was obviously looking for a new position and, I can attest, he looked "damaged".
One idea: to cope with all this, reduce the flagged posts presented by the algorithm to maybe 2-3 hours a day. For the remaining 5 hours, the algorithm could show them chats, pictures and videos that are harmless and uplifting. This might create a counterbalance, showing that there is still something good in social media.
I think that's how RL works in most countries. You are not running through the streets getting slapped with crime and the like all the time.
Of course, this also means you need more people to deal with the same amount of content as you do today.
Moderator1: "Hey man how is it going today?"
Moderator2: "Pretty good, it's kitten day."
I appreciate the idea (and I've personally administered a dose of /r/eyebleach on lesser occasions), but institutionalizing that sounds to me like something out of a dystopian future story, not a decent solution.
Maybe a better solution starts with reducing the scale of the problem, and includes measures like not tolerating so many fake accounts, being willing to lose real users permanently when you ban them, and cultivating better culture.
And at least pay the moderators better for their trauma, rather than exploit people who have no better options. New-grad programmers get six figures, to entice them to work for a 'social media' company they know is a bit sketchy, but some of the people who bear some of the darkest side of the business get paid peanuts.
Alternatively, one could change the business and infrastructure models, to distributed and interoperating common carriers (away from trying to snoop on, and manipulate, things people say, hear, and think). ISPs and protocols and programs, like we briefly had. But that requires no one becoming a billionaire by grabbing power over people.
So you're paying them for 2-3 hours of real work a day, and then another 5 hours of fake work? Why not just pay them only for the real work and give them the rest of the time off?
I spent most of last Friday keeping an eye on 4chan's /pol/ after finding out that morning that users were planning an attack on my job.
Even just looking for one day, it took a serious emotional toll on me. I've definitely seen some awful things on the Internet but the constant bombardment of hate speech, racism, anti-Semitism, and all sorts of disturbing images and text over the course of 6 hours made me feel physically sick a number of times, and I had to take extra care to rest the next day.
This is anecdotal of course, but I can't imagine what the Facebook moderators go through having to process at least one ticket a minute.
There are many aspects to this, but it seems that the strongest and most lasting effects come from visual content (especially raw content, violence...) rather than text (even though text can be violent).
Which is why automated image/video moderation solutions (such as Google Cloud Vision, Amazon Rekognition, Sightengine.com, Hive) will continue to grow. Not only because they are cheaper/faster, but because they become a necessity. Or at least as a first filter to weed out the "worst" content.
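To make that "first filter" idea concrete, here is a minimal pre-screening sketch against Amazon Rekognition's image moderation API via boto3 (one of the services named above); the thresholds and routing labels are invented for illustration, and tuning them is the hard part:

    # Let the model auto-remove what it is very confident about, publish
    # what it finds nothing in, and send only the ambiguous middle to
    # human reviewers.
    import boto3

    rekognition = boto3.client("rekognition")

    AUTO_REMOVE_THRESHOLD = 95.0   # invented thresholds, illustration only
    HUMAN_REVIEW_THRESHOLD = 50.0

    def triage_image(image_bytes: bytes) -> str:
        """Return 'auto_remove', 'human_review' or 'publish' for one image."""
        response = rekognition.detect_moderation_labels(
            Image={"Bytes": image_bytes},
            MinConfidence=HUMAN_REVIEW_THRESHOLD,
        )
        labels = response["ModerationLabels"]
        if not labels:
            return "publish"
        if max(label["Confidence"] for label in labels) >= AUTO_REMOVE_THRESHOLD:
            return "auto_remove"
        return "human_review"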
At a $previousJob I had some tangential contact with professionals who track child pornography, trying to identify and free the kids (people involved in catching https://en.wikipedia.org/wiki/Christopher_Paul_Neil). They felt that automation was of little help for what they were doing, and that every image had to be looked at by at least one human (most of the images by more than one). They had a few tricks (apparently looking at the image in B/W helped lessen the trauma; a sketch of that follows this comment) but they did not find value in the automated tools we tried to build to help them.
Now, they felt much more empowered than Facebook's moderators: they kept going because the goal was to stick cuffs on the wrists of the guys who were doing this and get those kids away from them, and they could put up with all of the rest for that goal. They were treated as rockstars by the rest of the people they interacted with, because they were the ones who got kids away from the predators. They had frequent opportunities to take breaks and could set their own schedule, with only the guilt to drive them: the longer they delayed, the more time the kids spent in the predators' hands.
Ultimately, feeling empowered to make a difference in the world is key, and if Facebook treated screening as an important job and gave their moderators more power to set their own working conditions I suspect that it would improve their mental health by quite a bit.
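The B/W trick mentioned above is trivial to automate as a pre-processing step. A rough sketch with Pillow, assuming flagged images arrive as files (the paths and the size cap are made up):

    # Convert a flagged image to grayscale and shrink it before a human
    # reviewer ever sees it, to blunt the visceral impact.
    from PIL import Image

    def soften_for_review(src_path: str, dst_path: str, max_side: int = 512) -> None:
        img = Image.open(src_path)
        img = img.convert("L")               # drop colour information
        img.thumbnail((max_side, max_side))  # reduce size and detail too
        img.save(dst_path)

    soften_for_review("flagged/upload_0001.jpg", "review/upload_0001_bw.jpg")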
They told us that robots would save humans from doing dangerous work in hostile environments. Who knew the danger and hostility would be entirely psychological!?
We just saw Tumblr try that and discover that trying to automate it can destroy your platform.
The problem is that context is even more important in visual content than in textual content, and we still don’t have any algorithms that can parse context as successfully as humans can.
That might soften the blow to the screeners.
To expose people to this stuff continuously seems wrong.
Then again, so does exposing everyone to it, and it would probably kill the service if it wasn't dealt with by someone.
Another concern is finding people who can handle these situations in a healthy way; they might be few and far between, and generally the folks exposed to it are hired into low-pay, outsourced, warm-bodies-in-chairs kinds of situations.
What if the answer is to kill social-media-as-we-know-it?
There are some sick and "extremist" people in this world. If their stage were relegated to email threads I imagine we would not be having this discussion.
But I think what bothers me the most, and this is alluded to in the article, is not that some extremist is posting crap, no surprise there. It's that seemingly once-normal relatives of mine are consuming this crap and then reposting it — becoming the extremists themselves.
Keep things small and community-moderated. Keeping social networks self-moderated in a small community, say a city, would quickly ostracise people sharing this sort of thing.
Yet another reason to stop trying to have a central authority, in this case Facebook moderators, police speech. It can't be done effectively or without major side effects. Let people filter on their own, we all do every day in the real world and it works just fine.
You are a lot less likely to run into violence, gore, exploited people, etc. in person. The answer to "human moderators get hurt by the constant stream of truly awful stuff" is not "let the stream of truly awful stuff just go straight to everyone".
It will not really work for the use case of private chats, such as:
> We have rich white men from Europe, from the US, writing to children from the Philippines … they try to get sexual photos in exchange for $10 or $20.
Is your argument that any content platform that allows posting by the general public (vs say, employees) should be obligated to carry all the content that is posted?
Because that's where you end up if you don't want any filtering.
The next step after that is that no one allows posting by the general public anymore.
Be careful what you wish for.
It is way cheaper to flood your internet content sources with beheading videos or videos of live kittens being tossed off highway overpasses than it is to do the same “in the real world”. The internet would quickly devolve to 8chan...
What strikes me about this issue as a whole is what it says about the true state of "AI". This is a perfect job for such technologies. I mean, how is it that we're already making 'deep fake' videos and audio but can't feed a video stream, which is just a stream of images, to an algorithm that can determine if it's inappropriate? I recognize that some such tech is being utilized on the front end in this case, and that the problem is non-trivial, but I see this as FB saying 'good enough' and not pushing as hard as they could to improve the tech to where it can be trusted to make the decision. I sense that they may be telling themselves they're doing social good by 'creating jobs'. Why must humans be subjected to this torture? What happened to "move fast and break things"? Why not put the algorithms out front and let them have the final say, and let them learn and improve quickly? I suppose just because meat is cheaper than chips.
> I recognize that some such tech is being utilized on the front end in this case, and that the problem is non-trivial, but I see this as FB saying 'good enough' and not pushing as hard as they could to improve the tech to where it can be trusted to make the decision. I sense that they may be telling themselves they're doing social good by 'creating jobs'.
This seems like a weird take to me. Why would this be your conclusion rather than that the technology isn't good enough yet?
If you think about it, you have a large group in society who spend 8 hours a day watching content so controversial it won't even reach the rest of us. Just like any other company, people eventually quit, and new people join. Now if you were the bad guy wanting to nudge a % of society in a specific political direction, wouldn't this moderator group be a perfect target? Just bombard [insert social media platform] with propaganda content and you _will_ reach this group of people even if the content never appears on the platform. What a messed up situation.
Not really feasible; a group of a couple thousand converts is useless if they aren't targeted at a specific region, and with Facebook's love of outsourcing you can't even be sure they are in the same country.
Maybe you need people who are just numb to it? It seems like the kind of gig that would attract the kinds of people who like seeing it. I know that's morbid to think about, but look at subreddits like /r/watchpeopledie (RIP), /r/enoughinternet; /r/morbidreality is still around, luckily. You get people who don't mind the gore and terrible things, pay them to sift through it, and see what the outcome is there?
I'm fairly sure the content FB mods have to sift through makes those subreddits seem like a walk in the park. Even I have seen some horrible things back in the earlier days of the internet that make those look tame.
There is no upper bound on repulsiveness for the things FB mods watch. At least with Reddit there are the sitewide rules.
Back in 1999 when I had AOL 4.0 at the age of 16, I would frequent www.goregallery.com and other various gore-related websites that showcased real crime & accident scenes from around the world. Still to this day, I am fascinated by that kind of content. I don't really seek it out like I did as a teenager, but it excites me nonetheless.
The solution here may be for all of us to flag posts that we know are benign, like puppies or birthday greetings. This would reduce the percentage of disturbing content moderators are forced to see, and costs us nothing at all.
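A quick back-of-envelope check on that suggestion (all numbers invented): the disturbing share of the queue does drop, but the total review volume, and with it the headcount concern raised earlier in the thread, grows by the same factor:

    # Flooding the queue with benign flags dilutes the disturbing share
    # but multiplies the total amount of content to review.
    disturbing_per_day = 1_000   # invented: disturbing flags per day
    benign_per_day = 4_000       # invented: benign flags added by users

    total = disturbing_per_day + benign_per_day
    print(f"disturbing share of queue: {disturbing_per_day / total:.0%}")  # 20%
    print(f"review workload multiplier: {total / disturbing_per_day:.1f}x")  # 5.0x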
Just to be clear: There are no statistics in this article, just some anecdotes selected to push a particular narrative.
I don't claim to know whether working as a FB moderator is 'catastrophic' or not, but this article makes an emotional case for it rather than a rational case for it. It reminds me of reporting around the Foxconn suicides - sure, each is a tragedy, but it turns out the Foxconn suicide rate matches the national rate.
Yes it is an emotional argument. I used to work somewhere with content moderation. From my conversations with people on that team, the effects in this article are spot on, and I would argue are damaging. Maybe not catastrophic but journalists gotta make bank too.
Think of the disgust or shock you might feel if you saw, say an adult flirting with a kid, or if that’s not graphic enough, having sex. Now imagine you need to see that every hour or so for your job, 5 days a week, forever. There’s no end, there’s no stopping people.
At best you lose some of your humanity and innocence - nothing really shocks you any more. At worst it causes some form of PTSD or you even sympathize with what you are seeing. I can see how that could be less painful in short term than continuing to be shocked and depressed with things you don’t agree with.
What statistics would you gather to prove this? Not everything in life is cheap and easy to quantify.
Some things are just stupefyingly obvious. For example, constantly looking at videos of puppies getting microwaved or people getting beheaded in front of their kids for 8 hours a day, 5 days a week just might really fuck with you.
Like, what data do you want to back that up?
A particular narrative? Isn't it common sense that watching extreme graphic violence all day every day will lead to a lot of psychological issues for a significant percentage of people? This article about FB moderators in the U.S. paints a more detailed picture: https://www.theverge.com/2019/6/19/18681845/facebook-moderat...
"There are no statistics in this article, just some anecdotes selected to push a particular narrative."
Please don't reduce people to data like this. Please don't automatically assume bad faith on the part of the authors of the article. Consider that there are actual people involved here, reporting their experiences. Don't make it into some kind of social-justice battleground or science experiment; just consider their perspective without instant judgment.
https://theotherlifenow.com/depressive-capitalist-realism/
The very fact that Facebook outsources its censorship to contractors underlines the precarity of a workforce that is considered both disposable in the short term and irrelevant in the long term.
Ultimately Facebook wishes to replace this workforce with AI automation where possible, and then heteromate the remaining human work by enticing users of the platform to inform on each other over posts that don't abide by the platform's implicit social norms or opaque moderation rules.
There seems to be a fundamental hypocrisy in a legal system that considers some material so fundamentally corrupting that it must be illegal for people to possess or see it.
If this were truly the case, it should also be illegal to compel someone to look at this same material as part of their employment.
And from what I've seen they are quite happy to receive tips about abusers from the public and businesses that care enough to review their content. For society to work some of us have to step forward and agree to do the dirty work.
That some people are doing this for a living without proper counseling and guidance is another matter, you can lay that at Facebook's door. The few times that I've been exposed to that crap was enough to make me shut down some services.
In fact, I think that any business that deals in user generated content should assume the cost of business that goes with that and have a system of flagging and escalation in place.