top | item 16033895

The Worst Job in Technology: Staring at Human Depravity to Keep It Off Facebook

138 points | mef | 8 years ago | wsj.com

65 comments

[+] Waterluvian|8 years ago|reply
I found this juxtaposition unsettling:

“I was watching the content of deranged psychos in the woods somewhere who don’t have a conscience for the texture or feel of human connection,”

"...If the managers noticed a few minutes of inactivity, they would ping him on workplace messaging tool Slack to ask why he wasn’t working."

The texture of human connection is severely diminished when you are managing what is essentially a drip-fed trauma survivor remotely, using a metric of trauma exposure per minute.

[+] harryf|8 years ago|reply
Likewise this part...

> Whisper no longer employs U.S.-based moderators. It uses a team in the Philippines along with machine-learning technology.

Great that they moved the problem to a place with even fewer protections for workers.

The underlying issue is seeing content moderation purely as a cost that should be minimized. Taking YouTube Kids as an example, I'm sure you could sell parents products based around moderated content that educates their children, supports their morals and beliefs, etc.; i.e., find ways to generate enough income from moderated content to treat people fairly.

[+] quantummkv|8 years ago|reply
The real problem is not that such content exists; the problem is that Facebook makes it horrifically easy to distribute such content. Back when Facebook was more about interacting with people, rather than liking and subscribing to whatever these content and joke pages shoved down your feed, these concerns were not nearly as present. Clickbait etc. is just a natural progression of like-and-subscribe culture.

My Facebook feed about a year ago, when I last opened Facebook, was filled with the same memes and "inspirational messages" shoved down by a handful of pages and shared by dumb idiots. I had to search and go directly to a person's profile to see what they were up to.

The real problem is that Facebook started behaving like a TV channel rather than a social network.

If Facebook went back to only keeping profiles of real persons and removed any and all of these pages and blogs and news agencies and the lot, a lot of its woes with regard to content would be solved.

These pages give a sense of anonymity to the people behind them. Take that anonymity away. Once you know that your personal image will be directly tied to whatever you post and held responsible by everyone in your friend list, you will begin to curb your tendencies in public.

[+] Viliam1234|8 years ago|reply
I believe that the "one account = one real person" policy actually makes things worse.

Regardless of what Zuckerberg claims to believe, in real life I have multiple "profiles". For example, I don't debate Java programming with my family members, I don't debate politics or religion (or generally any opinions, because no one knows what will become a hot political topic tomorrow) with my colleagues, etc.

Similarly, I have different rules for reading stuff. I hate memes, but if my cousins post some, I am not going to block them like I would most other people; I will just sigh and scroll down. More importantly, I do not want to read stuff from all my social circles at the same time; sometimes I am in the mood for family stuff, sometimes programming, sometimes other things.

Even better would be to have advanced tools (but with a simple user interface) to specify what I want to read; e.g. "what my cousin wrote, but not what someone else wrote and my cousin shared", or filtering out posts containing certain words.
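Such per-source rules could be expressed as simple composable predicates. A minimal sketch of the idea (the `Post` fields, rule names, and sample data are all hypothetical, not any real Facebook API):

```python
from dataclasses import dataclass

@dataclass
class Post:
    author: str   # who wrote the original content
    sharer: str   # who put it in my feed (same as author if not a share)
    text: str

def wrote_not_shared(person):
    """Match posts this person wrote, but not posts they merely shared."""
    return lambda p: p.author == person and p.sharer == person

def without_words(words):
    """Hide posts containing any of the given words."""
    return lambda p: not any(w.lower() in p.text.lower() for w in words)

def feed(posts, rules):
    """Keep only posts that satisfy every active rule."""
    return [p for p in posts if all(rule(p) for rule in rules)]

posts = [
    Post("cousin", "cousin", "Family reunion photos!"),
    Post("memepage", "cousin", "Top 10 inspirational quotes"),
]
# "What my cousin wrote, but not what my cousin shared":
print(feed(posts, [wrote_not_shared("cousin")]))  # only the first post survives
```

Each rule is a plain function, so combining them ("my cousin's own posts, minus anything mentioning certain words") is just a list of predicates.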

Merely removing anonymity will simply divide users into two groups: those who care about not getting fired tomorrow, so they only post completely unobjectionable, politically correct vanilla stuff, and those who are not very smart, or young and inexperienced, or too rich to care what other people think, so they post anything that comes to their minds.

[+] auxym|8 years ago|reply
I've taken to blocking all those random joke pages on sight.
[+] mbrumlow|8 years ago|reply
This sort of stuff was at one point handled by hosting providers. It now seems that the internet is "Facebook/Google", so it is no wonder they are getting the brunt of this sort of work.

I can tell you back in the day when I worked for Rackshack I never envied the abuse department. They always looked stressed out.

8k posts in a day is just too many for one person. It's not about how much work they are doing; it's about the content they are required to look at and review. To do that you have to actually think about it and make a decision, and that takes a toll on people: you don't get to forget.

I am not one for regulation, but if there is one place in tech that should be considered for regulation, then I think this is a good place to start.

These workers need to be paid more, with access to therapy and much more time off. I also think there are technical solutions to help ease the work needed, but that costs money, and nobody seems to want to pay, so human workers are left holding the bag. This would also include stricter rules to make filtering easier.

Good job, you guys can put a dancing hotdog on the screen; why not use that talent to build back-end systems that automate this sort of horrid work away? I know it will be hard, but if NNs and deep learning are all they keep being preached to be, then it should be within the realm of the possible.

Also, when it comes to the law, I am for the notion that it is okay if 10 bad people get away if it means not falsely convicting 1 good person. However, on the internet, with regard to what normally amounts to pointless shit that people post, I am okay with 100 good posts being automatically removed if it means 1 bad post is also removed.

[+] razakel|8 years ago|reply
>I can tell you back in the day when I worked for Rackshack I never envied the abuse department. They always looked stressed out.

The police normally rotate staff on and off the analyst post every few months.

I was speaking to a detective, saying I was interested in going into digital forensics, and he just plainly said "no, you don't want to do that". He said he knew he'd spent too long on the job when he was walking through a park, saw a father playing with his daughter, and instead of thinking "aww" thought "you sick fucker".

I have, in the course of my career, come across child abuse content. Personally it didn't really bother me - I just didn't think about it and got on with the job of notifying the police, but I can easily see how being exposed to that sort of thing day in day out would really mess with someone's head.

[+] dbg31415|8 years ago|reply
> Facebook will have 7,500 content reviewers by the end of December, up from 4,500, and it plans to double the number of employees and contractors who handle safety and security issues to 20,000 by the end of 2018.

Seems like the bulk of these roles are contract. "$24/hour" was mentioned in the article. Probably no health benefits. Probably no counseling.

251 working days * 8 hours * $24 = $48,192. Less taxes. Less sick days. Safe to assume these workers aren't all going to work out of the main office in SF.
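The back-of-the-envelope figure above can be reproduced (the day count and hourly rate are from the comment; taxes, sick days, and benefits are deliberately ignored):

```python
working_days = 251   # approximate U.S. working days in a year
hours_per_day = 8
hourly_rate = 24     # USD, the rate quoted in the article

gross_annual = working_days * hours_per_day * hourly_rate
print(gross_annual)  # 48192, gross, before taxes and unpaid sick days
```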

Seems like Facebook would have to set up some sort of work-from-home system, like people who transcribe medical records, or some sort of off-shore system, to make this work for workers. But... spreading out the people who do this work is likely to increase the risk that sensitive data will be shared, and to reduce workers' ability to find comfort with peers.

No way to look at this where automation isn't going to be better.

[+] viraptor|8 years ago|reply
> Its not about how much work they are doing, its about the content they are subjected to look at and review.

If you'd like to experience this, you can run an experiment yourself. If you're a Twitter/Facebook user, look for a group/account you know will have unacceptable content (it's easy; search for white genocide or similar, there's always something available). Then try to find some things that are actually reportable without the need to explain the reason in more than a few words.

After you find 10, you'll likely find it mentally exhausting to keep dealing with or looking at that content. On the upside, you'll likely report and get some amount of it removed.

[+] empath75|8 years ago|reply
Any neural network can probably be gamed by people that really want to get around it.
[+] jstanley|8 years ago|reply
> I am okay with 100 good posts being automatically removed if it means 1 bad post is also removed.

Why? Posts on the internet are pure data. I'm okay with no posts ever being removed.

[+] thriftwy|8 years ago|reply
> However, on the internet, with regard to what normally amounts to pointless shit that people post, I am okay with 100 good posts being automatically removed if it means 1 bad post is also removed.

If you don't like the Internet, then just get off it!

You probably never noticed, but some people have this thing called self-actualization. By removing good posts you step on it with your greasy foot. By removing controversial posts you also step on it. That's a lot of damage that you did to innocent people right here.

We've got to get off HTTP just so that people like you never have the chance to step into other people's homes and decide what they can and can't post and view.

[+] rpmcmurphy|8 years ago|reply
The internet really is an incredible cesspool. It would be interesting to see how the public reacted if YouTube and Facebook turned off their content moderation for a week. It would make the goatse meme look like a Sunday school picnic.
[+] saas_co_de|8 years ago|reply
Yes, but really, humanity is the incredible cesspool. Facebook is just a mirror.

The interesting thing is that instead of trying to prosecute the criminals behind this content we would rather just censor it so we can pretend it doesn't exist.

[+] Santosh83|8 years ago|reply
Why are we surprised or shocked? Hasn't society always used servants, police, soldiers, miners, loggers, garbage men, wardens, and such to do tasks that the rest of us are loath to do, and to keep certain stuff away from 'civilised' society?

Why would online be different suddenly? Analogues of all the above are needed online too. And somebody who has no other option will be unfortunate enough to fill these roles.

[+] jstanley|8 years ago|reply
> Why would online be different?

Because you don't have to look at anything you don't want. Because it's not bounded by physical space limitations, so everyone can have as much "space" as they want. Because you can travel from any point to any other point almost instantaneously. Because you can be in as many different places simultaneously as you have the capacity to keep track of. Because you can come and go as you please without anyone else even knowing.

The internet is nothing like the physical world.

[+] greyman|8 years ago|reply
We are not surprised or shocked. This topic has been reported on before, and the article is just about the inadequate protection of these workers. Someone must do this work until AI is able to filter out everything, but those companies are rich enough to take better care of these workers. That's all.
[+] gonzo41|8 years ago|reply
What they don't tell you is that you can get PTSD just from being a witness to trauma. It happens to cops and lawyers investigating child abuse all the time.
[+] radmarshallb|8 years ago|reply
I would expect that social media websites whose content is largely decided democratically (via votes, shares, or the like) would relegate the majority of this content to a place where it is not seen by many. I would argue that the best way to handle this issue is to let the site's mechanisms deal with the content accordingly, and then focus efforts on developing processes that can detect and remove it automatically.

The article implies that they are forcing moderators to view the content at a high clip. Why? To get false positives back online as quickly as possible? Maybe moderators should only review content that reaches a certain threshold of complaints, and other content is left as is.
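The complaint-threshold idea could be sketched as a simple review queue; everything here (the threshold value, field names, post IDs) is hypothetical illustration, not a description of any real moderation system:

```python
REPORT_THRESHOLD = 5   # complaints required before a human ever sees the post

class ModerationQueue:
    """Escalate a post to human review only once enough users complain."""

    def __init__(self, threshold=REPORT_THRESHOLD):
        self.threshold = threshold
        self.reports = {}        # post_id -> complaint count
        self.review_queue = []   # post_ids awaiting human review

    def report(self, post_id):
        self.reports[post_id] = self.reports.get(post_id, 0) + 1
        # Below the threshold the post is left alone; the site's normal
        # voting/sharing mechanics are assumed to keep it low-visibility.
        if self.reports[post_id] == self.threshold:
            self.review_queue.append(post_id)

q = ModerationQueue()
for _ in range(5):
    q.report("post-123")
print(q.review_queue)  # ['post-123']
```

The trade-off, as the thread's other replies note, is that a determined community can simply decline to report, so a threshold alone cannot replace proactive review.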

[+] legulere|8 years ago|reply
Reddit is such a website, and the general consensus there is that voting alone does not work.
[+] empath75|8 years ago|reply
If you have voting and subreddits, you'd just end up with a community that only upvotes what they think is the best child porn (i.e., Reddit's jailbait subreddits).
[+] wglb|8 years ago|reply
In the very early days of the internet being used at a very large corporation, I had the task of reviewing proxy logs to monitor what was euphemistically called "non-business use of the internet". I started by scanning for URLs with "XXX" in them, then pivoted to make a more extensive list.

I never looked at the content itself. Just seeing the URLs was corrosive enough.

[+] guuz|8 years ago|reply
It's a good space for regulation. The treatment these workers are subjected to is ignominious.
[+] lyra_comms|8 years ago|reply
One of our team members used to do this job; luckily, she managed to do it with deep learning, so didn't have to spend too much time looking at unpleasant images.

This experience is one of the main drivers that pushes our team to develop an open, nonprofit conversation platform on which harassment is difficult by design.

www.hellolyra.com/introduction

[+] mikehines|8 years ago|reply
I wonder what an AI will think of us, and then do to us, once we have one.
[+] aglavine|8 years ago|reply
The problem is more the job conditions than the job itself.
[+] meritt|8 years ago|reply
It's not a popular opinion but remove the anonymity/fake accounts and you eliminate a very significant portion of these issues.
[+] katastic|8 years ago|reply
South Park did an episode on this, where everyone was so afraid of "reality" that they elected someone to censor it all and keep their tweets only positive.

Butters had to see every depraved thing. And he ended up trying to kill himself.

And in the end, when he almost died, they blamed him for failing to be the perfect filter.