moofight | 6 years ago

There are many aspects to this, but it seems the strongest and most lasting effects come from visual content (especially raw content: violence and the like) rather than from text (even though text can be violent too).

Which is why automated image/video moderation solutions (such as Vision, Rekog, Sightengine.com, Hive) will continue to grow: not only because they are cheaper and faster, but because they are becoming a necessity, or at least a first filter to weed out the "worst" content, as in the sketch below.
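
As a rough sketch of that "first filter" idea, here is what a pre-screening pass might look like with Amazon Rekognition's image-moderation API via boto3. The 80% confidence threshold and the flag-anything policy are illustrative assumptions, not a vendor recommendation:

    import boto3

    # Hypothetical first-pass filter: ask Rekognition for moderation labels
    # and route the image to a human reviewer if anything comes back.
    rekognition = boto3.client("rekognition")

    def needs_human_review(image_bytes: bytes, min_confidence: float = 80.0) -> bool:
        response = rekognition.detect_moderation_labels(
            Image={"Bytes": image_bytes},
            MinConfidence=min_confidence,
        )
        # Any returned label (e.g. "Explicit Nudity", "Violence") cleared the
        # confidence bar, so the image goes to a human instead of auto-publish.
        return len(response["ModerationLabels"]) > 0

A real deployment would presumably tier this: auto-reject high-confidence matches, queue borderline cases for human review, and pass the rest through.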

mandevil | 6 years ago

At a $previousJob I had some tangential contact with professionals who track child pornography, trying to identify and free the kids (people involved in catching https://en.wikipedia.org/wiki/Christopher_Paul_Neil). They felt that automation was of little help for what they were doing, and that every image had to be looked at by at least one human (most of the images by more than one). They had a few tricks (apparently viewing the images in B/W helped lessen the trauma), but they did not find value in the automated tools we tried to build to help them.
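
The B/W trick is straightforward to bake into review tooling. A minimal sketch with Pillow, assuming a hypothetical review tool that shows analysts a desaturated copy by default (the function name and workflow are illustrative):

    from PIL import Image

    def grayscale_copy_for_review(path: str, out_path: str) -> None:
        # Convert to 8-bit grayscale ("L" mode) so the reviewer sees a
        # desaturated version by default; the original file is untouched.
        Image.open(path).convert("L").save(out_path)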

Now, they felt much more empowered than Facebook's moderators do: they kept going because the goal was to stick cuffs on the wrists of the guys who were doing this and get those kids away from them, and they could put up with all of the rest for that goal. They were treated as rockstars by the rest of the people they interacted with, because they were the ones who got kids away from the predators. They had frequent opportunities to take breaks and could set their own schedules, driven only by the guilt that the longer they delayed, the more time the kids spent in the predators' hands.

Ultimately, feeling empowered to make a difference in the world is key, and if Facebook treated screening as an important job and gave their moderators more power to set their own working conditions, I suspect it would improve their mental health quite a bit.

hos234 | 6 years ago

Good point about empowerment.

I hope they are investing in an army of shrinks/psychologists/sociologists to study, improve, and supervise these centers, because this stuff is not going away just by deleting content.

jbattle | 6 years ago

They told us that robots would save humans from doing dangerous work in hostile environments. Who knew the danger and hostility would be entirely psychological!?

roguecoder | 6 years ago

We just saw Tumblr try that, and discover that automating moderation can destroy your platform.

The problem is that context is even more important in visual content than in textual content, and we still don’t have any algorithms that can parse context as successfully as humans can.

leetcrew | 6 years ago

It seems like you would essentially need a general AI to detect stuff like CP. There are (non-digitized) photos of me taking a bath as a small child that were taken by family. If I stumble across one while sifting through photos at my house, it's an innocent document of my childhood; if they're being passed around some CP forum, that changes the nature of the images quite a bit. We're a long way from having an algorithm that understands why it matters who is holding a photo.