
skosch | 5 years ago

Radiolab has a wonderfully deep episode on this topic: https://www.wnycstudios.org/podcasts/radiolab/articles/post-...

The main issue is the sheer volume of content. Four seconds – that's their estimate for how much time employees in Facebook's "censorship farm" centers have, on average, to decide whether a reported post gets deleted. As a result, management decided to aim for the lowest bar: consistent enforcement of a limited set of rules. That's hard enough as it is: is all blood gore? Are all references to race hate speech? And so on.

So the censors simply have no capacity to fairly fact-check every single political post, and when in doubt, they err on the side of free speech. ML can help suppress certain viral posts, but not consistently so. I'm not a fan of Facebook by any means, but in this case they really are in a tough spot.


kazagistar | 5 years ago

Facebook still seems to turn a profit, so it clearly has the capacity. The insufficient number of people reviewing reported violations is by design.

berkes | 5 years ago

This assumes that there is a solution if you just throw enough money at it.

I'm not familiar enough with how large-scale moderation works: is this actually solvable with near-unlimited funds?