Tech Companies Are Deleting Evidence of War Crimes

587 points | anfilt | 6 years ago | theatlantic.com | reply

222 comments

[+] subjoriented|6 years ago|reply
The article centers around activity in Syria, but the problem is much more endemic. In the United States, Apple has consistently taken down apps providing information about US government airstrikes - including those that strike wedding processions, children, bystanders, etc. Recent efforts to identify and scrub foreign propagandists in the United States have silenced legitimate voices of domestic dissent, as foreign influence campaigns typically attempt to magnify grassroots dissenting opinions (this is true also of US foreign intelligence efforts). There are many reasons why these kinds of apps, comments, and conversations get taken down because of their content.

Fundamentally, being a gateway to information in a legal environment where hosting content as a curator puts you at risk creates deep incentives to whitewash that content. Those forces are already present for companies that might accidentally sustain 9gag/4chan-type cultures and commentary that contradict however they're productizing their platforms.

Reddit is another example of a company that has recently scrubbed itself of most controversial content, including content critical of, or dangerous to, its host country's political and national-security narratives. Twitter has started down that journey as well.

It feels like we're experiencing the growing pains of the social-network and hosted-content boom, reinvented on web technology over the past couple of decades.

It's kind of amazing in retrospect how controversial some of the "mundane" applications of technology to society have been, whereas a couple of decades ago most of the moral panic centered around concepts like "online dating", which to date has actually attracted relatively little controversy.

[+] ehnto|6 years ago|reply
Australia lacks the safe-harbour laws the US has had for some time. This has a chilling effect on certain types of platforms. Even something like a photo-hosting site is very risk-heavy in Australia, as the type of content on your servers is your responsibility.

I feel like those safe-harbour laws in the US are at risk, at least in practice, as the idea of a content "conduit" becomes less plausible while platforms moderate more and more.

Now, I am all in favour of moderation. Real-life communities are moderated, and all the strongest online communities have some kind of moderation. But there need to be protections for companies that moderate as best they can and still end up with evil content on their sites. Otherwise these platforms won't be sustainable, and will face heavy legal risk to boot. They won't make much sense anymore.

[+] netcan|6 years ago|reply
Well put.

The big social media players underappreciate the territory they've found themselves in: they are now a major part of the news media.

Dealing with fake news, intelligence-backed propaganda, and censorship of certain content are very difficult tasks. They aren't just "security" or "platform abuse" issues. These are editorial and fact-checking issues.

Ultimately, it will be impossible to deal with them via clear policies and software. The only way to do it is with human intelligence, decision making and judgement.

A video of an ISIS execution could be snuff, critical news, ISIS propaganda, Baathist propaganda...

The types of organisations that do these kinds of jobs tend to have it baked into their DNA. Journalistic organisations, for example, have editorial philosophies embedded in their DNA the way Facebook has software development embedded into its.

In terms of online media, Wikipedia is the best example, imo. Dealing with this stuff is at its core. I can't see FB ever becoming adequately capable of dealing with these types of issues.

Even bans get into tricky territory once you reach FB/Twitter scale.

[+] tracer4201|6 years ago|reply
Not a surprise. Many of us grew up believing we as Americans are exceptional, that we are inherently the good guys in any international conflict.

The reality is our government has interests. Morality and interests are orthogonal, with the latter being influenced primarily by profits or some perceived direct or indirect threat to America’s elite.

[+] coldtea|6 years ago|reply
>The article centers around activity in Syria, but the problem is much more endemic. In the United States, Apple has consistently taken down apps providing information about US government airstrikes - including those that strike wedding processions, children, bystanders, etc.

Well, Apple is not some international beacon of progress and activism.

It's a US-based company, run by people who consider themselves US patriots and who, their differences with this or that sitting president aside, will do what's best for their country's "national interests". Same as Google, Facebook, etc.

Which is why countries with any major claim to sovereignty, and with their own interests and stakes, build their own equivalents of the major search engines, social networks, etc. (e.g. Baidu).

>Recent efforts to identify and scrub foreign propagandists in the United States have silenced legitimate voices of domestic dissent

I wouldn't say the latter was not part of their purpose. You can't have your country's own citizens dissenting from the elite/bipartisan consensus. Trillions are at stake...

[+] baybal2|6 years ago|reply
Things are much simpler, I'd say. Those "algorithms" are very dumb, and the human flaggers are no better.

Even very innocent family videos are flagged all the time; you just have to speak a foreign language and look like a Middle Easterner.

Some say that just speaking in an elevated tone in Arabic and having a beard is enough to be flagged on the big dotcoms.

And if you are openly a member of just about any social organisation, your account's deletion is pretty much a matter of time if you are a Middle Easterner, no matter whether you are pro-regime, anti-regime, or have nothing to do with any of it. The Egyptian national symphonic orchestra was once banned by Facebook as a "terrorist organisation".

This is ironic given that all the big social networks, including Facebook, outsource content censorship to companies in obscure countries like Algeria or in the Balkans.

[+] HNthrow22|6 years ago|reply
> Reddit is another example of a company that has recently scrubbed itself of most controversial content,

I agree with you on most points; however, Reddit takes no action against extremists and fringe groups proliferating on its platform until substantial pressure is applied by either advertisers or journalists. It goes as far as shadowbanning users who bring up its lack of action to shine a light on its culpability, presumably to keep it off mainstream journalists' radar. Thankfully it's now too late, and enough good journalists have picked up the scent. Expect to see many great pieces about this in the near future.

Reddit also scrubs the profiles and post history of mass shooters who were indoctrinated on its platform. One example is Elliot Rodger, the incel and Santa Barbara shooter from several years ago. This is truly disgusting and frightening behavior that prevents researchers and journalists from understanding the reach and impact of the spread of ideologies and hatred on these platforms.

Just last week, in the wake of the Poway synagogue shooting, the #1 post on /r/conspiracy was a Holocaust-denial post (https://pbs.twimg.com/media/D6HG_LMUUAEXYJy.png:large) that received Reddit gold twice.

[+] ngold|6 years ago|reply
The internet was founded on a military backbone, then grew into an "information is everywhere" backbone; now the fences have been contracting into a walled garden again.

There is a scene in Firefly where the doctor character gets a box, implied to hold an open-ish internet. Pretty sure that's where we're headed.

Information is free until the folks in the back of the courtroom make sure it's not.

[+] dillonmckay|6 years ago|reply
Online dating’s purpose is to facilitate ‘positive’ real-world relationships.

The intent of these ‘mundane’ applications is much less admirable.

[+] tetrisgm|6 years ago|reply
The problem is that we've equated some platforms with a certain kind of media. YouTube is video, Twitter is news, Facebook is... reality TV, I dunno. It's pretty debatable whether those platforms are the right place for coverage of war crimes.

I'm not going to start an argument about the media not doing their job right. However, this discussion would be moot if there were a news platform on par with the above, and I don't mean Twitter. I'd love to see someone build or modernize a news outlet that isn't driven by attention or clicks as a currency.

[+] andrepew|6 years ago|reply
These platforms aren’t being used to cover war crimes by an entity consciously trying to deliver news.

These posts are first-hand accounts from individuals currently living through an event. It is as raw as it gets. I don't think you can very easily direct where content like that gets created. People will use the platform they're already using for the other parts of their lives.

I understand the desire to not traumatize people by causing them to accidentally view horrible events. But at the same time I think deleting the content is a disservice to humanity in the long run for a lot of reasons this article highlights.

[+] intended|6 years ago|reply
We would all love to see that but it is - within the context of the current economy - impossible for the majority of people.

This is a scenario where the regular incentives of the market result in perverse outcomes.

Firstly, it's not tech that did this. Tech mutated and accelerated a pre-existing trend for the worse.

The news-cycle effect was well known, driving more and more sensational news reporting, at the cost of reason.

At the same time newspapers were constantly going under or being bought up.

With information distribution at scale, describing reality increasingly became the playground of nations (the BBC, and lately various government-owned news channels) or major firms (the Murdoch empire).

There’s a deep problem at the nexus of human behavior, factual reporting and income.

Attention and fear are easier and more reliable levers to pull in the human consumer.

Humans are easily distracted by sex, violence, gossip and easy-to-consume content - our brains are wired that way for some of those things, and it's always more pleasing to consume mental sugar than mental vegetables.

The only model which survives this is paying lots of money for specific information - usually linked to your profession.

However, general news reporting is unlikely to recover, because the news-cycle effect creates a tendency to compete on attention, a race to the bottom.

[+] thaumasiotes|6 years ago|reply
> I'd love to see someone build or modernize a news outlet that isn't driven by attention or clicks as a currency.

There are two motivations for running a news outlet, both with extensive historical precedent:

- You want money.

- You want to spread your view of the world, inflicting your culture on other people.

There isn't a lot of immediate financial gain associated with the second approach, but the gains are real enough, long-term, that there tend to be plenty of such outlets receiving subsidies from people who believe in the message.

But in the modern US, while we still have outlets of that type, there is a very strong belief that they shouldn't count as "news", and that they are less legitimate than "neutral" journalism. That reserves moral legitimacy to the money-oriented approach, and that approach is necessarily driven by attention. News that nobody reads can't sell.

[+] mirimir|6 years ago|reply
Back in the 90s, I recall much talk about "citizen-based news". But the reality is far more complicated. I could build a Tor onion site with throwaway clearnet proxies. But how would anyone know that it existed? And even if they did, how would they use it from phones? You'd need apps for that, and that requires approval from Google and/or Apple.

[+] vixen99|6 years ago|reply
You're right. The arrival of "a news outlet that isn't driven by attention or clicks as a currency" is exactly what should happen, and surely it will. Or will we still have Facebook around in 2119?

[+] Jommi|6 years ago|reply
This is the core problem.

And it's just natural: these platforms have a tendency to be natural monopolies. That means government intervention is required.

[+] bilbo0s|6 years ago|reply
This.

If your evidence for war crimes is Twitter, there's a problem. And the problem is not Twitter.

Twitter and Facebook posts should not be evidence of anything. That stuff is so easily faked that courts would be entirely right to laugh such "evidence" out of the courtroom.

Evidence should be gathered by appropriate authorities and kept in accordance with international standards on evidence storage.

Hint: That's not "look at this tweet I got!"

[+] o10449366|6 years ago|reply
This article seems mostly concerned about Facebook and YouTube's opaque filtering systems, but the authors are looking in the wrong place. These platforms aren't, and shouldn't be, responsible for hosting violent and disturbing content, regardless of its purpose or utility. The authors themselves discuss the can of worms that opens when you start accommodating the interests of some groups and not others; it isn't sustainable and it isn't possible.

Instead, this article should have focused less on the decisions of AI systems and neural networks, which can be difficult to decipher and interpret, and more on the decisions of the humans behind these tech companies. The latter is much easier to scrutinize and dissect.

I think the truly concerning initiatives are projects like Google's Dragonfly. There's absolutely nothing opaque about a search engine that will actively assist the Chinese government with whitewashing history for a billion individuals. These are the decisions that should be examined.

[+] djakjxnanjak|6 years ago|reply
>Instead, this article should have focused less on the decisions of AI systems and neural networks, which can be difficult to decipher and interpret, and more on the decisions of the humans behind these tech companies. The latter is much easier to scrutinize and dissect.

I agree with your comment overall, but this common assumption bothers me. We can put a debugger in a neural net; we can't put one in a human brain.

People get angry when a policy decision is made that targets them, and they are told by the people responsible that the decision was implemented with a neural net. Then they get mad at the neural net, which shifts blame away from the humans who control it.

[+] basetop|6 years ago|reply
Tech companies are deleting evidence of war crimes and everything else at the behest of media companies like The Atlantic.

YouTube, Facebook, Reddit, Google Search, etc. have all been targeted by The Atlantic, the NYTimes, the Washington Post, CNN, etc. and bullied into scrubbing content and directing their users to "authoritative sources". Ironically enough, in China, Russia and most countries, "authoritative sources" are state propaganda organs. But we are different, or so I'm told. Our "authoritative sources" are independent news organizations who, strangely enough, push the same message with the same talking points. What a coincidence. For "independent" "news" organizations, they sure are united in the same message.

This censorship has been going on for at least 5 years now. Maybe The Atlantic's journalists should ask their editors why The Atlantic has supported censorship. Or maybe they should start investigating the NYTimes, the Washington Post, CNN, etc.: why so many government/intelligence-agency people are working in media, and why so many children of politicians (Bushes, Clintons, McCains, Cuomos, etc.) have prominent positions in the media.

Or is The Atlantic only interested in war-crime evidence that suits their agenda (pushing for war in Syria, Venezuela, etc.)?

This article is so weird. It's like an arsonist setting a house on fire and telling everyone that the house is on fire.

[+] 52-6F-62|6 years ago|reply
What exactly are you claiming is being censored?

The media regularly investigates and reports on itself.

The press is far from perfect, but you’re alleging a vast and deep conspiracy when reality is likely far simpler:

The right hand doesn’t know what the left hand is doing.

Sales and business have little impact on editorial. Editorial flies by the whims of a wide variety of strong personalities.

Politicians and relevant players get columnist and editorial roles because of their inside status and insight (of whatever level of quality that might be).

Suggesting those outlets produce a single unified perspective or ideology is a bit much, which would explain why you might see contradictory ideas.

I continually stress media literacy as crucial, especially these days. It's not a mystery to be solved. Most of it is pretty straightforward humdrum.

Edited to add:

The Atlantic is a very different machine from WP or NYT. It's a magazine whose content is analysis and ideas, and always has been. Their work will always have perspective woven into their articles. It's not new to them. But it's not a conspiracy; it's part of their business model.

[+] brightball|6 years ago|reply
Stuff like this is why I still see value in owning encyclopedias, history books, etc. I simply don't trust resources that can be edited or deleted at any time.

[+] candiodari|6 years ago|reply
> Tech companies are deleting evidence of war crimes and everything else at the behest of media companies like The Atlantic.

Don't underestimate governments forcing these companies to hide evidence of their own misdeeds.

Even governments that you wouldn't traditionally think of as abusive. Estimate for yourself what a realistic number of police-abuse incidents is, and how much of that gets filmed and uploaded. Then search on YouTube. Clearly, someone's asking for removals in the US. Same for Europe. I doubt very much it's The Atlantic doing that.

[+] unethical_ban|6 years ago|reply
This is entirely in your mind until you put some evidence behind it, you know, like journalists tend to do.

[+] unknown|6 years ago|reply

[deleted]

[+] navigatesol|6 years ago|reply
>Or is The Atlantic only interested in war-crime evidence that suits their agenda (pushing for war in Syria, Venezuela, etc.)?

I'm with you 90% of the way, but I don't understand this argument.

It's like Hillary's emails or WikiLeaks: "But it came from hackers!" Who cares, if it reveals legitimate criminal activity? There's no equivalency.

[+] Digit-Al|6 years ago|reply
So, here's an idea. Take the case of YouTube: instead of its algorithm deleting this content, how about marking it with a flag? Normal users would be completely unable to view, or even know of the existence of, content marked with this flag. It wouldn't come up in searches, and even a direct link would just show some "content unavailable" message.

Human rights groups and the like could apply for a designated user to be given a "special administrator" permission. Anyone with that permission would see a permission toggle when they look at someone's user account. That toggle would allow them to give or revoke permission to view these "forbidden" videos.

That would then allow these organisations to give access to anyone they deem suitable to help them police these videos. There would need to be some sort of safeguards to make sure that permission is not accidentally given to a minor or something.

This should solve a lot of the problems I reckon. Thoughts?
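
As a rough illustration, here is a minimal sketch of what that flag-and-grant model might look like. All names are hypothetical; this is not YouTube's actual API, just the shape of the idea:

    # Hypothetical sketch of the proposed flag-and-grant model.
    from dataclasses import dataclass

    @dataclass
    class User:
        user_id: str
        is_special_admin: bool = False     # held by vetted human-rights groups
        can_view_restricted: bool = False  # granted/revoked by a special admin

    @dataclass
    class Video:
        video_id: str
        restricted: bool = False           # flagged instead of deleted

    def grant_restricted_access(admin: User, target: User, allow: bool) -> None:
        """Only special admins may toggle the restricted-view permission."""
        if not admin.is_special_admin:
            raise PermissionError("only special administrators may grant access")
        target.can_view_restricted = allow

    def can_view(user: User, video: Video) -> bool:
        """Restricted videos look like 'content unavailable' to normal users."""
        return (not video.restricted) or user.can_view_restricted or user.is_special_admin

    # A flagged video is hidden from a normal viewer but visible to a
    # researcher vetted by a human-rights group.
    ngo_admin = User("hrw-admin", is_special_admin=True)
    researcher = User("researcher-1")
    viewer = User("random-viewer")
    video = Video("abc123", restricted=True)

    grant_restricted_access(ngo_admin, researcher, allow=True)
    assert can_view(researcher, video)
    assert not can_view(viewer, video)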

[+] jeandejean|6 years ago|reply
Why would human rights groups be entitled to this censorship privilege? So they'd now decide what's good or bad to display on YouTube?

If anyone needs the evidence, it's prosecutors, and that is likely already possible. "Deleted" from user-facing YouTube, Facebook, and the like doesn't mean deleted from their storage...

Plus, the title is misleading: they're not deleting evidence of war crimes, they're deleting upsetting and distressing videos from their open platforms, and that's a good thing! Nothing proves they don't already pass these videos to law enforcement.

[+] toxicFork|6 years ago|reply
How do you sell this to YouTube, or enforce it on them, so that they first accept and then prioritise this functionality? What benefits do they gain, or what alternatives do they avoid, as a for-profit company?

[+] lifeisstillgood|6 years ago|reply
The podcast episode "Post No Evil", about Facebook's moderation problems, covers something very similar: during the Mexican border drug wars, people were posting images of shootouts and of dead bodies hung from overpasses.

Facebook had internal debates over whether this counted as news or breached its content guidelines.

I highly recommend the podcast.

But perhaps we need a simpler approach: a funded "journalism" archive where the tech firms can shift the posts to separate storage (i.e. not delete them) but section them off for later analysis.

Seems the only likely compromise.
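
To make the idea concrete, a minimal sketch of "archive instead of delete", assuming an entirely hypothetical moderation pipeline (names and storage layout are illustrative only):

    # Hypothetical sketch: take-downs move content to a write-once archive
    # with provenance metadata instead of destroying it.
    import hashlib
    import json
    import time

    ARCHIVE = {}  # stands in for storage run by a funded journalism archive
    PUBLIC = {}   # stands in for the platform's public index

    def publish(post_id: str, content: bytes) -> None:
        PUBLIC[post_id] = content

    def take_down(post_id: str, reason: str) -> None:
        """Remove a post from public view, but preserve it for later
        analysis (researchers, prosecutors) with provenance metadata."""
        content = PUBLIC.pop(post_id)
        ARCHIVE[post_id] = {
            "content": content,
            # A hash lets a later investigator verify the bytes are unaltered.
            "sha256": hashlib.sha256(content).hexdigest(),
            "taken_down_at": time.time(),
            "reason": reason,
        }

    publish("post-1", b"graphic footage ...")
    take_down("post-1", reason="graphic violence / possible war-crime evidence")
    metadata = {k: v for k, v in ARCHIVE["post-1"].items() if k != "content"}
    print(json.dumps(metadata, indent=2))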

[+] medecau|6 years ago|reply
Something like the Internet Archive?

[+] 6nf|6 years ago|reply
And now Facebook is banning random harmless idiots like Paul Joseph Watson for "being a dangerous individual", even though he's clearly not a terrorist or whatever. It's insane.

[+] 8bitsrule|6 years ago|reply
Social media have successfully created the illusion that they are (to some extent, at least) public spaces. This is another reminder that they are not.

OTOH, truly public spaces on the internet ... where reasonable speech can truly be free ... are in very short supply. By accident or intention, this is a crappy state of affairs.

This isn't a new topic. 20 years ago, Lessig wrote, "We can build, or architect, or code cyberspace to protect values that we believe are fundamental, or we can build, or architect, or code cyberspace to allow those values to disappear." https://www.nytimes.com/2001/06/02/arts/adding-up-the-costs-...

If our free speech is limited only to the inside walls of walled gardens, that's not their fault.

[+] onesmallcoin|6 years ago|reply
We couldn't have started the discussion of systematic censorship in a better way. It's hard to come up with one rule that works for everyone. It's normal to have a knee-jerk response to censor something if what you're looking at is terrible and you wouldn't want other people to see it, but it is also important that we as a race try to learn from our mistakes; and censorship straight to deletion is burning the evidence.

This may sound like crazy talk around here, but maybe we need a human element in the curation of our digital library, and a way for society as a whole to have input on this process: not just in removing the offending content, but also in keeping track of it, to help hold people accountable for their actions.

Full disclosure: I'm a resident of Christchurch, New Zealand, where the mosque shooting happened recently. I saw our government block multiple websites via DNS, in a country where this kind of censorship is otherwise unheard of; we honestly didn't know the systems were in place for it. So my 2c is that if you are in a Five Eyes nation, it almost certainly has the capacity to conduct this kind of blocking.

[+] rlt|6 years ago|reply
Damned if they do, damned if they don't.

[+] intended|6 years ago|reply
What do people expect?

Isn’t this uncharted territory for everyone?

Everyone can report, but the public sphere where this happens is owned by a company which needs profit.

Humanity was always insulated from the worst of what was happening at any given minute by the difficulty of transmitting information over older mediums.

This means the incentives are not aligned here: speaking the truth, keeping you engaged, and keeping Disneyland clean.

A person in the UK who wakes up in the morning doesn’t want to see evidence that South American violence has resulted in an entire village being eradicated.

As a species we were insulated from thinking at species scale. We could do village, and perhaps nation, scale on an average human basis. A rare few people can think at a planetary or greater scale regularly.

So how precisely should this be structured?

[+] stef25|6 years ago|reply
Couldn't they take down the content but keep it available for neutral third parties to evaluate and collect?

There has to be a better way to collect evidence of war crimes than leaving decapitation videos visible on social media for all to see.

Maybe the ICC could even launch their own platform where this kind of content can be submitted.

[+] docker_up|6 years ago|reply
This sounds like a perfect example of why sites like LiveLeak exist. I don't know that I care whether Facebook hosts videos that go against its TOS, but if sites like LiveLeak were forbidden from hosting these videos even though they want to, that would worry me.

[+] thereisnospork|6 years ago|reply
A thought experiment:

If 9/11 had happened today, what would their policies mean for that footage? What should they do to censor it, if anything? Consider too the increase in camera density over the past two decades.

[+] deanclatworthy|6 years ago|reply
These platforms can do no right. If Facebook/Youtube leave a video online of a beheading of John Smith from <Country X> then <Country X> inevitably calls them out for promoting and hosting this "illegal" content. Now, they're being chastised for taking it down too.

Maybe, just maybe, these companies have internal processes for handing over such materials to the relevant authorities, and don't need to rely on armchair detectives to investigate war crimes.

[+] pergadad|6 years ago|reply
Very interesting article. Essentially: there is a legitimate need for such content, namely to hold the perpetrators accountable. So the question might be: how can this content still be accessed later by e.g. researchers/prosecutors/...? On the other hand, that opens the door for a dictator (we seem to have plenty of them nowadays) to ask for deletion of content and then still use the stored materials to identify e.g. regime critics. Really difficult issue.

[+] iscrewyou|6 years ago|reply
I just wish it were easy to turn their algorithms off. All these companies were good before those algorithms started: the feed for Facebook and Twitter, recommendations for YouTube.

That would solve a lot of problems. It would make sure people aren't exposed to things unwillingly, and these companies wouldn't be on the hook for it; but it would also mean things don't get reported and taken down as much as they should be (for law-enforcement tracking purposes).

[+] happytreefrens|6 years ago|reply
These tech companies decide, often in unison, to deplatform people, types of legal content, or political views. There is clearly coordination among news outlets, social media companies, and tech companies (everything from payment processors to web hosting).

Who is doing this coordination? To what end?

Is there a common thread in the deplatforming, smearing, and silencing campaigns?

[+] Nasrudith|6 years ago|reply
To be frank tech companies are a second order symptom of fucked up societal attitudes pushing the censorship.

The puritan push towards "child-friendly mass market" as a default leaves no place for the ugliness of reality. They don't care about suffering; they just don't want to see it. It is narcissistic sociopathy as a norm: how dare they run out of a burning building in their underwear! Children might see them!

That it is the middle of the night is no concern - they don't want "inappropriate content" in a back alley. They want it gone.

The tech companies tend towards doing nothing until the masses of "think of the children" complaints about "objectionable" content roll in. The absence of those masses is what I miss most about the internet before it went mainstream. I personally believe what we need is to offend those busybodies as much as possible until they fuck off into their own bubble.

[+] nunez|6 years ago|reply
Aren’t sites like LiveLeak better for this kind of content? Won’t this kind of content make it there anyway?
[+] indigochill|6 years ago|reply
Stratechery had an article that I think provides a reasonable compromise for displaying, and taking responsibility for, content some might find objectionable but which others would argue has value. It involves the poster being responsible and owning the consequences of the content, as opposed to some third-party platform that has to try to please opposing sides.

https://stratechery.com/2019/a-regulatory-framework-for-the-...