I'm wondering if this "coordinated mass reporting" has been automated, botnet-style. This happened recently to a YouTuber I follow, who had his channel demonetised for hate speech. A YouTube rep stated that it looked like incorrect flagging by viewers.
Reminds me of the 'accidental glitches' after anti-Modi posts vanished in India, curiously enough shortly after India pressured social media companies into silencing dissent.
People who think of centralised social media as empowering minority dissent have made a grave mistake. It's only contingently supportive, as long as the minority's position isn't in conflict with the political or economic interests of the powers that be. And big anything, be it government, business, and so on, is always run by people who come from the same power centres that minorities are trying to speak up against.
I think it is a LOT simpler than what you are suspecting. As someone in the comments has pointed out, mass reporting on certain types of articles or certain sources triggers automated review/removal of that content.
Normally, consensus reporting is a good signal that something is undesirable. In this case it has clearly been weaponized. This is a TUNING problem, and it seems social media companies have to find a way to balance it.
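To make the tuning point concrete, here is a minimal sketch of why a raw report-count trigger is gameable, and how weighting reports by reporter track record raises the cost of brigading. All function names, thresholds, and numbers are illustrative assumptions, not any platform's real system.

```python
def naive_should_hide(report_count, threshold=50):
    """Hide content once raw reports cross a fixed threshold.
    Trivially gamed: 50 coordinated accounts are enough."""
    return report_count >= threshold

def weighted_should_hide(reporter_accuracies, threshold=50.0):
    """Weight each report by the reporter's historical accuracy
    (fraction of their past reports upheld by human review).
    A brigade of throwaway accounts contributes almost no weight."""
    return sum(reporter_accuracies) >= threshold

# 60 coordinated reports from accounts whose past reports were
# almost never upheld (accuracy 0.05 each)
brigade = [0.05] * 60
# 60 organic reports from reporters with a solid track record
organic = [0.9] * 60

print(naive_should_hide(len(brigade)))   # True  -- the brigade wins
print(weighted_should_hide(brigade))     # False -- total weight is only ~3.0
print(weighted_should_hide(organic))     # True  -- total weight is ~54.0
```

The trade-off is exactly the tuning problem described above: weighting by reputation resists brigades, but it also slows down how quickly genuinely new reporters can get harmful content removed.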
My wild guess is JIDF [1], or a similar organization, is using multiple accounts to flag posts as violating community standards. Once it reaches a certain threshold of “downvotes”, some algorithm by Twitter/Instagram hides the content.
JIDF doesn't exist anymore, but others have replaced it, e.g. ACT.il; they use "hasbara" documents that describe how to use logical fallacies and how to twist history to win online arguments. It's a really disgusting document.
This just goes to show what might be triggering these false-positive auto-moderation actions, but it shouldn't justify them.
What I mean is that the model they use for flagging may infer that, because legitimate spam looks a certain way (comments on a Justin Bieber post are, AFAIK, unrelated to the Israeli-Palestinian conflict), similar content posted in a relevant context is also spam (or should be marked for deletion for other reasons).
It's a bit like if Google's PageRank algorithm identified a website being copied by multiple websites and punished the site that was the first to publish the content, instead of the other way around. If that were the case, it would obviously have been exploited to bury competitors' pages.
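To illustrate the analogy, here is a simplified sketch of duplicate-content attribution that credits the earliest-seen copy as canonical rather than punishing it. This is an assumed toy model (the hashes, URLs, and timestamps are made up); real search-engine deduplication is far more involved.

```python
# content hash -> (url, crawl_time) of the earliest-seen copy
first_seen = {}

def canonical_url(content_hash, url, crawl_time):
    """Record a crawled page and return the canonical URL for its
    content: whichever copy was seen earliest keeps attribution."""
    best = first_seen.get(content_hash)
    if best is None or crawl_time < best[1]:
        first_seen[content_hash] = (url, crawl_time)
    return first_seen[content_hash][0]

canonical_url("h1", "original.example/post", 100)
# A scraper copies the page later; it does not displace the original.
print(canonical_url("h1", "scraper.example/copy", 250))  # original.example/post
```

The exploit the comment describes would arise if the logic were inverted (later copies displacing the first), letting an attacker bury a competitor's page simply by mass-copying it.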
This is similar to the famous "come to Besiktas!" posts. Or whenever a Turkish or Egyptian footballer joins a certain team, you can be sure that the team's social media channels will be inundated by Turkish or Egyptian fans for the duration of the player's stay. In other words it is spamming, but it is not state-sponsored spamming, as it is with Israel.
There is currently also a Palestinian spam campaign on Facebook. Look at recent posts from popular media pages on completely unrelated topics. The comments are flooded with repetitive Palestinian messages.
I have no idea what actually happened here (maybe a mixture of things), and I know a lot of people don't necessarily think some powerful billionaire deserves any pity, or they may think he's actually involved in censoring things himself, but I kind of feel sympathy for Jack Dorsey. What a mess.
It's time to realize that media companies have been lying their asses off for many years. So many people still believe what their TV tells them, and will grasp for any plausible explanation that keeps their worldview intact, no matter how many twists of logic it takes.
Why do you think Internet media is different? News outlets have political agendas, and so do people who comment on Reddit or create websites; it's inherently biased.
The Guardian has recently written a series of articles about social media manipulation, but it seems to go under the radar because it doesn't affect rich countries. This issue is very pernicious and deadly, which is why "glitches" is not enough of an explanation. I think it's worth investigating whether inside operations are involved.
Social media manipulation (a bad, nefarious thing!) is hard to tell apart from social media moderation (a good, virtuous thing!).
Moderation is not just about removing illegal content (yes, most countries make certain content illegal, and publishing it, punishable). It's also about removing or de-emphasizing legal but unwanted content, various hooliganism, but also things the public sees as unwholesome, and flags as such. This, of course, also differs from country to country.
Israel / Arab conflict is one of the worst in this regard. One half of it will tell you about terrible oppressors who capture land, bring military force, and keep peaceful people under a blockade. And they'd be correct. The other side will tell about terrible oppressors who rain rockets on peaceful cities, engage in acts of terror, and proclaim the need to eradicate the other side. They'd also be right.
There is no way to make it peaceful and nice on Twitter when it is not peaceful and nice in real life, and hasn't been for the last 60 years or so. There's no way any algorithm would look "fair" to people leaning either way, or even to people who don't care enough to take sides.
Sorry, this is an intrusion of ugly reality that technical means cannot conceal. In this regard, there's no good, fair, nice solution for Twitter, or any other social media.
Happened with LinkedIn as well. https://www.linkedin.com/posts/activity-6797910928844230656-... So, as somebody pointed out, I guess it's mass reporting by a bunch of humans or bots who know the algorithm will trigger. Perhaps like a DDoS on social media posts.
There are pretty obvious propaganda and brigading efforts from Palestine's side on many subreddits like r/PublicFreakout -- small wonder that spam detectors are going crazy.
Obviously Israel is doing intelligence work too, but probably subtly enough that it doesn't get flagged.
The amount of blatantly antisemitic “the Jews control the media” trope comments in this thread is truly disturbing. If you have nothing more than insinuation to contribute, don’t.
You mean when they held a political leader to a much lower standard of conduct than every other person on the platform is subject to, including people with opposing political views?
When it comes to hot-button geopolitics, is there such a thing as a reverse Hanlon's Razor? Where you should be attributing such "glitches" to malice and not stupidity?
I'm no fan of the man, but banning Trump, in retrospect, will be a mistake. From now on, Instagram/Twitter/FB will have to start taking sides in civil wars, religious disputes, and geopolitical disagreements. The Israel/Palestine conflict now has a new front. The Ayatollah is literally calling for more missiles to be fired[1] at Israel on Twitter. The lingering question is how, exactly, do you deal with this?
> I'm no fan of the man, but banning Trump, in retrospect, will be a mistake. From now on
This precedent was set in the late 2000s when Facebook, Google, and Twitter started massive campaigns to investigate, report and remove Islamic extremist content on their platforms. A lot of legitimate non-extremist Islamic content and users got caught up in the censorship.
But the centralized platforms flourish, in many respects, due to being centralized.
Technical aspects aside, you know that there is one Twitter and one Facebook, where "everybody" is and "everything" gets reported.
Can you imagine searching a few dozen Mastodon instances for "everything"? A bunch of GNU Social nodes? A hundred key large Telegram groups?
The damned network effect pushes everything to a single biggest node :-\
ttmb | 4 years ago
In that sense, the "glitch" is that the removal algorithms are susceptible to being gamed.
goatsi | 4 years ago
If it has been given the "trusted reporter" status that similar organizations have it could certainly be abused at times like this.
newsclues | 4 years ago
I think both are occurring in social media networks to some extent
slg | 4 years ago
unknown | 4 years ago
[deleted]
prestigious | 4 years ago
leke | 4 years ago
inputError | 4 years ago
[deleted]
andyxor | 4 years ago
One doesn't have to be part of some government-sponsored conspiracy group for this, as some here allege.
Barrin92 | 4 years ago
darkwizard42 | 4 years ago
naruvimama | 4 years ago
Meaning a lot of "moderators" are conditioned to be anti-Modi and anti-Hindu, and pro-communist or pro-Islamic, in spite of India's tragic history with both.
bobthechef | 4 years ago
nelsondev | 4 years ago
1 - https://en.m.wikipedia.org/wiki/Jewish_Internet_Defense_Forc...
amoshi | 4 years ago
https://www.webcitation.org/query?url=http%3A%2F%2Fwww.newsw...
tartoran | 4 years ago
[deleted]
ars | 4 years ago
charlesju | 4 years ago
For example, look at the top comments on any Justin Bieber post lately: https://www.instagram.com/p/COwqsQRH3Rj/
This is probably happening all over the internet, and these posts got swept up by the latest bot-catching algorithms.
almog | 4 years ago
paganel | 4 years ago
nradov | 4 years ago
meowface | 4 years ago
dukeofdoom | 4 years ago
curiousgal | 4 years ago
TheGuyWhoCodes | 4 years ago
cblconfederate | 4 years ago
> https://www.theguardian.com/technology/2021/apr/13/facebook-...
Social media will continue to be important for societies, but the commercial platforms have failed to the point of being dangerous. We need to decentralize that.
nine_k | 4 years ago
amrrs | 4 years ago
diebeforei485 | 4 years ago
darthrupert | 4 years ago
bitcurious | 4 years ago
Nemrod67 | 4 years ago
Doesn't mean your local Jewish accountant goes to secret robed meetings to further their domination of the world :p
CommanderData | 4 years ago
It's simply censorship, not a glitch.
throwitaway12 | 4 years ago
kennywinker | 4 years ago
King-Aaron | 4 years ago
sneak | 4 years ago
It has not been fixed.
seumars | 4 years ago
BitwiseFool | 4 years ago
majjgepolja | 4 years ago
dvt | 4 years ago
[1] https://twitter.com/khamenei_ir/status/1392175039181623301
metalliqaz | 4 years ago
unknown | 4 years ago
[deleted]
mensetmanusman | 4 years ago
heavyset_go | 4 years ago
_____bee | 4 years ago
nine_k | 4 years ago
mdoms | 4 years ago
slavboj | 4 years ago
https://techcrunch.com/2015/03/20/from-the-8200-to-silicon-v...
say_it_as_it_is | 4 years ago