top | item 17612012

mkempe | 7 years ago

I'd like to understand what reasoning one goes through to embark on such an illiberal path.

(Note: not arguing that they can't legally do it, obviously a private business is free to promote or hinder various messages on its own platform. I'm wondering about internal justifications, and logical consequences. Ethics and morals, in other words.)

DanHulton | 7 years ago

From the article: “I'd emphasize that our technology is based on account behavior not the content of Tweets.”

It sounds like it's an automated system attempting to de-emphasize certain uses of the platform, not someone standing there going "Okay, Republican? Shadowbanned. Next? Republican? Shadowbanned. Next?"

i.e. if you don't want to be shadowbanned, maybe stop acting poorly.

ZainRiz | 7 years ago

Most likely this is an AI/ML-based system that uses natural language processing to determine whether an account is exhibiting "bad behavior".

That means they first identified many badly behaving accounts, trained an ML model on them to auto-detect other accounts behaving the same way, and then banned the matches.

Of course, these ML models are famously hard to decipher.

The implication?

Twitter could disclose the criteria they used to select the original set of accounts that trained their model (most likely accounts that had been manually banned in the past, bans which went unnoticed by the media). But they can't say "our model auto-bans accounts which do X, Y, and Z," because the logic a trained ML model actually uses is too opaque for any person to spell out.
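The pipeline described above (label a set of manually banned accounts, train a classifier on their behavior, then flag lookalike accounts) can be sketched with a toy model. Everything here is hypothetical: the behavioral features, the synthetic data, and the plain logistic regression are illustrative stand-ins, not Twitter's actual system.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical behavioral features per account:
# [tweets_per_hour, reply_ratio, duplicate_tweet_ratio]
normal  = rng.normal([1.0, 0.30, 0.05], 0.1, size=(100, 3))
abusive = rng.normal([8.0, 0.90, 0.60], 0.1, size=(100, 3))
X = np.vstack([normal, abusive])
y = np.concatenate([np.zeros(100), np.ones(100)])  # 1 = manually banned in the past

# Plain logistic regression fit by gradient descent on the labeled accounts
w, b = np.zeros(3), 0.0
for _ in range(2000):
    p = 1 / (1 + np.exp(-(X @ w + b)))   # predicted probability of "bad behavior"
    w -= 0.1 * (X.T @ (p - y)) / len(y)  # gradient step on weights
    b -= 0.1 * np.mean(p - y)            # gradient step on bias

def flag(account):
    """True if the trained model would flag this account's behavior."""
    return 1 / (1 + np.exp(-(account @ w + b))) > 0.5

print(flag(np.array([7.5, 0.85, 0.55])))  # behaves like the banned group
print(flag(np.array([1.1, 0.25, 0.04])))  # behaves like the normal group
```

Note that even in this tiny example the decision rule lives in the learned weights `w`, not in any human-readable list of criteria, which is the commenter's point: the training set can be described, but the resulting rule can't be stated as "accounts which do X, Y, and Z".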

mkempe | 7 years ago

[deleted]

lower | 7 years ago

From the article:

> “I'd emphasize that our technology is based on account behavior not the content of Tweets.”

dmix | 7 years ago

That's a very vague statement, and it doesn't explain why this phenomenon keeps occurring with Republican accounts. Besides, it's very easy to create rules, enforce them only selectively, and then dismiss criticism with "we're just enforcing the law," as we've seen with over-policing in black communities and excessive probation terms in sentencing. Why not be more specific? They also said:

> “We are aware that some accounts are not automatically populating in our search box and shipping a change to address this.”

So is it a problem with Twitter or not? What code are they "shipping" to fix this? What type of "behaviour" gets you shadow-banned? How is it enforced (manually by humans at Twitter, by user reports, by machine learning, etc.)?

dragonwriter | 7 years ago

A publisher or distributor of content provided by third parties selecting which content they wish to relay rather than being content blind is not taking an illiberal path (except, perhaps, if they do so to selectively promote an illiberal viewpoint, but that's not the level you seem to be challenging.)

In fact, it's key to the liberal marketplace of ideas and exactly the behavior freedom of speech and the press exists to support.

repolfx | 7 years ago

It's easy. The reasoning goes like this (I don't agree with this reasoning, I'm just spelling it out).

It starts with a perceived axiom, or an intuition if you like: People are very different inside. Although the difference between the best people and worst people is very large, anyone can improve themselves through reflection and thought and listening to the ideas of people better than themselves.

But logically, not everyone does so. Some people are much smarter than others. And some are much more moral than others. Also, if listening to good people can make someone better, then listening to bad people can make someone worse.

This makes certain kinds of people very dangerous. Smart but immoral people can easily influence less smart people and convert them into immoral people too. In fact this is almost sure to happen if smart+immoral people speak and are listened to, because the vast majority of the population is pliable and easily persuaded. They decide how to vote by reading newspapers and even tweets. If bad people have access to a platform like Twitter, then their badness will spread like a virus.

But we don't want people to become bad. We want them to become good. That means it's important to ensure that only good people have platforms on which they can speak and be heard. If we don't deny bad people platforms, then that is itself a form of immoral behaviour, no better than not washing our hands and spreading germs that way. We wouldn't spread physical illness so we shouldn't spread the mental illness of immorality either.

What is immorality? Well, a strong sign someone is a bad person is that they argue against obvious and simple solutions to important problems. They dissemble and prevaricate and may even try to block implementation of solutions. Republicans seem to often deny and fight against solutions to important problems like healthcare, or the plight of refugees from the third world. Whilst they have justifications, they often seem indirect and slippery, like "this change will make things worse rather than better because government is incompetent", although that's hard to believe because smart and moral people are often attracted to public service (smart and immoral people are in contrast attracted to profit-seeking endeavours).

Therefore Republicans seem immoral, but they keep winning elections so they must also be smart. Smart + immoral = dangerous. If they can speak, they might brainwash good people like my friends into becoming Republican too, and that would be terrible, because how will we cure cancer and solve hunger with Republicans in charge?

So - the only good and moral thing left to do is find ways to suppress their speech. It's for the greater good.