
Ex-Reddit CEO on Twitter moderation

1175 points | kenferry | 3 years ago | twitter.com | reply

937 comments

[+] dang|3 years ago|reply
All: this is an interesting submission—it contains some of the most interesting writing about moderation that I've seen in a long time*. If you're going to comment, please make sure you've read and understand his argument and are engaging with it.

If you dislike long-form Twitter, here you go: https://threadreaderapp.com/thread/1586955288061452289.html - and please don't comment about that here. I know it can be annoying, but so is having the same offtopic complaints upvoted to the top of every such thread. This is why we added the site guideline: "Please don't complain about tangential annoyances—e.g. article or website formats" (and yes, this comment is also doing this. Sorry.)

Similarly, please resist being baited by the sales interludes in the OP. They're also offtopic and, yes, annoying, but this is why we added the site guideline "Please don't pick the most provocative thing in an article to complain about—find something interesting to respond to instead."

https://news.ycombinator.com/newsguidelines.html

* even more so than https://news.ycombinator.com/item?id=33446064, which was also above the median for this topic.

[+] ufo|3 years ago|reply
In the US, where Twitter & Facebook are dominant, the current consensus in the public mind is that political polarization and radicalization are driven by the social media algorithms. However, I have always felt that this explanation was lacking. Here in Brazil we have many of the same problems, but the dominant social medium is WhatsApp group chats, which have no algorithms whatsoever (other than invisible spam filters). I think Yishan is hitting the nail on the head by focusing the discussion on user behavior instead of on the content itself.
[+] jameskilton|3 years ago|reply
Every single social media platform that has ever existed makes the same fundamental mistake. They believe that they just have to remove or block the bad actors and bad content and that will make the platform good.

The reality is everyone, myself included, can be and will be a bad actor.

How do you build and run a "social media" product when the very act of letting anyone respond to anyone with anything is itself the fundamental problem?

[+] bambax|3 years ago|reply
You're confusing bad actors with bad behavior. Bad behavior is something good people engage in from time to time because they get really worked up about a specific topic or two. Bad actors are people who behave badly all the time. There may be some of those, but they're far from the majority (and yes, sometimes normal people turn into bad actors because they get so upset about a given thing that they can't talk about anything else anymore).

OP's argument is that you can moderate content based on behavior, in order to bring the heat down and the signal-to-noise ratio up. I think it's an interesting point: it's neither the tools that need moderating, nor the people, but conversations (one by one).

[+] gambler|3 years ago|reply
It's not a mistake. It's a PR strategy. Social media companies are training people to blame content and each other for the effects that are produced by design, algorithms and moderation. This reassigns blame away from things that those companies control (but don't want to change) to things that aren't considered "their fault".
[+] dgant|3 years ago|reply
This is something Riot Games has spoken on: the observation that ordinary participants can have a bad day here or there, and that forgiving corrections can preserve their participation while reducing future incidents.
[+] ajmurmann|3 years ago|reply
It sounds like an insurmountable problem. What makes this even more interesting to me is that HN seems to have this working pretty well. I wonder how much of it has to do with clear guidelines about what should be valued and what shouldn't, and with having a community that buys into that. For example, one learns quickly that Reddit-style humor comments are frowned upon because the community enforces it with downvotes and, frequently, explanations of etiquette.
[+] phillipcarter|3 years ago|reply
> The reality is everyone, myself included, can be and will be a bad actor.

But you likely aren't, and most people likely aren't either. That's the entire premise behind removing bad actors and spaces that allow bad actors to grow.

[+] stcredzero|3 years ago|reply
The original post is paradoxical in the very way it talks about social media being paradoxical.

He observes that social media moderation is about signal-to-noise. Then he himself introduces off-topic noise. Then he reaches conclusions that seem to ignore his original one about it being an S/N problem.

Chiefly, he doesn't show how a "council of elders" is necessary to solve S/N problems.

Strangely enough, Slashdot seems to have had a system that worked pretty well back in the day.

[+] kjkjadksj|3 years ago|reply
What about Fox News? AM radio? These are bastions of radicalization, but they don't let anyone come on and say anything. At the end of the day, the sort of rhetoric played by these groups is taught in university communications classes as a way to exert influence. It's all just propaganda, and that can come in the form of a pamphlet, or a meeting in a town hall, or from some talking head on TV, or a tweet. Social media is just another avenue for propaganda to manifest, just as the printing press was.
[+] bnralt|3 years ago|reply
Some people are much more likely to engage in bad behavior than others. The thing is, people who engage in bad behavior are also much more likely to be "whales," excessive turboposters who have no life and spend all day on these sites.

Someone who has a balanced life, who spends time at work, with family, in nature, who only occasionally goes online, uses most of their online time for edification, and spends 30 minutes writing a reply if they decide one is warranted - that type of person is going to have a minuscule output compared to the whales. The whales are always online, thoughtlessly writing responses and upvoting without reading articles or comments. They have a constant firehose of output that dwarfs other users'.

Worth reading "Most of What You Read on the Internet is Written by Insane People"[1].

If you actually saw these people in real life, chances are you'd avoid interacting with them. People seeing a short interview with the top mod of antiwork almost destroyed that sub (and led to the mod stepping down). People say the internet is a bad place because people act badly when they're not face to face. That might be true to some extent, but we're given online spaces where it's hard to avoid "bad actors" (or people who engage in excessive bad behavior) the same way we would in person.

And these sites need the whales, because they rely on a constant stream of low quality content to keep people engaged. There are simple fixes that could be done, like post limits and vote limits, but sites aren't going to implement them. It's easier to try to convince people that humanity is naturally terrible than to admit they've created an environment that enables - and even relies on - some of the most unbalanced individuals.

[1] https://www.reddit.com/r/slatestarcodex/comments/9rvroo/most...
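
A minimal sketch of the post and vote limits mentioned above, assuming a simple per-user daily counter (the cap values and names here are invented for illustration):

    from collections import defaultdict
    from datetime import date

    # Illustrative daily caps; real values would need tuning.
    MAX_POSTS_PER_DAY = 20
    MAX_VOTES_PER_DAY = 100

    class DailyLimiter:
        def __init__(self):
            # (user_id, day) -> {"posts": n, "votes": n}
            self.counts = defaultdict(lambda: {"posts": 0, "votes": 0})

        def allow(self, user_id, action):
            # action is "posts" or "votes"
            key = (user_id, date.today())
            cap = MAX_POSTS_PER_DAY if action == "posts" else MAX_VOTES_PER_DAY
            if self.counts[key][action] >= cap:
                return False  # over the cap: reject, or queue for review
            self.counts[key][action] += 1
            return True

A whale posting hundreds of times a day hits the cap almost immediately; a balanced user never notices it exists.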

[+] dfxm12|3 years ago|reply
> How do you build and run a "social media" product when the very act of letting anyone respond to anyone with anything is itself the fundamental problem?

This isn't the problem as much as giving bad actors tools to enhance their reach. Bad actors can pay to get a wider reach, or get/abuse a mark of authority: a special tag on their handle, being highlighted in a special place within the app, gaming the algorithm that promotes some content, etc. Most of these tools are built into the platform. Some, though, like sock puppets, can be detected but aren't necessarily built-in functionality.

[+] P_I_Staker|3 years ago|reply
At the very least, you could be susceptible to overreacting because of an emotionally charged issue. E.g. Reddit's Boston Marathon bomber disaster, when they started trying to round up brown people (the actual perp "looked white").

Maybe that wouldn't be your crusade, and maybe you would think you were standing up for an oppressed minority. You get overly emotional, and you could be prone to making some bad decisions.

People act substantially differently on Reddit vs. Hacker News; honestly, I have to admit to being guilty of it. Some of the cool heads here are probably simultaneously engaged in flamewars on Reddit/Twitter.

[+] matt_s|3 years ago|reply
The business plan of massive user scale, user-generated content, and user "engagement" with ad-driven revenue leads to the perceived issues around polarization and content moderation. That and the company structure are the fundamental problems behind what we see on social media. The data about users is the product sold to advertisers. The platform only cares about moderation insofar as it supports the goal of more ad revenue; that is why Yishan said spam moderation was job #1 - it's more harmful to ad revenue than users with poor behavior.

If a social media company's mission is to have no barriers, where anyone and everyone can share ideas and information and "all are welcome", then maybe a company structure like a worker cooperative [0] would be a better match for that mission statement. No CEO who gets massive pay/stock; instead, employees are owners. All employees. They decide what features/projects the company does, how to allocate resources, how to moderate content, etc.

[0] https://en.wikipedia.org/wiki/Worker_cooperative

[+] visarga|3 years ago|reply
> The reality is everyone, myself included, can be and will be a bad actor.

Customised filters for everyone - but filters completely under the control of the user, maybe running locally. We can wrap ourselves in a bubble, but better that than having a bubble designed by others.

I think AI will make spam irrelevant over the next decade by switching from searching and reading to prompting the bot. You never need to interface with the filth; you can have your polite bot present the results however you please. It can be your conversation partner, and you get to control its biases as well.

Internet <-> AI agent <-> Human

(the web browser of the future, the actual web browser runs in a sandbox under the AI)
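
As a rough sketch of that Internet <-> AI agent <-> Human pipeline, here is a user-controlled filter with a trivial keyword scorer standing in for the locally run model (all names and rules are hypothetical):

    # score_item is a stand-in for a local model; the point is that the
    # preferences, and therefore the bubble, belong to the user.

    def score_item(item, preferences):
        text = item["text"].lower()
        score = 0
        score -= sum(term in text for term in preferences["blocked_terms"])
        score += sum(topic in text for topic in preferences["liked_topics"])
        return score

    def personal_feed(raw_items, preferences, threshold=0):
        # Only items scoring above the user's own threshold get through.
        return [item for item in raw_items
                if score_item(item, preferences) > threshold]

    prefs = {"blocked_terms": ["outrage"], "liked_topics": ["gardening"]}
    feed = personal_feed([{"text": "Gardening tips for fall"}], prefs)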

[+] jbirer|3 years ago|reply
>How do you build and run a "social media" product when the very act of letting anyone respond to anyone with anything is itself the fundamental problem?

That's...that's not a problem.

[+] Melatonic|3 years ago|reply
Not true at all - everyone has the capacity for bad behaviour in the right circumstances, but most people are not, in my opinion, there intentionally to be trolls.

There is a minority who love to be trolls and to get any big reaction out of people (positive or negative). Those people are the problem. But they are also often very good at evading moderation, or at lying in wait and toeing the line between bannable offences and ever so slightly controversial comments.

[+] paul7986|3 years ago|reply
Having a verified public Internet/Reputation ID system for those who want to be bad or good publicly is one way!

All others are just trolls not backed up by their verified public Internet/Reputation ID.

[+] onion2k|3 years ago|reply
> The reality is everyone, myself included, can be and will be a bad actor.

Based on this premise we can conclude that the best way to improve Reddit and Twitter is to block everyone.

[+] esotericimpl|3 years ago|reply
Charge them $10 to create an account (anonymous, real, parody, whatever). Then, if they break a rule, give them a warning; on the second rule break, a 24-hour posting suspension; three strikes and the account is permanently banned.

Let them re-register for $10.

Congrats, I just solved spam, bots, assholes, and permanent line-steppers.
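
A minimal sketch of that fee-plus-strikes scheme (the escalation steps are from the comment; the rest is invented for illustration):

    SIGNUP_FEE_USD = 10  # paid again on every re-registration

    class Account:
        def __init__(self):
            self.strikes = 0
            self.banned = False

        def record_violation(self):
            self.strikes += 1
            if self.strikes == 1:
                return "warning"
            if self.strikes == 2:
                return "24-hour posting suspension"
            self.banned = True  # third strike: permanent ban
            return "permanent ban; re-register for $%d" % SIGNUP_FEE_USD

The fee does the moderating: each ban costs a repeat offender another $10, and throwaway spam accounts stop being free.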

[+] Covzire|3 years ago|reply
Give the user exclusive control over what content they can see. As far as bans are concerned, the platform should act against users only when the law requires it.

Everything else, like being allowed to spam or post too quickly, is a bug, and bugs should be addressed in the open.

[+] Invictus0|3 years ago|reply
I'm not a bad actor, I only have 3 tweets and they're all reasonable IMO. So your premise is wrong.
[+] motohagiography|3 years ago|reply
I've had to give this some thought for other reasons, and after a couple decades solving analogous problems to moderation in security, I agree with yishan about signal to noise over the specific content, but what I have effectively spent a career studying and detecting with data is a single factor: malice.

It's something every person is capable of, and it takes a lot of exercise and practice with higher values to reach for something else when your expectations are challenged, and often it's an active choice to recognize the urge and act differently. If there were a rule or razor I would make on a forum or platform, it's that all content has to pass the bar of being without malice. It's not "assume good intent," it's recognizing that there are ways of having very difficult opinions without malice, and one can have conventional views that are malicious, and unconventional ones that are not. If you have ever dealt with a prosecutor or been on the wrong side of a legal dispute, these are people fundamentally actuated by malice, and the similar prosecution of ideas and opinions (and ultimately people) is what wrecks a forum.

It's not about being polite or civil, avoiding conflict, or even avoiding mockery and some very funny and unexpected smackdowns. It's a quality that, because we are all capable of it, I think we are also able to know when we see it. "Hate" is a weak substitute because it is so vague we can apply it to anything, but malice is ancient and essential. Of course, someone malicious can just redefine malice, the way they have redefined other things, and use it as an accusation - because to them words have no meaning other than as a means in struggle - but really, you can see when someone is actuated by it.

I think there is a point where a person decides, consciously or not, that they will relate to the world around them with malice, and the first casualty of that is an alignment to honesty and truth. What makes malice useful as a criterion is that you can address it directly and restore an equilibrium in the discourse, whereas accusations of hate and the like are irrevocable judgments. Given its applicability, I wonder if this may be the tool.

[+] kmeisthax|3 years ago|reply
This is a very good way to pitch your afforestation startup accelerator in the guise of a talk on platform moderation. /s

I'm pretty sure I've got some bones to pick with yishan from his tenure on Reddit, but everything he's said here is pretty understandable.

Actually, I would like to develop his point about "censoring spam" a bit further. It's often said that the Internet "detects censorship as damage and routes around it". This is propaganda, of course; a fully censorship-resistant Internet is entirely unusable. In fact, the easiest way to censor someone online is through harassment, or DDoS attacks - i.e. have a bunch of people shout at you until you shut up. Second easiest is through doxing - i.e. make the user feel unsafe until they jump off platform and stop speaking. Neither of these require content removal capability, but they still achieve the goal of censorship.

The point about old media demonizing moderation is something I didn't expect, but it makes sense. This is the same old media that gave us cable news, after all. Their goal is not to inform, but to allure. In fact, I kinda wish we had a platform that explicitly refused to give them the time of day, but I'm pretty sure it's illegal to do that now[0], and even back a decade ago it would be financial suicide to make a platform only catering to individual creators.

[0] For various reasons:

- The EU Copyright Directive imposes an upload filtering requirement on video platforms that needs cooperation with old media companies in order to implement. The US is also threatening similar requirements.

- Canada Bill C-11 makes Canadian content (CanCon) must-carry for all Internet platforms, including ones that take user-generated content. In practice, it is easier for old media to qualify as CanCon than actual Canadian individuals.

[+] digitalsushi|3 years ago|reply
I can speak only at a Star Trek technobabble level on this, but I'd like to be able to mark other random accounts as "friends" or "trusted". Anything they upvote or downvote becomes a factor in whether I see a post or not. I'd also be upvoting/downvoting things and serving as a possible friend/trusted account for others.

I'd like a little metadata with my posts, such as how controversially my network voted on them. The ones that are out of calibration I can view, see the responses, and then see whether my network has changed. It would be nice to click on a friend and get a report across months of how similarly we vote. If we start to drift, I can easily cull them and get my feed cleaned up.
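
A hedged sketch of what that trust-weighted scoring could look like, assuming each post carries its list of (voter, vote) pairs (the weighting is invented for illustration):

    # A trusted friend's vote counts more than a stranger's.
    TRUST_WEIGHT = 3.0  # illustrative multiplier

    def personal_score(post_votes, trusted):
        # post_votes: list of (voter_id, vote) pairs, vote is +1 or -1
        score = 0.0
        for voter, vote in post_votes:
            weight = TRUST_WEIGHT if voter in trusted else 1.0
            score += weight * vote
        return score

    def vote_similarity(my_votes, their_votes):
        # Fraction of co-voted items we agreed on, for the drift report.
        # my_votes/their_votes: dicts mapping item_id -> vote.
        shared = my_votes.keys() & their_votes.keys()
        if not shared:
            return None
        agree = sum(my_votes[i] == their_votes[i] for i in shared)
        return agree / len(shared)

The similarity report is what makes culling cheap: when a friend's score drifts, unmark them as trusted and the feed recalibrates itself.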

[+] jacobsenscott|3 years ago|reply
The solution is simple: only show users tweets from people they follow. People may say Twitter can't make money this way, but with this model you don't need much money. You don't need moderation, or AI, or massive infrastructure, or tracking, etc. You don't need managers or KPIs or HR, or anything beyond an engineer or two and a server or two. Musk could pay for this forever and it would never be more than a rounding error in his budget.

But this isn't what twitter is for. Twitter is for advertising.

[+] makeitdouble|3 years ago|reply
Like any simple solution, it becomes extremely complicated once you get into the details.

I follow official gov accounts notifying of policy changes, deadlines, etc.

- Should I see their retweets? Yes; they retweet relevant info from other gov accounts.

- Should I see replies to these tweets? Probably; there's useful info coming in the comments from time to time, in particular about situations similar to mine.

So as a user, I have valid reasons to want these two mechanisms. But apply them to shitposting accounts and it becomes a hellscape. And with users who bring in valuable info but sometimes shitpost, we're starting to need nuance. And so on.

We’re back to square one. The “simple” solution expanded to its useful form brings back the moderation issues we’re trying to flee.

[+] raz32dust|3 years ago|reply
This is such a gross simplification. First, even people you follow might post both wanted and unwanted content, and the platform will be more useful if it can somehow show me the things I want to see. Second, it overlooks the content-creator side of things. How does a new person without any followers start gaining them? Or, vice versa, how does a new person who doesn't know whom to follow yet find anyone? People keep saying this is what they want from Twitter. But Twitter is not only for them; it is valuable for these other use cases too.
[+] alkonaut|3 years ago|reply
Yes.

I only see tweets from people I follow. There is zero chance I'd use the Twitter app/site default timeline, look at "trending" topics, etc. I just follow a hundred or so people and see their tweets in chronological order, because I use a sane client (in my case Tweetbot, but there are others).

Content/people discovery is not a problem because of retweets. Almost all the people I follow I follow because people I already followed retweeted or quoted them. Then I look at that profile, the content they write, and if they are interesting, I follow them too.

If someone produces content I don't like, for whatever reason, I unfollow them. That includes content they quote or retweet too, obviously.

[+] Gigachad|3 years ago|reply
This won’t work because it’s not good enough to just not see something you don’t like. You have to ensure that no one else can see the thing you don’t like as well. It should be deplatformed at the IP level.
[+] previnder|3 years ago|reply
Something like this, where the feed is ordered by time, has the added advantage of having a clear cue for when to stop scrolling. When you reach a post from yesterday, you know it's time to stop.
[+] emodendroket|3 years ago|reply
Yeah no that would not actually solve the problem.
[+] jxi|3 years ago|reply
I do want discovery though.
[+] MichaelZuo|3 years ago|reply
There are some neat ideas raised by Yishan.

One is "put up or shut up" for appeals of moderator decisions.

That is, anyone who wishes to appeal needs to also consent to have all their activities on the platform relevant to the decision revealed publicly.

It definitely could prevent later accusations of secretiveness or arbitrariness. And it probably would also make users think more in marginal cases before submitting.

[+] whitexn--g28h|3 years ago|reply
This is something that occurs on Twitch streams sometimes. While it can be educational for users to see why someone was banned, some appeals are just attention-seeking. Occasionally, though, it exposes the banned user's or, worse, a victim's personal information (e.g. mental health issues, age, location) and can lead to both users being targeted and to bad behaviour by the audience. For example: Bob is banned for bad behaviour towards Alice (threats, doxxing); by making that public you are not just impacting Bob, but could also put Alice at risk.
[+] wyldberry|3 years ago|reply
This also used to be relatively popular in the early days of League of Legends: people requesting a "Lyte Smite". Players would make inflammatory posts on the forums saying they were banned wrongly, and Lyte would come in with the chat logs, sometimes escalating to a permaban. I did always admire this system and thought it could be improved upon.

There's also a lot of drama around Lyte in his personal life, should you choose to go looking into that.

[+] ItsBob|3 years ago|reply
Here's a radical idea: let me moderate my own shit!

Twitter is a subscription-based system (by this, I mean that I have to subscribe to someone's content), so if I subscribe to someone and don't like what they say, then buh-bye!

Let me right-click on a comment/tweet (I don't use social media, so I'm not sure of the exact terminology the kids use these days) with the options of:

- Hide this comment

- Hide all comments in this thread from <name>

- Block all comments in future from <name> (you can undo this in settings).

That would work for me.
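
Those three options reduce to a couple of per-user sets applied client-side before rendering; a minimal sketch (field names are made up):

    def visible(comment, hidden_ids, blocked_authors, hidden_in_thread):
        # "Hide this comment"
        if comment["id"] in hidden_ids:
            return False
        # "Hide all comments in this thread from <name>"
        if (comment["thread_id"], comment["author"]) in hidden_in_thread:
            return False
        # "Block all comments in future from <name>"
        # (undo in settings = remove the name from this set)
        if comment["author"] in blocked_authors:
            return False
        return True

    def my_view(comments, hidden_ids, blocked_authors, hidden_in_thread):
        return [c for c in comments
                if visible(c, hidden_ids, blocked_authors, hidden_in_thread)]

Nothing is removed from the platform; each reader simply stops rendering what they have opted out of.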

[+] paradite|3 years ago|reply
I recently started my own Discord server and had my first experience of content moderation. The demographic is mostly teenagers. Some have mental health issues.

It was the hardest thing ever.

In the first incident, I chose to ignore a certain user being targeted by others for posting repeated messages. The person posted a very angry message and left.

Come the second incident, I thought I had learned my lesson. When a user was targeted, I tried to stop the others. But this time the people who had targeted the person wrote angry messages and left.

Someone asked a dumb question and I replied in good faith. The conversation went on and on and became weirder and weirder, until the person said "You shouldn't have replied to me" and left.

Honestly, at this point I am just counting on luck to keep it running.

[+] blfr|3 years ago|reply
> Because it is not TOPICS that are censored. It is BEHAVIOR.

> (This is why people on the left and people on the right both think they are being targeted)

An enticing idea, but simply not the case for any popular existing social network. And it's triply untrue on yishan's Reddit, which, through both administrative measures and moderation culture, targets any and all communities that do not share the favoured new-left politics.

[+] dbrueck|3 years ago|reply
At least one missing element is that of reputation. I don't think it should work exactly like it does in the real world, but the absence of it seems to always lead to major problems.

The cost of being a jerk online is too low - it's almost entirely free of any consequences.

Put another way, not everyone deserves a megaphone. Not everyone deserves to chime in on any conversation they want. The promise of online discussion is that everyone should have the potential to rise to that, but just granting them that privilege from the outset and hardly ever revoking it doesn't work.

Rather than having an overt moderation system, I'd much rather see where the reach/visibility/weight of your messages is driven by things like your time in the given community, your track record of insightful, levelheaded conversation, etc.
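
One possible sketch of reach driven by tenure and track record rather than overt removal (the formula is invented purely for illustration):

    import math

    def reach_multiplier(days_in_community, flags, helpful_marks):
        # Longevity earns reach slowly, with diminishing returns.
        tenure = math.log1p(days_in_community)
        # Being a jerk is no longer free: community flags cost double.
        track_record = helpful_marks - 2 * flags
        # Floor at 0.1: quieted, but never invisibly silenced.
        return max(0.1, tenure + 0.1 * track_record)

    # A post's visibility might then be:
    #   visibility = base_engagement * reach_multiplier(...)

Keeping the floor above zero preserves the "everyone has the potential" promise while making the megaphone something you earn.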

[+] kalekold|3 years ago|reply
I wish we could all go back to phpBB forums. Small, dedicated, online communities were great. I can't remember massive problems like this back then.
[+] ptero|3 years ago|reply
This topic was adjacent to the sugar and L-isomer comments, which probably influenced my viewpoint:

Yishan is saying that Twitter (and other social networks) moderate bad behavior, not bad content. They just strive for a higher SNR. It is just that specific types of content seem to be disproportionately responsible for starting bad behavior in discussions, and thus get banned. That sounds rational and, while potentially slightly unfair, looks totally reasonable for a private company.

But what I think is happening is that this specific moderation, on social networks in general and Twitter in particular, has pushed them along the R- (or L-) isomer path to the extent that a lot of content, however well presented and rationally argued, just cannot be digested. Not because it is objectively worse or leads to a nastier state, but simply because, deep inside, some structure is pointing in the wrong direction.

Which, to me, is very bad. Once you reach this state of mental R- and L- incompatibility, no middle ground is possible and the outcome is decided by an outright war. Which is not fun and brings a lot of casualties. My 2c.

[+] hunglee2|3 years ago|reply
"there will be NO relation between the topic of the content and whether you moderate it, because itʻs the specific posting behavior thatʻs a problem"

some interesting thoughts from Yishan, a novel way to look at the problem.

[+] ilyt|3 years ago|reply
It's kinda funny that many of the problems he's mentioning are exactly how moderation on Reddit currently works.

Hell, the newly revamped "block user" mode added extra gaslighting as a feature: a blocked person now can't reply to anyone under the comments of the person who blocked them, not just to that person. So anyone who doesn't like people discussing how they are wrong can just block those who disagree, and they will not be able to answer any of their comments.

[+] csours|3 years ago|reply
Is there a better name than "rational jail" for the following phenomenon:

We are having a rational, non-controversial, shared-fact-based discussion. Suddenly the first party in the conversation goes off on a tangent and starts making values- or emotion-based statements instead of factual ones. The other party then gets angry and/or confused. The first party then gets angry and/or confused.

The first party did not realize they had broken out of the rational jail in which the conversation was taking place; they thought they were still being rational. The second party noticed an idea that did not fit their rational dataset, detected a jailbreak, and was upset by it.

[+] im-a-baby|3 years ago|reply
A few thoughts:

1) Everyone agrees that spam should be "censored" because (nearly) everyone agrees on what spam is. I'm sure (nearly) everyone would also like to censor "fake news", but not everyone agrees on the definition of fake news, which is why the topic is more contentious than spam.

2) Having a "1A mode", where you view an unmoderated feed, would be interesting, if only to shut up people who claim that social media companies are supposed to be an idealistic bastion of "free speech." I'm sure most would realize the utility is diminished without some form of moderation.

[+] karaterobot|3 years ago|reply
There were indeed some intelligent, thoughtful, novel insights about moderation in that thread. There were also... two commercial breaks to discuss his new venture? Eww. While discussing how spam is the least controversial type of noise you want to filter out? I appreciate the good content, I'm just not used to seeing product placement wedged in like that.