top | item 28684696


quantumBerry | 4 years ago

The internet itself is unmoderated in any useful sense for content, yet it has lived longer than most of these cheesy "moderated" products that seek to impose their morality on you.


munificent | 4 years ago

It looks like you're getting downvoted, but I think this is a good point and worth thinking about.

I believe one key difference here is group identity perception. If you like thinking in business terms, you could say "branding".

Facebook, Reddit, HN, Twitter, etc. all must care about content moderation because there is a feedback loop they have to worry about:

1. Toxic content gets posted.

2. Users who dislike that content see it and associate it with the site. They stop using it.

3. The relative fraction of users not posting toxic content goes down.

4. Go to 1.

Run several iterations of that, and if you aren't careful, your "free" site is now completely overrun and forever associated with one specific subculture. Tumblr -> porn, Voat -> right-wing extremism, etc.
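The four-step loop above can be sketched as a toy simulation. All the numbers here are made-up assumptions for illustration, not measurements of any real site:

```python
def simulate(total_users=1000, toxic_users=10, leave_prob=0.3, rounds=8):
    """Return the toxic fraction of the user base after each round.

    Assumes toxic posters never leave, while each round a fixed
    fraction of the remaining non-toxic users sees toxic content,
    associates it with the site, and quits (step 2).
    """
    fractions = []
    for _ in range(rounds):
        nontoxic = total_users - toxic_users
        total_users -= int(nontoxic * leave_prob)   # step 2: blame the site, leave
        fractions.append(toxic_users / total_users)  # step 3: toxic share grows
    return fractions

print(simulate())  # the toxic fraction climbs every round
```

Even starting at 1% toxic users, the toxic share climbs monotonically, because departures only ever come from the non-toxic side of the user base.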

Step 2 is the key step here. If a user sees some content they don't like and associates it with the entire site, it can tilt the userbase.

The web as a whole avoids that because "the web" is not a single group or brand in the minds of most users. When someone sees something horrible on the web, they think "this site sucks" not "the web sucks".

Reddit is an interesting example of trying to thread that needle with subreddits. As far as I can tell, Reddit as a whole isn't strongly associated with porn, but there are a lot of pornographic subreddits. During the Trump years, it did get a lot of press and negative attention around right-wing extremism because of The_Donald and other similar subreddits, but it has been able to survive that better than other apps like Gab or Voat.

There are still many many thriving, wholesome, positive communities on Reddit. So, if there is a takeaway, it might be to preemptively silo and partition your communities so that a toxic one doesn't take down others with it.

Nasrudith | 4 years ago

I personally see "plausible deniability" as the cynical actual distinction for what gets people to share blame, not actual affiliation or whose servers it runs on. Any number of objectionable sites run on AWS, and you basically need to be an international scandal, or be violating preexisting terms, to get booted. Think of the merchants selling malware to governments: Amazon's policies did not care whether it was legal, just whether you were doing something unauthorized. A wise move when international law is really more like the Pirate Code.

The interlinking between the pages themselves and common branding are what create the associations. Distributed Twitter alternatives like Mastodon can even share the same branding, but it is on a per-network basis and complex enough to allow for some "innocent" questionable connections.

majormajor | 4 years ago

On the contrary, the internet is heavily moderated in terms of UGC (user-generated content).

Traditional, non-social websites have a single author or a known group of authors. When one of them is defaced or modified, we call it "hacking," not "unmoderated content." We assume NASA's site has NASA-posted content. We assume Apple's site has Apple-posted content.

Sites with different standards for what they'd publish have been around for decades (for gore, for porn, etc) but many of these still exist in a traditional curated-by-someone fashion, or are more open to UGC but still have some level of moderation.

quantumBerry | 4 years ago

The internet is not moderated in any useful sense for content. Drug markets like White House Market, and Silk Road before it, have persisted for years. Tor and other darknet sites host content that is nearly universally disdained by governments and even most individuals, content so heinous I hesitate to even name it here (you and I both know some examples).

> We assume NASA's site has NASA-posted content. We assume Apple's site has Apple-posted content.

Trust in identity is not the same thing as useful moderation of content. That's useful moderation of identity.

> Sites with different standards for what they'd publish have been around for decades (for gore, for porn, etc) but many of these still exist in a traditional curated-by-someone fashion, or are more open to UGC but still have some level of moderation.

Those sites _choose_ to moderate their content; that doesn't preclude others that don't.

Supermancho | 4 years ago

The "internet" isn't liable, so moderation takes the form of transparent traffic shaping. When disruptions are small, costs are absorbed in aggregate by infrastructure owners (and user attention), until the traffic is literally moderated away with routing.

oblio | 4 years ago

Unmoderated? Probably 99% of traffic goes to the same top 500 sites, which are heavily moderated.

commandlinefan | 4 years ago

Maybe so (that sounds believable, anyway). What he's saying is that the other 1% is unmoderated because there's no central authority [1]. The problem here isn't that people will share bad things if you don't stop them, the problem is that you're in a position of being held responsible for something outside your control. If it's illegal, it should be reported (or found by law enforcement whose job it is to enforce the law) and if it's offensive, offer some user-side filtering.

[1] this is starting to change, though - Amazon took Parler offline completely at the hosting level. Although they eventually found another hosting provider, it's not unimaginable that in the near future, service providers will collaborate to moderate the underlying traffic itself.
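The user-side filtering suggested above can be sketched in a few lines. The `filter_feed` helper, the blocklist, and the sample posts are all hypothetical illustrations, not any real client's API:

```python
def filter_feed(posts, blocked_terms):
    """Return only posts containing none of the user's blocked terms.

    The service delivers everything; the user's own client decides
    what to hide, so no central authority has to judge the content.
    """
    return [
        post for post in posts
        if not any(term.lower() in post.lower() for term in blocked_terms)
    ]

feed = ["cat pictures", "SPAM offer inside", "rust release notes"]
print(filter_feed(feed, ["spam"]))  # the reader, not the host, drops the spam
```

The design point is that the blocklist lives with the user, so two users of the same unmoderated service can see entirely different feeds without the operator curating anything.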