angrygoat | 1 year ago

How are decentralised platforms managing abusive content? TikTok had some bumps in the road with this maybe five years back, but got it under control. I know I don't want to be scrolling through a video feed and come across illegal or unethical content.

That compliance aspect seems like one thing that pushes us towards centralised architectures for social media, but I'm guessing that AI models to screen images and videos are pretty widely available now and cheap to deploy?
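
For what it's worth, open-weights image classifiers you can self-host do exist now. A minimal sketch, assuming the Hugging Face transformers library and the community model Falconsai/nsfw_image_detection (the model choice and threshold here are my assumptions, not a recommendation):

    import sys
    from PIL import Image
    from transformers import pipeline

    # Open-weights NSFW classifier; any image-classification
    # checkpoint with similar labels would slot in here.
    classifier = pipeline("image-classification",
                          model="Falconsai/nsfw_image_detection")

    def screen(path: str, threshold: float = 0.8) -> bool:
        """True if the image should be held for human review."""
        results = classifier(Image.open(path))
        # results look like [{"label": "nsfw", "score": 0.97}, ...]
        nsfw = next((r["score"] for r in results if r["label"] == "nsfw"), 0.0)
        return nsfw >= threshold

    if __name__ == "__main__":
        print("hold for review" if screen(sys.argv[1]) else "ok")

Note that a classifier like this can only flag things for human review; genuinely illegal material (CSAM) is normally handled by hash matching such as PhotoDNA through vetted organisations, not by open models.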

mglz | 1 year ago

In these systems you pick what you see and don't rely on a third-party algorithm for that. If one instance allows the publication of things others disagree with, defederation is an option.
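
To make "defederation" concrete: on Mastodon it's a domain block, which an admin can apply through the admin API. A rough sketch (the endpoint exists in Mastodon 4.x; the instance URL, token, and domains are placeholders):

    import requests

    INSTANCE = "https://example.social"   # placeholder instance
    TOKEN = "ADMIN_TOKEN"                 # needs the admin:write:domain_blocks scope

    # "suspend" severs federation with the domain entirely;
    # "silence" merely hides its posts from public timelines.
    resp = requests.post(
        f"{INSTANCE}/api/v1/admin/domain_blocks",
        headers={"Authorization": f"Bearer {TOKEN}"},
        data={
            "domain": "abusive.example",
            "severity": "suspend",
            "public_comment": "repeated rule violations, unresponsive admins",
        },
        timeout=10,
    )
    resp.raise_for_status()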

Under no circumstances should we let AI filter the fediverse. Freedom is worth more than mild offense.

grishka | 1 year ago

Fediverse-based platforms usually just don't do any proactive moderation; they rely on users spotting violations and reporting them. The reports are federated, though. If admins of two servers disagree about what should be allowed, that usually ends in a federation block between them.
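
Concretely, Mastodon delivers a federated report to the origin server's inbox as an ActivityPub Flag activity, roughly this shape (all URIs are placeholders, and the exact field set varies):

    # Approximate payload of a federated report ("Flag" activity).
    flag_activity = {
        "@context": "https://www.w3.org/ns/activitystreams",
        "id": "https://example.social/reports/1#flag",  # hypothetical id
        "type": "Flag",
        # Mastodon sends reports from the instance actor, keeping
        # the individual reporter anonymous to the remote server.
        "actor": "https://example.social/actor",
        "content": "spam in replies",  # the reporter's comment
        "object": [
            "https://other.example/users/spammer",               # reported account
            "https://other.example/users/spammer/statuses/123",  # offending post
        ],
    }

The receiving admins are free to act on it or ignore it; nothing in the protocol forces an outcome, which is why persistent disagreement tends to end in that federation block.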

jeroenhd | 1 year ago

There is some proactive moderation on new servers, in my experience. Servers hosting certain controversial or unpleasant content are often blocked pre-emptively, for instance. Certain porn-oriented parts of the Fediverse basically exist as their own islands because nobody wants the moderation burden.

I think these federated platforms could use some kind of spam detection, though. A bunch of Japanese teenagers completely swamped most of the Fediverse as part of a shitty prank involving a Discord server, for instance, and there was that time someone automated posting CSAM across a few servers. It all feels rather 1980s-internet in a way: clearly not set up to deal with intentionally malicious people.
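
Even crude rate-limiting heuristics would catch waves like that. A toy sketch of a first-pass filter an instance could run on inbound posts, with entirely made-up thresholds:

    import time
    from collections import defaultdict, deque

    WINDOW = 300        # seconds
    MAX_POSTS = 20      # posts per window before holding for review
    MAX_MENTIONS = 8    # mentions in a single post

    recent = defaultdict(deque)  # actor URI -> timestamps of recent posts

    def suspicious(actor: str, mentions: int, account_age_days: float) -> bool:
        """First-pass heuristic: high volume, or mention-bombing from a new account."""
        now = time.time()
        q = recent[actor]
        q.append(now)
        while q and now - q[0] > WINDOW:
            q.popleft()
        too_fast = len(q) > MAX_POSTS
        mention_bomb = account_age_days < 1 and mentions > MAX_MENTIONS
        # Flag for moderator review rather than rejecting outright.
        return too_fast or mention_bomb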