diegoveralli | 4 years ago
But it's very hard to see how these social network interventions are well thought out and have considered all the possible side effects, many of which are mentioned in other comments. I suppose they're tracking the data and will change course if this doesn't work as they expected.
Still, rather than being a case of deplatforming harmful speech, these look like amputations of entire conversations from the service, perhaps to take the spotlight off the degenerate nature of YouTube as a human communication platform.
It's clear that the ability of bad actors to screw up an entire system, as you say, is at least partially enabled by YouTube's incentives and the features those incentives lead to (the old radicalization = engagement fiasco, for example). A better way to combat disinformation would be to understand how YouTube often brings out the worst in its viewers, and to fix that. But it's not clear who has the incentive or the obligation to do it.
The alternative is to move the conversation to other types of social networks, with other incentives. But that seems even harder.
What is clear to me is that having most of the world get its news from a service that algorithmically (I think; it's unclear from the article) bans a fully vaccinated, pro-vaccine M.D. for suggesting that people who have been infected have immunity (https://news.ycombinator.com/item?id=28693407), to give one example, is not ideal. If they choose to do this instead of tackling the problems in their recommendation system, which rewards disinformation and other harmful content across all sorts of topics, not just vaccines, then it's even worse.