top | item 33861240


debrice | 3 years ago

I think that’s a real threat to every remote social experience we have. It could very well take down the whole internet experience as we know it today.

It’s strangely exciting, but something we thought could last for centuries might, like so many other trends, only last a few decades.


didericis | 3 years ago

I’ve been operating under the assumption that most online discourse has been heavily and increasingly gamed in a variety of ways for at least the past 6 years, and that simpler versions of what we’re seeing publicly released now have been in use for a long time.

The best thing about these releases is more widespread public knowledge of that reality, and hopefully less deference toward what appears to be majority online opinion.

This is very optimistic, and almost certainly delusional, but I maintain a bit of hope that all the increased noise might actually force people to read much more carefully. People may try to gather much more context about the writer they’re conversing with in order to verify they’re talking to a human, and in doing so be forced to actually expose themselves to that larger context.

Example: you reply to a reddit user who trots out opinion X in an annoying, rehashed way, and check their comment history for evidence of bot-like behavior. In doing so you see that they also hold opinion Y, which you agree with, and that they live in the city you grew up in, and you view them as more human and try to actually be polite. If people started doing that, it wouldn’t even matter whether the profile was a bot (it would probably become impossible to tell anyway); it would mean people were taking more context into account.

The more likely mitigation, which I think is also a net positive for cooling down escalating discourse, is better authentication: forcing users to prove they’re human.

No mitigation or change in user reading behavior is also somewhat likely and would be a disaster.

PebblesRox | 3 years ago

I think the problem with reading more carefully is that it's not going to have a strong enough ROI in settings where there's a lot of bot-generated content.

If I have to do a careful reading of 20 comments to find 1 good one, pretty soon I'm not going to be motivated to go to the trouble. Either I'll stop being careful or I'll stop reading entirely.

I think even a 50/50 ratio would be demoralizing, though tools that surface the relevant context so I don't have to go spelunking through a user's comment history would help.
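The reading-cost argument above can be made concrete with a back-of-the-envelope sketch (not from the thread; the function name and the independence assumption are mine). If each comment is independently "good" with some probability, the expected number of comments scanned per good one is just the geometric-distribution mean:

```python
def expected_scans_per_good_comment(good_fraction: float) -> float:
    """Mean number of comments a reader must scan to find one good
    (human, substantive) comment, assuming each comment is independently
    good with probability `good_fraction`. Geometric-distribution mean."""
    if not 0 < good_fraction <= 1:
        raise ValueError("good_fraction must be in (0, 1]")
    return 1 / good_fraction

# 1-in-20 good comments: careful reading costs ~20 scans per payoff.
print(expected_scans_per_good_comment(0.05))  # 20.0
# Even a 50/50 ratio doubles the reader's effort per good comment.
print(expected_scans_per_good_comment(0.5))   # 2.0
```

This is why the ratio matters so much: the cost of careful reading scales as the reciprocal of the good-comment fraction, so even modest amounts of bot-generated filler inflate the per-payoff effort quickly.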