I suffered from an early version of this when I was one of the top Elite Yelpers in my city about a dozen years ago. I started to notice that anytime I went somewhere, I spent much of my time looking for flaws to post about in my reviews instead of enjoying myself. I still noticed and mentioned the positive things, but for most people, seeing the good is the default when going out. Elevating flaws that would otherwise have been quickly dismissed was having a negative impact on my quality of life. Additionally, friends who also started doing reviews were becoming increasingly critical of everything and everyone in their lives, which made me realize I was doing so too. The entire social scene around the local Yelp community was getting increasingly hostile and petty with each other. When I stepped back and looked at it from the outside, it was shocking how much it was changing everyone's personalities.
In the end I quit writing reviews for Yelp and exited the community even though much of my social life at that point had become subsumed by it. Luckily I was still at an age where it was easy to rebuild my social life. For many, social media has become their primary social outlet and without it, they don't have anything else to turn to. Even without the pandemic, I suspect that it's more difficult to build an offline social life than it used to be, especially for the young.
This could be true, but I personally found negative reviews valuable. While I'm not familiar with Yelp, I tend to read negative/mixed reviews on Amazon much more carefully than positive ones, and constructive criticism on HN more carefully than blindly agreeing comments. I think we all want a "balanced" view, even though balance is about as elusive as it gets. While there's no single ruleset to define what's balanced and what's not, I hope we continue discussing it.
Years back, when Cracked used to be decent, there was an article by John Cheese (which one, I forget now) that contained the most profound sentence I have ever read in my life:
> The difference, according to the people who study this sort of thing, is recognizing whether your reaction is designed to actually help you fix the thing you're mad about, or just satisfying the adrenaline and dopamine rush you get from lashing out (the latter, after all, is what makes anger so addictive).
Anger is addictive.
Apparently it's from 2099, retro-loaded to present day.
My thoughts - why do likes accumulate to a user? You want likes to signal that the article/comment is interesting, but what value is there in assigning the number of likes to the person who posts it? Take Stack Overflow: I haven't used it for ages but my points are still there; I feel they should die off after a while. This would remove the karma chasing you get on Reddit and would, perhaps, make things more reasonable. It's nice to get the serotonin kick when you accumulate thousands of points on a post, but it's fundamentally meaningless and encourages weird behaviour.
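For illustration, the "points should die off after a while" idea could be as simple as exponential decay. A minimal sketch (the 90-day half-life and the function name are arbitrary assumptions, not any site's actual policy):

```python
import time

HALF_LIFE_DAYS = 90  # assumption: a vote loses half its value every 90 days

def decayed_karma(votes):
    """Sum a user's votes with exponential time decay so old points die off.

    `votes` is a list of (points, timestamp) pairs, with timestamps in
    seconds since the epoch as returned by time.time().
    """
    now = time.time()
    return sum(points * 0.5 ** ((now - ts) / 86400 / HALF_LIFE_DAYS)
               for points, ts in votes)

# A year-old 3-point comment contributes a fraction of a fresh one:
fresh = decayed_karma([(3, time.time())])
stale = decayed_karma([(3, time.time() - 360 * 86400)])
assert stale < fresh / 4
```

Under this rule a profile's score reflects recent contribution rather than accumulated history, which removes the incentive to sit on a pile of old points.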
I think you might be onto something. What examples do we have of products that purely use likes or upvotes that benefit the thing shared without benefitting the user?
This is precisely why I opted to exclude voting and "likes" from Sqwok (https://sqwok.im).
From the outset I wanted to build a discussion site that was entirely focused on live conversation, without the gimmicks that have become so ubiquitous across the social media landscape and beyond.
In the real world we signal our approval of a conversation by either engaging or walking away. Other people sense our liking of it by seeing our engagement, not a cheap binary sticker we throw up.
> In the real world we signal our approval of a conversation by either engaging or walking away
What does 'engaging' mean, exactly? You also signal your disapproval by engaging. If I tell someone to go fuck themselves after they tried to get me alone in a dark alley... I engaged with them, I didn't walk away with a tacit approval of their attempted assault and say I remained disengaged.
If someone creeps on me online and I tell them to fuck off, the creepy cunt they are... then I engaged with them too. If I didn't engage and instead walked away, I might have a stalker to deal with because I never said no.
`engage` is doing too much work and, in my mind, it's an inhuman term. You're applying the logic of a toilet cubicle to a human.
> In the real world we signal our approval of a conversation by either engaging or walking away. Other people sense our liking of it by seeing our engagement, not a cheap binary sticker we throw up.
So, is that what your ranking is based on for Sqwok? Clicks and comments? Is it having the effect you'd hoped?
It is not just social media; it is the way our society is structured. Even if social media did not exist, the press would still reward people who act outraged. Just look at newspaper op eds and cable TV talk shows with discussion panels. Sports fans know that ESPN sells outrage and living vicariously through athletes/celebs as a business model.
The crux of the issue is that society rewards attention whoring behavior. I would love to see our leaders promote more "do, not tell" behavior.
Yes! I grew up on Donahue and the Geraldo Rivera show, and Maury too. Those shows' ratings were entirely built upon outrage. Social media is an iteration which allows all of us to join the audience. Though I do admit to a romantic longing for the 90s internet. It was total discovery and horror and obscenity and surprise, but zero outrage. It is hard for me to be mad about anything online now after spending hours on Cult of the Dead Cow and having my art downloaded by thousands of East Germans to jack off to. It's confusing, and there are lies I tell myself, but those were good times. The solution is not censorship but meaningful engagement with how human subjectivity is actually constructed, versus the dissonance between the individuated sovereign subject we are told we are and how we need each other to construct an ideological order, a safe polite backdrop we all agree on so we can privately go on about our lives.
Outrageous, this can't be!
But seriously, it is saddening sometimes to see so much mindshare and engagement wasted on poorly thought out "solutions" to whatever issue is currently trending on Twitter. Even worse, the constant social media outrage machine seems to reduce the inherent kindness that most people have in them (before discovering Twitter).
It seems that the problem is more insidious than this study hints at. We know from behaviour on social media that some morally righteous outrage is satiated by furiously liking or sharing something you might agree with, but most people just harrumph and move on once they've satisfied their itch to "do something".
However there is a certain element in society that gets truly over-stimulated by this stuff - the over-amplified likes and shares are making it seem like the outraged community you are aligned with is much larger than it actually is. Or more precisely, the number of people actively engaged and willing to undertake actions to back up their likes looks much bigger than it actually is. This pushes our over stimulated friends into over-reactions.
This has now fomented a lot of extreme acts - hitting the streets, burning stuff, occupying buildings - safe in the completely misleading knowledge that your in-group is much larger than it actually is.
The outrage machine has been an interesting social experiment but now people are getting killed because of it and it's probably time to nuke Facebook and Twitter from orbit unless they start taking moderation seriously.
Nuking platforms isn’t the answer; the real answer is to educate the populace to recognize such phenomena and to train them not to be misled. Otherwise new outrage machines will just be created.
One problem is the design of the “Like” button. The target audience - the recipient - of the action of pressing this button is other people. They will see a higher number next to the thumbs up. If this is the visible result - this is what people will use it for: to show support for the opinions they agree with. The stronger the opinion - the stronger the support. Often the content people agree with is not actually what they find useful for themselves.
But what if we designed the “Like” button to be directed not at the others, but at yourself. I mean, when you press the “Like” button it has consequences for you. If you “Like” useless content, then you will get more useless content in the future. Would that change the dynamics of “liking”? Would it make people think more carefully about what they like if their future content recommendations depended on it?
I’m building just such a system at https://linklonk.com
When you upvote an item - you don’t simply increment the counter and make that item rank higher for everyone else. Instead, you connect more strongly to the other users who upvoted that item before you. The stronger you connect to someone - the more weight their upvote has for you - the higher their other upvoted content ranks in your recommendations. This creates a feedback loop where you use the upvote button not to influence others, but to direct your future recommendations. This is a “filter bubble” - one that you very consciously form.
I think I am missing your message - isn't this exactly how most platforms' algorithms work now? You get more of what you like?
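A minimal sketch of the connection-weighted ranking described above (the class, names and unit weights are hypothetical, not any site's actual implementation):

```python
from collections import defaultdict

class Recommender:
    """Sketch: an upvote strengthens your connection to earlier upvoters,
    and their other upvotes then rank higher for you personally."""

    def __init__(self):
        self.upvoters = defaultdict(list)  # item -> users who upvoted it, in order
        self.trust = defaultdict(float)    # (reader, other_user) -> connection weight

    def upvote(self, user, item):
        # Connect more strongly to everyone who upvoted this item before me.
        for earlier in self.upvoters[item]:
            self.trust[(user, earlier)] += 1.0
        self.upvoters[item].append(user)

    def score(self, user, item):
        # Rank by the total trust this reader places in the item's upvoters,
        # not by a global vote count.
        return sum(self.trust[(user, u)] for u in self.upvoters[item])

r = Recommender()
r.upvote("alice", "essay")
r.upvote("bob", "essay")   # bob now trusts alice
r.upvote("alice", "paper")
r.upvote("carol", "meme")
# For bob, alice's other upvote outranks carol's, despite equal vote counts:
assert r.score("bob", "paper") > r.score("bob", "meme")
```

The key difference from a global counter is that `score` is computed per reader, so the same upvote can mean a lot to one user and nothing to another.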
I remember the way that old style internet forums used to work. It was the posts that generated the most active discussion that naturally "rose to the top" simply because forums were designed to display the posts in order of recent activity. And there was no signal such as "likes" or "votes". So you had to read through the entire thread to decide what you thought about an issue or what your takeaway should be.
They were also self-contained. What you wrote on a forum about skyscrapers didn't have an impact on how you were perceived on a forum about bocce ball. Sites like Reddit that aggregate all of these interests into one place, with one reputation score across all interests, end up leaking the outrage from the most contentious areas into everything else. It used to be that niche hobbies were safe from this but more and more of them are getting overflow from the geysers of outrage coming from the more mainstream areas. If you're the moderator of one of these niche subreddits, trying to keep this out will often lead to raids and complaints to higher management who can replace you at will. This puts more power into the hands of the outrage manufacturers to leak over into everything and everywhere they want.
Most forums did have reputation systems though, which I would argue were the primitive form of likes. But I agree, I do miss the old forum style of communities.
I hypothesize that "short reactions" - reaction buttons, shares and ~200-character posts - are used to express outrage and hate because those are the only emotions that can be accurately expressed through such a limited means of expression.
"Fuck you" has a lot more meaning and impact than a "thank you" without additional context; anger encourages short and impulsive actions, while kindness necessitates understanding and sophisticated social interaction. "Thank you for helping me out when my wife threw me out of the apartment" has more punch than just "fuck you", but it also requires more involvement to even produce the initial situation that began the chain of events. Researched arguments are nearly impossible to find compared to the sea of impulsive reactions.
An example of this is the difference between 4chan's /b/ and /r9k/ boards. /b/ is pure random, with nothing off limits except what is illegal under the law, while /r9k/ is similar except that no duplicate text or picture is permitted across the entire history of the board. While the posts on /r9k/ remain vulgar, insults are far less prominent.
Specialized groups/boards/subreddits are also similar where content and researched opinions are the main point of those groups by their nature.
This isn't particularly surprising. Some folks like attention, and giving them attention for something they are doing, results in them doing that thing more. Social media simply taps into that dopamine loop in order to sell page views and ad clicks.
Perhaps the saddest testament to this was a troll who was banned from Twitter and told a journalist "I feel like I don't exist anymore." That is a really awful place to end up. It is also an unhealthy place to be.
David Brin (sf writer) shoe-horned a scene into one of his books where a "terrorist" deliberately poisons a populist politician with a sci-fi drug designed to cure addiction to stop him from making angry speeches.
Blog article from him here: https://www.davidbrin.com/nonfiction/addiction.html
I have been talking to more people in the real world about social issues or whatever the big topics of the year are... and what I'm finding which shouldn't surprise me is that people are a lot more moderate than you would be led to believe looking at social media or online conversations.
It's not that everybody is at the exact center, but most of the people I've encountered will have a side but be pretty far away from the loud extreme narrative you see online and in the news.
Yeah it's like, social media has you thinking there are literally only two or three overarching positions you can hold, but in real life things are more nuanced. I know a lot of EU immigrants and anti-fascists who voted for Brexit, racists who voted remain, sex workers who are against the current de-stigmatization of sex work, trade/student union types who are toxic in the workplace, trans people who don't like JK Rowling but identify with her because they have also been alienated from the online trans community, effective activists who will refuse to use their social media platform to share activist messages, former kids of the care system who as adults are very wary of people who say they want to adopt or foster, and so on. Social media has everyone thinking x therefore y (and therefore z) when it just is not the case. I can tell when someone is Very Online because they assume I hold a series of political positions based on very surface-level observations about me, or vice versa, they assume that because I hold a political position on something, I don't know about or haven't experienced something else (even though they also haven't).
> “Our studies find that people with politically moderate friends and followers are more sensitive to social feedback that reinforces their outrage expressions,” Crockett said. “This suggests a mechanism for how moderate groups can become politically radicalized over time — the rewards of social media create positive feedback loops that exacerbate outrage.”
I don't do social media, except for HN (which has somehow avoided this problem... I think?). My wife, on the other hand, uses Twitter. She likes philosophy, current affairs and a good logical debate. Her social anxiety means that she doesn't like stressful human interaction but in spite of this, she will argue her side of a contentious issue even if she's on the unpopular side and at risk of a pile on. For that reason, she never engages in pile-ons or "cancelling". Probably because of this, she has a surprising number of pretty famous sceptics, writers, philosophers and scientists following her (and she really is a "nobody" in any of these areas).
What annoyed the hell out of her is that her top tweet is one that she typed while drunk after reading something that made her blood boil.
I don't think that likes and shares teach people to express more outrage. I think that the root cause here is that we reward outrage. We love to hear somebody bitch lyrical, more than we like a good joke. We live to read about some "Karen" getting her comeuppance.
I'm thankful for "Like" and "Share" being positive words. It would be much worse if they were "Dislike" and "Shame".
The reinforcement strategy behind AI recommendations (on Facebook, YouTube, Spotify) is to continue showing more similar content. This leads to extremism.
To encourage diversity, we the programmers need to intentionally change the algorithm to present opposing views. Mix some 80s disco in with that punk rock. Remind people that there is another perspective.
We can also encourage "thank you"s - actively creating backlinks so that people know where the thought came from originally, and can learn more.
Hacker News has a great system of self-moderation where downvotes are allowed only after the user has earned roughly 500 karma from upvotes, establishing their credibility. First contribute well, then earn the right to criticise. I hope we can find ways to translate this model into other domains, even real life.
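The "mix some 80s disco in with that punk rock" idea above can be sketched by reserving a fraction of feed slots for items outside the user's usual taste. The function and its `explore_rate` knob are hypothetical, not any platform's real parameter:

```python
import random

def diversified_feed(ranked, outside, k=10, explore_rate=0.2, rng=random):
    """Fill a k-slot feed mostly from the personalized ranking, but
    reserve a fraction of slots for items outside the user's bubble."""
    n_explore = max(1, int(k * explore_rate))  # always surface at least one outside item
    picks = rng.sample(outside, min(n_explore, len(outside)))
    feed = ranked[: k - len(picks)] + picks
    rng.shuffle(feed)  # don't bury the exploration slots at the bottom
    return feed

# Mix some 80s disco in with that punk rock:
punk = [f"punk_{i}" for i in range(20)]
disco = [f"disco_{i}" for i in range(20)]
feed = diversified_feed(punk, disco)
assert len(feed) == 10 and any(t.startswith("disco") for t in feed)
```

Shuffling matters: if the exploration items always sat at the bottom of the feed, users would rarely scroll far enough to see the other perspective.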
Dislike and Shame are essentially how downvotes on reddit (and HN?) work.
I’m sure somewhere this has been analyzed in depth, but a few years ago it seems the YouTube algorithm underwent a fundamental change in which it shifted from recommending videos based on what other people viewing a video liked to recommending videos based only on your own personal history.
Outrage culture has to be one of the worst mainstream aspects of "normal" social media interaction. Then it spawned cancel culture, which went completely off the rails (IMO).
IMHO “cancel culture” is just “consequence culture.”
Facebook’s denial is particularly unpersuasive. It ignores that there may be unintended consequences of the explicit decisions in their algorithm. The analysis in the article also somewhat overlooks that it is quite possible that content showing women with less clothing is simply more popular.
One thing I’ve never seen Facebook show any recognition of is that the basic concept of their news feed is a self-reinforcement loop. They choose to show a user a particular type of photo, and then the photos that user interacts with are primarily composed of that particular type of photo, which, coincidentally, is the only thing Facebook displays to them.
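That self-reinforcement loop is easy to demonstrate with a toy model (purely illustrative; the names and the growth rate are made up, and this is not Facebook's actual algorithm):

```python
def simulate_feed(pref, rounds=50, growth=0.5):
    """Toy model of the loop: each round the feed shows the category the
    user has engaged with most, which inflates that category's share of
    future engagement. `pref` maps category -> initial engagement weight."""
    shares = dict(pref)
    for _ in range(rounds):
        shown = max(shares, key=shares.get)  # feed shows the current winner
        shares[shown] *= 1 + growth          # engagement with it grows further
    total = sum(shares.values())
    return {c: w / total for c, w in shares.items()}

# A barely stronger initial interest ends up dominating the whole feed:
result = simulate_feed({"beach_photos": 1.05, "family_photos": 1.0})
assert result["beach_photos"] > 0.99
```

The point of the sketch is that the winner-take-all outcome needs no malice, only a ranking rule that feeds engagement back into exposure.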
HN seems to avoid this to some degree. Or at least you can't just express outrage without a reasonable explanation, because the community is pretty good at downvoting useless and low-effort posts. I suppose outrage may still be common (see recent Apple news) but it seems geared toward more productive conversation than just shouting at each other.
HN also visually avoids gamifying everything. Pay attention to how interactive likes on a site like Twitter are, it's literally a little heart that pops up. A lot of those sites look like slot machines.
Sort of. Back then we’d end up with one forum for a topic run as an independent website/fiefdom. This is one thing if it’s about your hobby but another if it’s about your profession. Imagine if posting on HN was important to your career and pg and dang decided you were a threat to them personally. This same thing can happen now with groups on the corporate platforms, but it was worse when individuals ran the entire sites.