Related to this, there is an intimacy to small communities that makes you feel valued and a proper contributor, which social networks really seem to oppose: they want to make the network bigger. You are part of the biggest world context; everybody on TikTok is eating a habanero while watching Bob Ross, so only if I do the same nonsense do I have a chance of 100 people noticing and liking the video and maybe opting to see more of my content.
When I put it that way it feels banal, but like, you know the “fast-growing subreddits” list on Reddit? There were meetings! Someone worked on that! People literally sat in a room and said effectively, “Hey Fatimeh, what is the status of the ‘make subreddits suck faster’ feature? Management is very interested in delivering that in Q3.” Right? Like this connection from global to personal is just automatically assumed, nobody spends a waking moment thinking it could be anything but that way.
Could you imagine if, hypothetically, HN suddenly became the world's most popular social network? It would be unusable for its current purpose, and no amount of moderation could ever fix that.
This sounds harsh, but not everyone deserves to be heard all of the time, myself included.
This strongly reminds me of Kornhauser’s three levels of social relations (Politics of Mass Society, 1959):
> We can conceive of all but the simplest societies as comprising three levels of social relations. The first level consists of highly personal or primary relations, notably the family. The third level contains relations inclusive of the whole population, notably the state. The second level comprises all intermediate relations, notably the local community, voluntary association, and occupational group.
Etc., the argument being that many of the socially important features and ills of industrialization, urbanization, and the ensuing mass society are reducible to the dissolution of the second level and the transferral of its customary characteristics and behaviours (strong attachment, group identity, and so on) to the third.
But that’s mostly me channelling my history teachers, not original thought, so maybe someone who actually knows this stuff can say more.
Lest this be perceived as doomsaying, let me mention that Russian-language LiveJournal seemed to retain that small-town feel up until the very end (early 2010s) despite the considerable number of readers and commenters (thousands of readers for the most popular post on a slow day—much less than on Facebook, much more than you could ever possibly know). Possibly that’s because the number of writers (worth following) was much smaller, due to the higher effort required for long-form posts.
Again, people with more experience in older online communities should know more about this, as that was the only thriving one I experienced personally.
It is banal, or at least the largest portion of social media is. Originally I could see the usefulness of FB for connecting with friends near and far, sharing, planning, etc. All of that sounds useful, but then FB turned it creepy with its method of monetizing its products, with algorithms pulling the strings to make it as addictive as possible. Insta, TikTok, etc. have pretty much fully embraced the dark side of social. LOOK AT ME!!! LIKE ME!!! FOLLOW ME!!! Again, with the tweaking of the algorithms to up the addiction factor. All in the name of "engagement".
YouTube already made me nervous, but there are definitely people creating content there that isn't about that at all, so I don't lump them in with the others.
I've thought of creating a social network that groups people into small groups of 100 or so, by interests and location. I would advertise it as a chance to escape the monoculture, and explain that the cost is that you must participate a minimum amount, like one post or comment per week on average. And then you could mix up the groups every year or two. It would have its pros and cons, but it would achieve the goal of letting small groups form and create their own cultures. It would also succeed regardless of its size, so long as there are enough users to form a single small group.
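For what it's worth, the grouping step itself would be trivial; here's a minimal sketch (the group-size cap and the `User` fields are my own assumptions, not a real product's): bucket users by a coarse (interest, location) key, then chunk each bucket into groups of at most 100.

```python
from collections import defaultdict
from typing import NamedTuple

GROUP_SIZE = 100  # assumed cap, per the "groups of 100 or so" idea above

class User(NamedTuple):
    name: str
    interest: str
    location: str

def assign_groups(users: list[User], size: int = GROUP_SIZE) -> list[list[User]]:
    """Bucket users by (interest, location), then split each bucket into groups of <= size."""
    buckets: dict[tuple[str, str], list[User]] = defaultdict(list)
    for u in users:
        buckets[(u.interest, u.location)].append(u)
    groups = []
    for members in buckets.values():
        for i in range(0, len(members), size):
            groups.append(members[i:i + size])
    return groups

users = [User(f"u{i}", "chess" if i % 2 else "hiking", "Berlin") for i in range(250)]
groups = assign_groups(users)
print([len(g) for g in groups])  # → [100, 25, 100, 25]
```

The yearly reshuffle would just be a matter of re-running the assignment with a shuffled user list.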
i have this nagging feeling that nothing good ever comes out when there is a large group of people involved. and that the most successful communities, teams or groups are small. i have seen small communities ruined when they got big or niche communities wrecked when they went mainstream. time and again. but we keep pushing for larger groups, larger teams, more networking, more friends, etc...
I'm willing to bet that my fyp on tiktok is much different from yours.
Niche subreddits could have been the solution. It is the specific implementation that sucks, not the idea of a subgroup being open to newcomers; e.g., there can be a feeling of community while participating in an open-source project even if anybody may join.
Growth is always the default expectation. It's part of humanity at this point. I personally feel that it is destructive. I hate the idea that static and stagnant are NEGATIVES. They are not. They have value.
Louis Menand's The Free World, a history of public thought in the Cold War era, recently taught me about the relationship between the masses and totalitarianism.
The relationship appears to be: destroy classes and traditional relationships -> produce masses and the mob -> totalitarianism arises out of the masses' anxiety and the mob's despotic idiocy.
Something like how too many chickens in a big barn can't form hierarchies and so live in a perpetual panic, except along comes a Big Chicken who makes everyone feel better because they finally know their place again.
I hope something different and less awful happens. But the early results are worrying.
Can we just pull the plug on this experiment, please?
Somehow this post takes the least interesting part of the source article[0] and draws false conclusions from it.
> In the simulation, the decision whether to rebroadcast is random, rather than being driven by “virality” or cognitive bias, so the simulation is an optimistic one.
> It turns out that message propagation follows a power law: the probability of a meme being shared a given number of times is roughly proportional to an inverse power of that number.
So they implement a textbook model and a textbook result comes out - surprise? There's nothing to be drawn from this.
I may share the author's sentiment, but frankly this blog post is bunk.
There are some interesting parts in the source, though, once you get through all the grandstanding fluff.
If it's the paper I'm thinking of then I read the original and that's not really a fair representation of what they did.
What they actually did was start with some data on things that had gone viral, where they could only see how many were "infected" (number of views over time). They didn't have direct access to granular information about who had seen what, or who they did or didn't share with. So they worked backwards: model virality under some assumed network properties and contagiousness properties, then see which of the models produced statistics similar to actual viral posts.
Network topology: IIRC they tested a Gaussian distribution of connections, meaning on average 95% of nodes had n connections ± 2 sigma, and a power-law-type distribution, meaning a thin minority of "influencers" had connections to nearly everyone.
Contagiousness: they tested the case where each piece of content had its own virality parameter r0, with more and less contagious content all competing at the same time. They also tested the case of sheer randomness, wherein all content is equally likely to be shared and what goes viral basically depends on the luck of whose eye it catches.
The end result was that a power-law topology with random contagiousness best fit the real-world statistics.
The Scientific American article does bring up a number of interesting social dynamics. However, the modeling described seems bogus to me. The algorithms sound incredibly oversimplified. It seems unreasonable to assume that all people interact with social media in the same way, or that their behavior is random.
For perspective, genetic algorithms are a biased random walk and they clearly work quite well.
As soon as your news feed becomes too big for you to read all of it and then decide whether to repost anything, the quality of information you propagate is going to fall because of that filtering process.
I avoid this by unfollowing people very frequently. If I see one stupid candid, a pout, or a food shot, I just unfollow that person.
Same goes for SJW/White Supremacy types. Whether you are OP or a sharer, you get muted/unfollowed.
This is how I use Quora. When I used Facebook five years ago, I used it this way.
Twitter I use only for professional content. Very strictly. I also set the trending country to Namibia, so that I don't even know what they are talking about.
I am less and less active on social media with the passing of days, but a good social media experience can be achieved by strictly filtering normies, politics, food porn, etc.
This is very nice for you, but the problem is that most people who consume social media are consuming it about as passively as possible and aren't aware that their experience can be curated the way that you are doing. Providing the tools to curate one's own social media feed is not a solution to the problem the article is explaining.
Here's an analogy: Imagine if we discovered that thousands of cars had some fatal flaw that could result in death and instead of recalling them the manufacturer just made a DIY repair kit available for twenty bucks online. How many people would just put it off or ignore the problem until their car exploded?
That's the problem we have with social media: It's got huge problems that most people are only vaguely aware of, and the solutions they offer are mostly esoteric DIY controls that very few people will care about or use.
Perhaps this is unfair, but this just makes you sound a bit antisocial. You're actively trying to remove all of the personality from social media, and filter it down to just professional or single-interest posts. People aren't like that; everyone has many facets to what they're interested in, who they are, and what makes them tick. Removing them from your network as soon as they post anything from your red flag list demonstrates that you're not actually interested in the "social" bit, and really you want a curated list of facts. I guess that's your choice but I suspect you'll find it very isolating eventually.
One of the most important "soft skills" I've learned as a developer over the past 25 years is that ignoring people because they're interested in things I'm not interested in makes managing a team incredibly hard. You need to be able to listen even when people are talking about "boring" or irrelevant stuff, because sometimes there's something very relevant buried in there. People aren't one-dimensional and they won't always talk about things you want to hear, so if you reject them on that basis you'll miss the good stuff too. Plus, I've found a lot of the time the rest turns out to be quite interesting too if you take the time to listen.
I unfollow everyone, absolutely everyone on facebook. I do it systematically whenever somebody sends a friend request.
This way I can decide whose posts I see, and not the algorithm.
Ha, take that! Facebook even displays an error message in your feed if you unfollow everyone :)
Interesting. What you say makes some sense for staying sane when the Internet is one big room of people shouting, but I'd prefer the total opposite.
I would love to be part of small social media groups with bonds of friendship and tolerance. Trends, food, non-extreme politics, in proportion to their magnitude in everyone's lives, varying with people's interests but not magnified by one-issue echo chambers. Not 100% sure what you mean by normies but I suspect I'm in favour of them.
Group chats with close friends and family fulfil this need fairly well, and are quite relaxing compared to wider social networking. But they're likely only to cover a small subset of every person's interests. Something like the infamous 'Circles' might actually be useful on a larger scale.
> as the number of messages in the network rises, the quality of those which propagate falls
... if you assume people don't read the message at all, but use a coin flip to decide whether to reshare it.
I cannot for the life of me fathom how anyone could make a model that completely ignores content quality in respect to distribution and then use that model to make a statement about average quality of content that gets distributed.
This really is the "spherical cow in a vacuum" equivalent of modeling social networks.
Mathew's blog post is mostly just commentary on a quite excellent Scientific American article from November 2020, "Information Overload Helps Fake News Spread, and Social Media Knows It".
This is based on a simulation of social media which shows that, at least in the model (and with strong evidence that reality follows suit):
> [The] winner-take-all popularity pattern of memes, in which most are barely noticed while a few spread widely, could not be explained by some of them being more catchy or somehow more valuable: the memes in this simulated world had no intrinsic quality. Virality resulted purely from the statistical consequences of information proliferation in a social network of agents with limited attention. Even when agents preferentially shared memes of higher quality, researcher Xiaoyan Qiu, then at OSoMe, observed little improvement in the overall quality of those shared the most. Our models revealed that even when we want to see and share high-quality information, our inability to view everything in our news feeds inevitably leads us to share things that are partly or completely untrue.
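A toy version of that kind of agent model is easy to reproduce (all parameters here are my own guesses, not the researchers'): agents hold a feed of limited length, every meme gets a random intrinsic quality, and at each step an agent either posts a new meme or reshares one from its feed, preferring higher quality. You can then compare the mean quality of the most-shared memes against the overall mean and see how little the attention-limited quality bias buys you.

```python
import random

random.seed(42)
N_AGENTS, FEED_LEN, STEPS = 200, 5, 20000  # assumed sizes, not from the paper

feeds = [[] for _ in range(N_AGENTS)]  # each agent's limited-attention feed
quality = {}                           # meme id -> intrinsic quality in (0, 1)
shares = {}                            # meme id -> times posted or reshared
next_id = 0

for _ in range(STEPS):
    agent = random.randrange(N_AGENTS)
    feed = feeds[agent]
    if feed and random.random() < 0.5:
        # Reshare: pick from the feed, preferring higher-quality memes.
        meme = random.choices(feed, weights=[quality[m] for m in feed])[0]
    else:
        # Post a brand-new meme with a random intrinsic quality.
        meme = next_id; next_id += 1
        quality[meme] = random.random()
        shares[meme] = 0
    shares[meme] += 1
    # Broadcast to a few random followers, who can only keep FEED_LEN items.
    for follower in random.sample(range(N_AGENTS), 5):
        feeds[follower] = ([meme] + feeds[follower])[:FEED_LEN]

top = sorted(shares, key=shares.get, reverse=True)[:20]
print("top-20 mean quality:", sum(quality[m] for m in top) / 20)
print("overall mean quality:", sum(quality.values()) / len(quality))
```

It's a sketch, not the paper's model, but the mechanism it illustrates is the same: attention truncation does most of the selecting before quality ever gets a vote.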
As always, it's a bad time to be an imaginary person.
It's a big leap to pretend the model reflects the real world, let alone today's real world. As far as I can tell, sharing has become largely fenced into communities (reddit subs, Facebook groups). People who still share with people they know personally appear to me to be a minority.
Don't get me wrong, the principle is sound, but the world changes fast and the described model doesn't seem very relevant nowadays.
Well, it's simpler than the real phenomenon it is simulating, but the fact that even that simplified model gives the same result we see in reality is itself quite significant. It suggests that the issue is not some quirk of the FB algorithm; it's the very essence of what a general-purpose social network IS. It suggests that tweaking the algorithm is unlikely to solve the problem.
As a contrast: in my social circle, nearly everything is spoken or shared in private messenger groups today. From my POV this is the current trend. Then again, my environment is mostly online-privacy aware.
I have no idea what the logic of this post is supposed to be. Message sharing follows a power law and so therefore... something about quality going down when messages go up? And something about filtering? What on earth is he talking about?
...Well, it turns out he's sharing the conclusions of the linked SciAm article, but the SciAm article just says "the super smart researchers plugged numbers into their simulation and this result magically came out". There's no insight or explanation.
Are we expected to just believe that computer simulations built by credentialed scientists prove things about the real world?
> Are we expected to just believe that computer simulations built by credentialed scientists prove things about the real world?
We could model it...
1) each node is an imaginary scientist with a random set of in and outgoing citations, they can only read and cite a finite number of randomized publications, the publications are assumed to vary randomly in quality...
2) ?????
3) Conclusion: As the number of publications in the network rises, the quality of those which propagate falls.
This explains why the crappiest of efforts are so viral, and why things that try harder fail. When I think of the meme templates I've seen, they're all grade-2 mental level, and they don't engage your critical faculties, but that is their point. They just pass right by. There is a kind of bias where we must think, "this is so crappy, it has to be real!", which is the complement bias to, "this looks too polished to be real." I wonder what examples of things other than memes would be the effect of that bias.
> I wonder what examples of things other than memes would be the effect of that bias.
Looking at a similar phenomenon in a different medium, I'd argue that both Nickelback and Adam Sandler films fall into this same category.
People everywhere love to shit on Nickelback but when they were touring they were selling out stadium after stadium after stadium, having albums go platinum, etc. They were an outright commercial success despite being panned critically and seemingly "universally hated."
Adam Sandler films get the same treatment.
My theory is: Sometimes you just want to not think deeply about a thing and turn your brain off and enjoy, and I think that applies to music, film, and memes equally. There's something inoffensive but satisfying about mediocrity in these things.
The lies and conspiracy theories were always there. The difference now is that there are lies and conspiracies in different directions rather than a consensus of lies and conspiracy theories.
Of course it remains to be seen if it's possible to create a consensus of a nation without centralized media like we had until now. If it's not, we will have to get used to the population fragmenting into different groups with different chains of trust, or we will have to accept the government imposing totalitarian measures on the internet to reestablish a centralized consensus. I guess each option has issues.
You will not find someone who dislikes social media and its effects more than I do, but:
> In the simulation, the decision whether to rebroadcast is random, rather than being driven by “virality” or cognitive bias, so the simulation is an optimistic one.
Why not model the decision to rebroadcast based on something other than a simple dice roll? Why not drive it by virality? Why not drive it by quality? Why not model tribal affiliation? These things seem significant when talking about the "information hellscape" we've created.
What realistic alternatives are there to an algorithmic feed that don't immediately devolve into high-volume posters drowning out the one status a month club?
People may find the original simulation studies [0] oversimplified, but that is more or less the state of the art of our understanding of social networks via agent models. There is always the nagging concern that you get out what you put in.
We don't have to theorize, though. Given all the open-source platforms of the fediverse [1], people can set up networks, tweak to their heart's content, and explore what "works" when actual humans are involved.
Maybe what is really missing is some sort of crowdsourced framework for collecting and sharing that information.
this is actually something that's been bothering me heavily, lately. i've noticed a lot of content is created in this convoluted, grandiose, borderline word-vomit style. honestly, a lot of the stuff frequently makes me feel a bit dumb as a reader. by the time i get through it, i realize that the main thesis is actually extremely simple but surrounded by flowery words.
furthermore, i'm also noticing at my workplace that many people say a lot, to the point of utter rambling and going off topic, in extremely verbose ways, while communicating ideas that could have been said in the most basic of ways. i'm not sure if i'm just slowing down as i get older, but i feel like a lot of information hasn't gotten more complex, while the way it's communicated very much has.
I don't quite understand: it seems to me there is an infinite stream of news, and everybody has to find a way to extract bits and pieces relevant to them.
Social networks just make it easier to access the news stream, in that sense. But they are not responsible for creating the news, or at least not in a significant way, as infinite plus whatever amount of "news" SNs create is still infinite.
This is just in relation to the article: of course SNs create all kinds of problems, but the amount of information is not really a problem caused by them.
I am trying to get into self hosting. Can anyone recommend a Twitter or Facebook self hosted app? The idea would be that it would be an invite only social network where I choose who can join.
I can't determine what this article means by "quality", is it gauged by validity or some subjective notion? If it's the latter, I'm not sure how any of this is useful.
As a generating function, all you need to get a "Power Law" distribution of connections is a "preferential attachment" probability either statically or dynamically/transiently.
Finite attention automatically creates that preferential attachment because you want to avoid "wasting the precious resource of attention"!!
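That mechanism is easy to see in a minimal sketch (sizes arbitrary) of Barabási–Albert-style preferential attachment: each newcomer links to existing nodes with probability proportional to their current degree, which is enough to produce a heavy-tailed degree distribution with a few huge hubs.

```python
import random

random.seed(0)

def preferential_attachment(n: int, m: int = 2) -> list[int]:
    """Grow a network of n nodes; each newcomer attaches to m distinct existing
    nodes, chosen proportionally to degree. Returns the degree of every node."""
    degree = [1, 1]    # start from two connected nodes
    endpoints = [0, 1] # edge-endpoint list: node i appears degree[i] times,
                       # so a uniform pick from it is a degree-proportional pick
    for new in range(2, n):
        targets = set()
        while len(targets) < m:
            targets.add(random.choice(endpoints))
        degree.append(0)
        for t in targets:
            degree[t] += 1
            degree[new] += 1
            endpoints.extend([t, new])
    return degree

deg = preferential_attachment(5000)
print("max degree:", max(deg), "median degree:", sorted(deg)[len(deg) // 2])
```

The max degree comes out orders of magnitude above the median: that gap is the "influencer" minority, generated purely by rich-get-richer attention dynamics with no notion of content quality at all.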
[0]: https://www.scientificamerican.com/article/information-overl...
In what way are 'SJW' and 'White Supremacy' types equated?
"Simulation shows that a coin flip leads to a 50/50 outcome." Genius.
https://www.scientificamerican.com/article/information-overl...
That was discussed extensively at the time on HN (166 comments):
https://news.ycombinator.com/item?id=25153716
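For intuition, the mechanism can be sketched in a few lines. This is a toy version of the limited-attention model, not the OSoMe code; the network, parameters, and thresholds here are all illustrative. Memes get a random intrinsic quality that the reshare step never consults, yet a few memes still dominate:

```python
import random

def simulate(n_agents=100, steps=20000, feed_len=10, p_new=0.2, seed=42):
    """Toy limited-attention model: agents see only a bounded feed and
    reshare uniformly at random, blind to each meme's quality."""
    rng = random.Random(seed)
    # each agent follows 5 random others; precompute the reverse map
    following = {a: rng.sample([b for b in range(n_agents) if b != a], 5)
                 for a in range(n_agents)}
    followers = {a: [b for b in range(n_agents) if a in following[b]]
                 for a in range(n_agents)}
    feeds = {a: [] for a in range(n_agents)}   # newest-first, bounded
    quality, shares = {}, {}
    next_id = 0
    for _ in range(steps):
        agent = rng.randrange(n_agents)
        if rng.random() < p_new or not feeds[agent]:
            meme, next_id = next_id, next_id + 1
            quality[meme] = rng.random()       # assigned, never consulted
            shares[meme] = 0
        else:
            meme = rng.choice(feeds[agent])    # uniform pick from a finite feed
        shares[meme] += 1
        for b in followers[agent]:             # push into followers' feeds,
            feeds[b].insert(0, meme)           # evicting the oldest items
            del feeds[b][feed_len:]
    return quality, shares

quality, shares = simulate()
counts = sorted(shares.values(), reverse=True)
wavg = sum(quality[m] * s for m, s in shares.items()) / sum(shares.values())
print(f"memes: {len(counts)}, max shares: {counts[0]}, "
      f"share-weighted mean quality: {wavg:.2f}")
```

A meme that gets reshared lands in more feeds, so it is more likely to be reshared again; that feedback alone produces the skewed popularity distribution, and the share-weighted mean quality stays near the unweighted mean of 0.5, i.e. popularity carries no quality signal.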
[+] [-] BarryMilo|4 years ago|reply
It's a big leap to pretend the model reflects the real world, let alone today's real world. As far as I can tell, sharing has become largely fenced into communities (reddit subs, Facebook groups). People who still share with people they know personally appear to me to be a minority.
Don't get me wrong, the principle is sound, but the world changes fast and the described model doesn't seem very relevant nowadays.
[+] [-] swivelmaster|4 years ago|reply
Also, the entirety of Twitter, in a completely different way.
[+] [-] rossdavidh|4 years ago|reply
[+] [-] herbst|4 years ago|reply
[+] [-] arglebarglegar|4 years ago|reply
[+] [-] civilized|4 years ago|reply
...Well, it turns out he's sharing the conclusions of the linked SciAm article, but the SciAm article just says "the super smart researchers plugged numbers into their simulation and this result magically came out". There's no insight or explanation.
Are we expected to just believe that computer simulations built by credentialed scientists prove things about the real world?
[+] [-] 6510|4 years ago|reply
We could model it...
1) each node is an imaginary scientist with a random set of incoming and outgoing citations; they can only read and cite a finite number of randomized publications, and the publications are assumed to vary randomly in quality...
2) ?????
3) Conclusion: As the number of publications in the network rises, the quality of those which propagate falls.
[+] [-] motohagiography|4 years ago|reply
[+] [-] BeefWellington|4 years ago|reply
Looking at a similar phenomenon in a different medium, I'd argue that both Nickelback and Adam Sandler films fall into this same category.
People everywhere love to shit on Nickelback but when they were touring they were selling out stadium after stadium after stadium, having albums go platinum, etc. They were an outright commercial success despite being panned critically and seemingly "universally hated."
Adam Sandler films get the same treatment.
My theory is: Sometimes you just want to not think deeply about a thing and turn your brain off and enjoy, and I think that applies to music, film, and memes equally. There's something inoffensive but satisfying about mediocrity in these things.
[+] [-] edoceo|4 years ago|reply
[+] [-] xg15|4 years ago|reply
If quality is random and decision to rebroadcast is also random - and independent of quality - then why is the result more low-quality messages?
[+] [-] eternalban|4 years ago|reply
[+] [-] Ambolia|4 years ago|reply
Of course it remains to be seen if it's possible to create a consensus of a nation without centralized media like we had until now. If it's not, we will have to get used to the population fragmenting into different groups with different chains of trust, or we will have to accept the government imposing totalitarian measures on the internet to reestablish a centralized consensus. I guess each option has issues.
[+] [-] karaterobot|4 years ago|reply
> In the simulation, the decision whether to rebroadcast is random, rather than being driven by “virality” or cognitive bias, so the simulation is an optimistic one.
Why not model the decision to rebroadcast based on something other than a simple dice roll? Why not drive it by virality? Why not drive it by quality? Why not model tribal affiliation? These things seem significant when talking about the "information hellscape" we've created.
[+] [-] sneak|4 years ago|reply
The algorithmic filtering/ordering of items in a feed is really just censorship of the items that don't benefit Facebook/Instagram/Twitter/YouTube.
These tools serve their owners, not the society they ostensibly serve to connect.
It's censorship on a huge scale, and it's terribly damaging.
[+] [-] BeFlatXIII|4 years ago|reply
[+] [-] streamofdigits|4 years ago|reply
We don't have to theorize though. Given all the open source platforms of the fediverse [1], people can set up networks, tweak to their heart's content, and explore what "works" when actual humans are involved.
Maybe what is really missing is some sort of crowdsourced framework for collecting and sharing that information.
[0] https://www.scientificamerican.com/article/information-overl...
[1] https://fediverse.party/
[+] [-] ctoth|4 years ago|reply
[+] [-] RNCTX|4 years ago|reply
A blog post about a pointless research project aimed at drawing a conclusion that could be drawn from common sense without any research at all.
All of which is a somewhat circular exercise in bad content announcing itself to the world in an ontological sense.
[+] [-] volkk|4 years ago|reply
furthermore, i'm also noticing at my workplace that many people say a lot, to the point of utter rambling and going off topic, in extremely verbose ways, while communicating ideas that could have been said in the most basic of ways. i'm not sure if i'm just slowing down and getting dumber as i get older, but i feel like a lot of information hasn't gotten more complex, but the way it's communicated very much has.
[+] [-] unknown|4 years ago|reply
[deleted]
[+] [-] notanzaiiswear|4 years ago|reply
Social networks just make it easier to access the news stream, in that sense. But they are not responsible for creating the news, or at least not in a significant way - as infinite plus whatever amount of "news" SNs create is still infinite.
This just in relation to the article - of course SNs create all kinds of problems, but the amount of information is not really the problem caused by SNs.
[+] [-] abnry|4 years ago|reply
[+] [-] bickeringyokel|4 years ago|reply
https://www.scientificamerican.com/article/information-overl...
I can't determine what this article means by "quality": is it gauged by validity or by some subjective notion? If it's the latter, I'm not sure how any of this is useful.
[+] [-] xyzzy21|4 years ago|reply
As a generating function, all you need to get a "Power Law" distribution of connections is a "preferential attachment" probability either statically or dynamically/transiently.
Finite attention automatically creates that preferential attachment because you want to avoid "wasting the precious resource of attention"!!
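That generating function can be sketched directly. This is a minimal, illustrative version (parameters and the two-node seed graph are my own choices, not from the thread): each new node attaches to existing nodes with probability proportional to their current degree, and hubs emerge without any node being intrinsically "better":

```python
import random

def preferential_attachment(n_nodes=2000, m=2, seed=7):
    """Grow a graph where each new node links to m existing nodes,
    chosen with probability proportional to their current degree."""
    rng = random.Random(seed)
    degree = {0: 1, 1: 1}   # seed graph: one edge between nodes 0 and 1
    # the multiset trick: each node appears once per unit of degree,
    # so a uniform pick from `targets` is a degree-proportional pick
    targets = [0, 1]
    for new in range(2, n_nodes):
        chosen = set()
        while len(chosen) < m:
            chosen.add(rng.choice(targets))
        degree[new] = m
        for t in chosen:
            degree[t] += 1
            targets += [new, t]
    return degree

degree = preferential_attachment()
top = sorted(degree.values(), reverse=True)[:5]
print(f"nodes: {len(degree)}, top degrees: {top}, min degree: {min(degree.values())}")
```

The early nodes accumulate degrees far above the minimum of m, the signature of a heavy-tailed, power-law-like distribution; swapping the degree-proportional pick for a uniform one collapses the hubs, which is the sense in which preferential attachment is all you need.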