I'm not sure how anyone can think discussions on Reddit are not manipulated at this point. If you watch carefully, you see the exact same verbiage from multiple accounts on the same topic, used to steer conversations. And as new responses come up, there will be a multi-hour delay, then new verbiage will get posted simultaneously from multiple accounts. There is clearly a behind-the-scenes writing effort going on, which then gets distributed to accounts. And if I see this just as a casual observer, I can only imagine what you would find if you really dug deep.
It should be feasible to write a slap-drone (nod to Iain Banks) style Reddit bot that trawls all comments in the major subreddits, and in minor subreddits whose mods request it, and performs a fuzzy match on the comment contents. Any comment found with substantively the same content as another gets its account followed around. Every comment from that account for a period of time gets an immediate reply citing the findings, and a warning to others that the account is possibly a commercial astroturfing account. Monetize by offering a paid opt-in subscription where subscribers who post comments with an appropriate disclaimer don't get slap-droned.
This has the advantage that even posting much the same comment into different subreddits using different accounts in each subreddit is detected.
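The fuzzy-match core of such a bot is straightforward to sketch. A minimal version, assuming comments arrive as author/body dicts (the function names and the similarity threshold here are illustrative, not any existing bot's API), could shingle each comment into word n-grams and flag pairs of accounts whose comments are substantively the same:

```python
# Sketch of fuzzy duplicate detection: shingle each comment into word
# 3-grams and flag author pairs whose Jaccard similarity is high, so
# trivial word swaps don't defeat the match.

def shingles(text: str, n: int = 3) -> set:
    """Lowercased word n-grams of a comment body."""
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def jaccard(a: set, b: set) -> float:
    """Overlap between two shingle sets, 0.0 (disjoint) to 1.0 (identical)."""
    if not a or not b:
        return 0.0
    return len(a & b) / len(a | b)

def find_coordinated(comments: list, threshold: float = 0.6):
    """Return (author1, author2) pairs posting substantively the same text."""
    flagged = []
    for i in range(len(comments)):
        for j in range(i + 1, len(comments)):
            c1, c2 = comments[i], comments[j]
            if c1["author"] == c2["author"]:
                continue  # one account repeating itself isn't the signal here
            if jaccard(shingles(c1["body"]), shingles(c2["body"])) >= threshold:
                flagged.append((c1["author"], c2["author"]))
    return flagged
```

At Reddit scale the pairwise loop would need to be replaced with something like MinHash/LSH bucketing, but the matching idea is the same.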
As a heavy user, I find myself using the same verbiage I read. I often find myself repeating, sometimes verbatim, comments I've read days, weeks, or even months ago.
This is the danger with just being in an echo chamber, because you yourself become part of the echo.
On a news or politics subreddit? Sure. I haven't seen much astroturfing on ELI5, or Askscience. Reddit is a big place... fortunately bigger than the various news and politics cesspools that inevitably emerge and rise to popularity.
I think that's the problem: many readers don't care to fact-check, or even click through to the articles underneath. Indeed, I just scroll through the list a few times throughout the day, and might click on anything that sounds interesting. But with the constant and obvious BS, it all gets a bit tiresome. If there were an alternative site with rules in place to deal with the problem, I'd switch in a heartbeat.
It's a shame this has happened. It used to be that aggregated news was the best news because it wasn't opinionated. It also didn't need to focus on the big-ticket items (murder, sex, drugs) like newspapers did, as sales weren't a concern, so you had science, tech, and, I kid you not, actual good news to read about!
But now with the pay-to-get-upvotes scams going on, we get utterly biased and even ridiculous stories constantly on the front page. And where to even start on the comments, which just read like blurbs on the title. A great example on the front page at the moment:
"Donald Trump's war on media is 'biggest threat to democracy' says Navy Seal who brought down Osama Bin Laden".
I almost feel like this stuff is AI-generated at this point, just throwing together keywords that get clicks. As someone across the pond looking in, the bias is laughably obvious. I just hope that more people realise that manipulation is present, and remember not to believe everything they read.
What is this an example of? This sounds like exactly the kind of thing that I would expect on the front page if there was no vote manipulation.
The political subreddits are almost a weird case study of trends and tactics right now. If you pay close enough attention to /r/politics over the last month or so you saw general tones of the stories being raised to the top: "Trump is being manipulated like a puppet", "Trump is too stupid to be president", "Trump is sabotaging ____".
And none of those titles is all that surprising within the context of the echo chamber that exists there, but for a little while you'd see the same types of posts just circulating the page for days, when suddenly the entire direction went to the next phase or tactic. There was actually quite little variety, in my opinion.
Just because you don't like a headline doesn't mean it was paid for. The main organic bias on crowdsourced news sites is clickbait headlines, and it affects all political viewpoints.
Every single part of the title is true. This is a former Navy Seal who oversaw the operation that led to the killing of Osama Bin Laden, AND he said that Trump's sentiment about the media being the enemy of the people is the greatest threat to democracy in his lifetime. The article backs it up with video footage of a portion of his speech.
Link to this post in three years and ponder its prescience... Web 3.0 will be born in the death of the heavily botted social networks. Reddit, Facebook & Instagram, Twitter... are all basically pay-to-win schemes at this point, benefiting greatly in terms of adoption from the grey-market pay-for-likes botnets, where marketers and propagandists know they can make their content highly visible if they're willing to pay.
People pay very little attention to the fragility that has arisen in the Web 2.0 economy. Once there is widespread understanding of the gaming, cheating, and botting, these social network institutions will crumble. There will be piecemeal attempts to reduce botting, but they will annoy end users, and annoy investors and shareholders as they come to realize XX% of their user base never existed.
Web 3.0 will be the death of anonymity, with social networks and APIs that are, by design, hard to automate, and even harder to hide your true identity from. There will be CAPTCHA-like systems (possibly tied to hardware) that facilitate this. This will of course promise to fix the problem of botting, while heavily benefiting surveillance, for state, and for advertisement. There will be a new breed of social networks built around ensuring the content you are seeing is genuine or "organic", and yet the incentives will be much more perverse, and their networks, much more invasive.
I think strgrd must be from the future; I strongly agree with your prediction.
There feels like there is an analogy to our civilization. We existed as very primal beings operating in a world of high anonymity. A real world with high anonymity meant a more dangerous environment (theft, murder, assault, etc). In order for us to congregate & live a collectively improved life, we decreased anonymity. We did this with the idea of names, roles (sheriff), & printed records.
New complexity emerged from this. We needed to collect funds via taxes, and to make and enforce rules. We needed to know who each other was in order to give the rules some repercussions. Eyewitnesses were crucial because of their "ability" to remove anonymity.
These steps of stripping anonymity were done with the intention of improving society. This point is debatable; feel free to challenge it.
We're at a unique moment in time where there's a highly dense "locale" that has full anonymity. What's even more interesting about this locale is that it has permeated our "real world" society. The implications of this reality are immense: we live in two worlds that are interwoven — one with little anonymity & the other with abundant anonymity.
As technology increases in power & importance (it will) this is tipping the scales. In general, I don't have much faith in humanity's ability to live within an anonymous world without devolving into very primal pre-civilization behavior (see: 4chan /b/).
I'm not sure this is a stoppable process (or one it would be advisable to stop).
The bot vs. captcha arms race will continue, but I would be very surprised if the captchas started winning.
I can only see one cure to this pervasive issue that affects every website on the internet that allows for user comments:
First, verified accounts, like Twitter's, are visually distinct from non-verified accounts. The distinction has to be visible every place the username or the post is displayed.
Second, only verified users have upvote/downvote privileges. To me it is downright foolish to allow any jackass or botnet to make a ton of accounts and up/down vote the conversation as they please.
I have seen this style of forum management all over the web, and it's turning me off from social media more and more by the day. Newspaper comment sections used to be insightful, and now they are generally cesspools of shills and bots. Double or more for forums like Reddit. I even see it on Hacker News.
We can have open discussions and we can have discussions free of paid influence, but I do not believe it is possible to have both.
This same thing happens on all social networks - Facebook, Instagram, Reddit, Twitter, etc. all have accounts that may have grown naturally and are now paid to post or influence content on their respective platforms. I've seen first-hand on Instagram how large an effect it can have in promoting apps, products, etc. Previously, disclosing affiliations / paid promotions was limited to a much smaller set of influencers. Now, these platforms give anyone from a kid with spare time to a professional marketing agency a means to build accounts and leverage the vast reach of these networks for their own gain. Facebook is _starting_ to realize this and clamp down on it for fake news. But there are many more avenues, and it'll be interesting to see how this really gets solved, if at all.
This is a lot more serious than most people think. And a lot farther along technologically. Most of what has been exposed in the wild so far just falls under Level 0 Character.
If you asked me how we could potentially combat this coming information implosion, I would say you fight fire with fire.
http://wiki.project-pm.org/wiki/Persona_Development#Persona_...
I'd noticed something like this on HN a few years ago. A negative comment about Apple might be voted up at first. Then, about an hour after posting, there would be many negative votes. The timing on this was consistent. After that, no more negative votes, and the rating would float up again.
I'll often see a similar pattern: I'll post something which disagrees with the common wisdom here on HN, it'll get a flurry of downvotes, then over time it'll get more upvotes. I figure that it's probably just the result of a few folks who are HN addicts, and that over time more reasonable heads prevail.
Yeah, this happens to my account. I think I must have made some off-hand comment once that someone did not like, and it's their petty vengeance. The DVers will wait about 2 hours (+/- 1 hour, a random interval) to do so, with no more arriving after that normal distribution. The comments are stupid stuff too, like some links to Wikipedia pages and just random comments. The ANOVA I did on the DVs showed just random timing and comments (the p-value was super high though, as I don't have many comments). It's strange.
Only way I can think to fix it is to tell people to stop talking about HN. If you want to keep HN good, keep it to your very close friends and family. We don't want this to turn into reddit or digg.
2. Give out X invites to everyone.
3. Every user now has a reference pointing to who invited them.
4. Require that you must vouch for the behavior of the people you invite, and their behavior may impact your participation.
5. Use tree traversal and analysis for pruning bad sections of the population of users. (i.e. If you see a series of bad users, go up the tree to find the source and prune it there.)
6. Work on algorithms and moderator tools to improve said analysis.
7. Give merit and incentive to those who contribute. (See: Slashdot and Everything2 systems.)
8. Add anonymous meta-moderation. (Again, see Slashdot.)
9. Give ranks and small power to users based on their merit and participation, and somewhat based on their invited friends.
I can keep going. It's a bit harsh, but I feel it'd work.
I've considered making an example app, but the hard part would be getting a critical mass of users and content. Reddit, for example, faked posting content for their first few years to seed their community. It's hard to grow a new community, if even possible.
P.S. If anyone has a lobste.rs's invite, lemme know. Curious to see how their invite-only website has progressed.
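Steps 3-5 of the list above can be sketched concretely. This is a minimal illustration under my own assumptions (the `BAD_RATIO` threshold and the data shapes are invented for the example, not a proven moderation policy): walk up the invite tree from a flagged user and prune at the highest ancestor whose invite subtree is majority-bad.

```python
# Invite-tree pruning: each user records an inviter; a flagged user
# triggers a walk up the tree, and the highest majority-bad subtree
# is pruned in one cut.

BAD_RATIO = 0.5  # illustrative: prune when >= half a subtree is bad

def descendants(user, children):
    """All users invited directly or transitively by `user`."""
    out, stack = [], [user]
    while stack:
        u = stack.pop()
        for c in children.get(u, []):
            out.append(c)
            stack.append(c)
    return out

def prune_root(bad_user, inviter, children, is_bad):
    """Walk up from a flagged user to the highest ancestor whose invite
    subtree is majority-bad; return that ancestor and its whole subtree."""
    root = bad_user
    node = inviter.get(bad_user)
    while node is not None:
        subtree = descendants(node, children)
        bad = sum(1 for u in subtree if is_bad(u))
        if subtree and bad / len(subtree) >= BAD_RATIO:
            root = node          # this whole branch looks rotten; go higher
            node = inviter.get(node)
        else:
            break                # ancestor's other invites look fine; stop
    return root, [root] + descendants(root, children)
```

The threshold is doing real policy work here: set it too low and one spammer gets an innocent inviter banned, too high and bot rings survive by padding their subtrees with quiet accounts.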
Tiered subreddits. You would need to earn credibility in open discussions, and earn your way up into more serious discussions. So there would end up being multiple layers of credibility, with only vetted accounts participating in the highest levels.
Reddit's voting/karma system needs an overhaul. Karma does absolutely nothing on that site, yet it's the only metric of value there (people do crazy things for meaningless Reddit karma) - that, and having an ancient account, but old accounts only matter to mods/admins for the most part.
Give an allowance of votes per day/week. More can be bought for very cheap, like 2 cents?
Votes are a commodity that can be accumulated and used to vote on others' posts. The more popular your posts are, the more votes you have pooled to use on others. Votes slowly dissipate over time to prevent hoarding.
You'd have to find a way to deal with shills/bots. No easy solution here, invite only will stifle growth. Collecting too much personal info will stifle growth. Maybe extreme blacklisting techniques for anyone found to be manipulating?
If you could pull it off, then advertisers would actually have to pay to manipulate your site - not perfect, but better than the manipulate-with-free-accounts situation Reddit has going on now.
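The allowance-plus-decay mechanics described above could look something like this minimal sketch; all the constants are illustrative guesses rather than a tuned design:

```python
# Vote economy sketch: a weekly stipend, earnings from received upvotes,
# and decay on the pooled balance so votes can't be hoarded indefinitely.

WEEKLY_STIPEND = 10
EARN_PER_UPVOTE = 0.5   # popular posts refill your pool
DECAY_PER_WEEK = 0.9    # 10% of the pool evaporates each week

class VoteWallet:
    def __init__(self):
        self.balance = float(WEEKLY_STIPEND)

    def weekly_tick(self, upvotes_received: int):
        """Apply decay first, then add the stipend and this week's earnings."""
        self.balance *= DECAY_PER_WEEK
        self.balance += WEEKLY_STIPEND + EARN_PER_UPVOTE * upvotes_received

    def cast_vote(self) -> bool:
        """Spend one vote if the pool allows it; refuse otherwise."""
        if self.balance >= 1:
            self.balance -= 1
            return True
        return False
```

The decay term is what makes bot farms expensive: an idle sleeper account bleeds voting power, so manipulation requires a steady stream of genuinely popular content or continual payment.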
There have been a few articles claiming bots are being used on Twitter to impact trending topics. One solution I talked to a friend about was having a yearly subscription fee to make it less cost effective to create hundreds of fake accounts. Of course hitting a critical mass of users would be very difficult. And even if the service did become wildly popular, and trusted, it would only increase the value for companies to game the system.
I think the general population is becoming more aware of how intrusive ad networks are, possibly moving the needle on making a pay service more viable. But I don't think I know any non-tech people that would be willing to pay for a Twitter/Facebook/Reddit type service today.
So, I have no good solution. But I sure hope someone figures out a good solution. The whole concept of "pick your own truth" has some terrifying consequences.
My idea is for reddit to make as much of the users' posting history available for anyone to download and analyze, but while respecting the privacy of their users. I'm not sure how this can be done (I don't know their terms of service). But the idea is to allow researchers to use reddit's data to find manipulation. I certainly don't expect reddit to solve the problem on their own.
Niche interest subreddits are about the only thing Reddit is good for anymore as far as I'm concerned. I don't subscribe to a single default sub and never check the front page.
I subscribe to subs like sysadmin, woodworking, ruby and bicycling. The front page and defaults are just horribly toxic.
This is today's equivalent of the same problem. The difference is that it is now two monied interests battling for the audience.
I long for a modern usenet newsgroup equivalent with some kind of protection from monied interests; I suppose a small audience is a guard.
https://news.ycombinator.com/item?id=13714159
(I would like to give a more substantive comment, so I'll leave the somewhat jaded perspective of rephrasing the title: "<social channel> is being manipulated by <anyone who participates in social engagement/growth hacking/perception management>". The question I contemplate is where we draw the line of what social engagement is above the bar of acceptability. To answer my own question, I typically consider "disclosure" the answer.)
I think it goes without saying that a social media site with millions of users and place 23 in the Alexa rankings is an interesting advertising platform, and that this happens on a regular basis and at a large, distributed scale across the platform.
The far more important problem in this situation, in my opinion, is being able to differentiate a "normal" user-submitted post from an advertisement, a skill that is missing in 80% of today's youth:
http://fortune.com/2016/11/23/stanford-fake-news/
It's not fair to call it a missing skill when the other side of that equation is a $500B+ industry of university-educated advertisers with decades of experience who are working to obfuscate the difference between ads and content, if not practice outright deception.
What exactly are "today's youth" supposed to know in the face of that?
I don't understand why "financial services" are called out in the headline. They're being manipulated by shady digital marketing companies who have some financial clients among many others.
Financial services account for a substantial portion of US GDP, and some may argue the industry has big sway in the way our laws are written - and now in steering conversation about it on a popular site.
This is a very old phenomenon called sockpuppetry[1]. Any online forum of sufficient size and popularity is bound to be a target of it. It's no surprise at all that Reddit would be targeted.
Most reputable forums try to deal with it somehow, but it's difficult to stamp it out completely -- especially if the site administrators are ever themselves compromised.
[1] - https://en.wikipedia.org/wiki/Sockpuppet_%28Internet%29
A much more insidious but very common kind of spam comes from fly-by-night 'news' sites that copy-paste content from reputable sources and then get upvoted to the top of popular subreddits, generating AdSense revenue for the webmasters. AdSense is such a pox on the internet. I know companies need to make money, but it also gives rise to soooo much spam.
And I wonder which discussion forum isn't... the closer any site gets to money (example: stock discussions), the more manipulative conversations are worth.
I think that is a pretty standard and often needed tactic when starting a forum or community. A community is pretty binary; it's either dead or it isn't, so it needs to be jumpstarted.