
How Facebook got addicted to spreading misinformation

128 points | by 7d7n | 5 years ago | technologyreview.com

60 comments

[+] contemporary343 | 5 years ago
"In 2017, Chris Cox, Facebook’s longtime chief product officer, formed a new task force to understand whether maximizing user engagement on Facebook was contributing to political polarization. It found that there was indeed a correlation, and that reducing polarization would mean taking a hit on engagement. In a mid-2018 document reviewed by the Journal, the task force proposed several potential fixes, such as tweaking the recommendation algorithms to suggest a more diverse range of groups for people to join. But it acknowledged that some of the ideas were “antigrowth.” Most of the proposals didn’t move forward, and the task force disbanded.

Since then, other employees have corroborated these findings. A former Facebook AI researcher who joined in 2018 says he and his team conducted “study after study” confirming the same basic idea: models that maximize engagement increase polarization. They could easily track how strongly users agreed or disagreed on different issues, what content they liked to engage with, and how their stances changed as a result. Regardless of the issue, the models learned to feed users increasingly extreme viewpoints. “Over time they measurably become more polarized,” he says."
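The feedback loop the researcher describes is easy to reproduce in a toy model. Here is a minimal sketch (purely illustrative; the engagement function and every parameter are invented, not anything from Facebook's systems) in which a ranker that always shows the highest-predicted-engagement item gradually pushes users away from the center:

    import random

    def engagement(user_stance, item_stance):
        # Invented assumption: users engage most with items slightly more
        # extreme than their own position on a -1..1 axis.
        pull = 1.2 * user_stance
        return max(0.0, 1.0 - abs(item_stance - pull))

    random.seed(0)
    users = [random.uniform(-0.3, 0.3) for _ in range(200)]  # start near center
    items = [random.uniform(-1.0, 1.0) for _ in range(300)]

    print(f"mean |stance| before: {sum(abs(u) for u in users) / len(users):.2f}")

    for _ in range(80):
        # The "model": always show the item with the highest predicted
        # engagement; exposure nudges the user's stance toward what was shown.
        users = [0.9 * u + 0.1 * max(items, key=lambda i, u=u: engagement(u, i))
                 for u in users]

    print(f"mean |stance| after:  {sum(abs(u) for u in users) / len(users):.2f}")

Nothing here is Facebook-specific; the point is just that "maximize engagement" plus "engagement peaks slightly beyond the user's current stance" is already enough to produce measurable drift toward the extremes.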

Facebook has repeatedly shown that it will maximize its bottom line and growth (in particular, as measured by engagement metrics) above everything else. Fair enough, they're a for-profit corporation! I just can't really take seriously their claims of 'self-regulation'. They simply can never be trusted to self-regulate because it's just not in their interest, and they've never shown meaningful actions in this regard. They will only respond to the legal stick - and it is most certainly coming.

[+] contemporary343 | 5 years ago
Another key quote explains the fundamental misalignment of incentives at the employee level - you're rewarded for increasing engagement metrics, so why do something that might decrease them?

"But anything that reduced engagement, even for reasons such as not exacerbating someone’s depression, led to a lot of hemming and hawing among leadership. With their performance reviews and salaries tied to the successful completion of projects, employees quickly learned to drop those that received pushback and continue working on those dictated from the top down."

[+] 908B64B197 | 5 years ago
Someone will have to explain to me how that is any different from a tabloid, or even a legacy newspaper/news broadcast.

The more shocking it is, the more eyeballs it attracts and the more ad revenue it gets.

[+] rapind | 5 years ago
The self-regulation argument is ALWAYS bullshit. If a company loses business by self-regulating, a competitor may come along and eat their lunch. We shouldn't expect it, and we shouldn't be disappointed when it doesn't happen.

This is what the public (government) should be responsible for. That way regulation is uniform (at least one can hope) across the industry.

[+] eplanit | 5 years ago
Very interesting. The parallels between FB (et al.) and tobacco company history are remarkable.
[+] tim333 | 5 years ago
>They will only respond to the legal stick - and it is most certainly coming

But how are you going to legislate the engagement / polarisation tradeoff?

[+] synaesthesisx | 5 years ago
This is spot on. Advertising companies like Facebook self-optimize for one thing only: user engagement/impressions.
[+] thitcanh | 5 years ago
Has anyone ever looked at history and figured out that humans are pretty damn awful? Facebook is just a catalyst, as is Twitter. Every platform has its own disgusting self-centered bubbles, and that's not because of the platform.

The bar to get banned from any online platform is pretty high. Just look at how long it took for the most prominent Twitter offender to get ejected.

You can ban all the public forums you want; people will just continue sharing BS on WhatsApp without you even noticing.

The problem isn’t Facebook but people. Governments should punish people, not outsource the policing to companies.

[+] loveistheanswer | 5 years ago
>Has anyone ever looked at history and figured out that humans are pretty damn awful?

Humans can be awful, and they can be wonderful.

>Facebook is just a catalyzer just like Twitter is. Every platform has their own disgusting self-centered bubbles and that’s not because of the platform.

Pretending like system design has no effect on user behavior is just silly.

[+] everdrive | 5 years ago
>The problem isn’t Facebook but people.

I think there's some truth to this; however, someone could easily use the same argument for other topics. "People are awful; the problem isn't heroin, it's people." There's some truth to that too, but frankly, if we could magically remove 100% of heroin from the world, things would be better. In other words, it's clear that Facebook is making this worse, even if we carry most of the blame.

[+] sneak | 5 years ago
> Governments should punish people, not outsource the policing to companies.

Governments shouldn't be in the business of deciding what is truth and what is "misinformation", and they certainly shouldn't be punishing people based on the outcomes of such determinations.

Otherwise, we'd have to ban the church and jail all the priests.

[+] sodality2 | 5 years ago
We know it's not right

We know it's not funny

But we'll stop beating this dead horse

when it stops spitting out money

[+] coldcode | 5 years ago
When your founder and CEO makes growth the only thing that matters, nothing else is important. If selling people lies makes more money, sell more lies, etc. If Facebook charged people $1 a month per account and deleted all the ads, the data selling, the promotion of lies, etc., 95% of the company could be laid off. They would also make less money for sure, and that's something Zuckerberg could never support.
[+] syamilmj | 5 years ago
You could go on Facebook with no opinion on anything and leave an extremist.
[+] andrepd | 5 years ago
Same thing with YouTube. Anecdote: yesterday I popped open YouTube to watch a video of the President of my country drooling during a state visit. After the 20-second video finished, my recommendations were immediately flooded with links to the far-right party's propaganda channel, complete with inflammatory titles and fake news. For fuck's sake, how much longer will we put up with this? How much longer will we let our political process be undermined so that private companies can make a few more pennies?
[+] barbazoo | 5 years ago
The same way you could be on Facebook and not be radicalized at all. At the end of the day, it's people who choose to be misinformed, who choose not to question, not to look elsewhere, not to be skeptical about how content makes them feel. Facebook just shamelessly takes advantage of people's shortcomings, for profit.
[+] mooneater | 5 years ago
Killer article here by Karen Hao; hard-hitting ending.

Wonder what FB thought they were getting into here with MTR (MIT Technology Review)!

[+] strangeloops85 | 5 years ago
One take-away I had from reading this excellent and detailed article is the fundamental tension in Facebook's approach to moderating content:

They rely on a hammer to catch what is objectionable or misinformation, but maximally optimize everything else for engagement. However, the hammer misses plenty of newly emerging inflammatory content (for example, anti-Rohingya content would look very different from anything prior), and that content gets maximally amplified because the overall newsfeed algorithms optimize for engagement.

An alternate approach would be to sacrifice engagement overall (perhaps only a little!) to reduce the very real and negative consequences of undesirable content that slips past the detection algorithms (which it always will). I suspect some of the fixes that were proposed, but never implemented, effectively did this. But they were shot down because, well, the top-line engagement numbers would dip.
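A minimal sketch of what such a trade-off could look like as a re-ranking rule (purely illustrative; `p_engagement`, `p_harm`, and the weight `lam` are hypothetical stand-ins, not anything from Facebook's actual stack):

    from dataclasses import dataclass

    @dataclass
    class Candidate:
        post_id: str
        p_engagement: float  # predicted probability of a click/share (hypothetical)
        p_harm: float        # predicted probability the post is inflammatory (hypothetical)

    def rank(candidates: list[Candidate], lam: float = 2.0) -> list[Candidate]:
        # lam == 0 is pure engagement ranking; raising lam quietly demotes
        # risky borderline posts that a binary remove/keep classifier
        # would let through.
        return sorted(candidates,
                      key=lambda c: c.p_engagement - lam * c.p_harm,
                      reverse=True)

    feed = rank([
        Candidate("calm-news", p_engagement=0.30, p_harm=0.01),
        Candidate("outrage-bait", p_engagement=0.90, p_harm=0.40),  # wins if lam == 0
    ])
    print([c.post_id for c in feed])  # ['calm-news', 'outrage-bait'] with lam = 2.0

The design point is that the penalty is continuous: instead of a remove/keep decision that must clear a high evidentiary bar, every post pays a ranking cost proportional to its estimated risk, at the price of some top-line engagement.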

[+] adjkant | 5 years ago
Despite the title, this is much more than fluff, and I would highly recommend reading it in full.
[+] ctocoder | 5 years ago
When the first Facebook API was released, I helped build the largest application, called SuperWall. The way to increase virality and activity on the wall was to allow a copy-paste virus to propagate through the network. The chain message claiming that "Zuckerberg is going to delete your account if you do not log in" kept DAU and viral growth at peak levels.

Facebook could do this too and get the same results. More DAU, more money.

[+] blendo | 5 years ago
“I think what happens to most people who work at Facebook—and definitely has been my story—is that there’s no boundary between Facebook and me,” he says. “It’s extremely personal.”
[+] threesmegiste | 5 years ago
Facebook news again. Tricky as always. There is only one evil in big tech; the others are angels. Don't say they're all the same. These frequent stories will guide you, and you start to say "X is also evil, I know, but not as much as Facebook." And that's exactly what they want us to think. On a safari you don't need to run faster than the lion, just don't be the last among the runners. When somebody talks about privacy, the sentence starts with Facebook and the rest is unimportant. Basic media manipulation and psychology.
[+] sneak | 5 years ago
Why does everyone seem to be convinced that the spread of misinformation is suddenly a problem? It has been widespread for centuries in our societies and, with the exception of a few cases (say, Galileo, the crusades, etc.), it hasn't really consistently caused any major issues.

This feels very much to me like a case of "something must be done!"

[+] helloworld11 | 5 years ago
The mainstream media remains furious that Trump won unexpectedly right under their noses in 2016. Couple this with their general disdain for social media platforms, or anything at all that steals their power to impose their notion of authoritative opinion, and you get the current rage over the largely invented demon of misinformation. To make "misinformation" into an even bigger boogeyman, we have a large progressive, left, woke-oriented subset of the population, largely pandered to by the main media outlets, polarizing toward absolute intolerance of anything against its cherished dogmas and more than happy to have a convenient label for even very moderate divergent arguments. It's noteworthy that so much of the supposed misinformation debate focuses entirely on painting all right-leaning opinion as part of the extremes, while visibly ignoring how much polarization toward the opposite extreme also exists.
[+] ldbooth | 5 years ago
Amazing how a salary changes an employee's perspective.

It has been said elsewhere: engagement = addiction.

Dial up the addiction, dial up the profit. "Regulators! Mount up."

[+] ilamont | 5 years ago
> The algorithms that underpin Facebook’s business weren’t created to filter out what was false or inflammatory; they were designed to make people share and engage with as much content as possible by showing them things they were most likely to be outraged or titillated by.

I find it interesting that this article does not even mention some of the other platforms that have also grown off this strategy, and have contributed to the societal problems named in the article.

I joke with my wife that you can innocently watch a YouTube press conference featuring our moderate Republican governor discussing COVID policy one day, and before you know it you're being recommended videos by rabid anti-vaxxers and QAnon deep state conspiracy theorists.

It's not just a Facebook problem or an "AI" problem, it's a platform problem which requires wider solutions than the broken internal approaches described in this article.

[+] ggggtez | 5 years ago
I think it's worth pointing out that Facebook isn't the only platform, though I don't think every article needs to cover all social media equally. This article does a deeper dive on one particular company's issues (and the company that's widely acknowledged to have known about the issue the longest, and done the least to fix it).

There are other articles that tackle this topic for YouTube [1]. But as far as I know, we're still waiting on an exposé showing that YouTube did extensive research into the issue and opted to do nothing in order to keep growing. That dubious honor seems to be reserved for Facebook, at least for now.

[1] https://www.usatoday.com/story/tech/2021/02/12/youtube-chann...

> [In 2017,] YouTube limited recommendations on those videos and disabled features such as commenting and sharing. But it didn’t remove them. The company said the crackdown reduced views of supremacist videos by 80%.

[+] TheJoYo | 5 years ago
Platforms that stick to chronological order are growing without the influence of this strategy.

Mastodon has been doing very well by simply offering the email of social media.
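For contrast, a chronological feed is just a content-blind sort on timestamp; there is no learned objective in the loop to amplify anything (field names below are hypothetical):

    from datetime import datetime, timezone

    posts = [
        {"id": "a", "posted_at": datetime(2021, 3, 1, 9, 0, tzinfo=timezone.utc),
         "predicted_engagement": 0.9},  # older, outrage-bait
        {"id": "b", "posted_at": datetime(2021, 3, 1, 12, 0, tzinfo=timezone.utc),
         "predicted_engagement": 0.2},  # newer, mundane
    ]

    chronological = sorted(posts, key=lambda p: p["posted_at"], reverse=True)
    engagement_ranked = sorted(posts, key=lambda p: p["predicted_engagement"], reverse=True)

    print([p["id"] for p in chronological])      # ['b', 'a'] - newest first, content-blind
    print([p["id"] for p in engagement_ranked])  # ['a', 'b'] - bait floats to the top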

[+] throwitaway1235 | 5 years ago
Stop trying to police what people think. It's disgusting.
[+] andyxor | 5 years ago
A better question is how MSM got addicted to spreading misinformation.

'Hate speech' these days means 'speech I hate and want silenced'. Freedom of speech protections exist for this exact reason.

Why should a social media app engage in political censorship and selecting right vs. wrong opinion?