item 24712985

Facebook widens ban on political ads as alarm rises over election

124 points | coloneltcb | 5 years ago | nytimes.com

247 comments

[+] ogre_codes|5 years ago|reply
This just seems like too little, too late.

Many of the problems I see aren't advertising in the traditional sense, but viral content from sketchy sources.

If they wanted to seriously affect this, they would prevent posts from spreading to tens of thousands or millions of people without being vetted.

[+] lettergram|5 years ago|reply
> Many of the problems I see aren't advertising in the traditional sense, but viral content from sketchy sources.

Define a credible source? We know for a fact that all the major news outlets edit photos, often don’t use their own reporters, and write sensational articles (actively stretching facts).

No, I don’t think we can say what is a “sketchy” source. Unless we start wanting Facebook to moderate truth (or just accept the current propaganda machines as truth).

These statements remind me of school. I used to be told not to use Wikipedia and to do all my searches on AltaVista or Ask Jeeves. I was told this for years, even after Google came out. I think it’s short-sighted to try to suppress this. Instead we should teach, or at best add a warning saying “unverified”.

[+] pjc50|5 years ago|reply
This reminds me of the California/Australia wildfires. They're the result of a whole cascade of problems, from global warming to the details of under-supported local firefighting infrastructure, but those at the top aren't interested in any of those until they start to feel the heat approach them.

I think the possibility of election "chaos" is starting to look real; you've already had this year "armed protestors occupy State building", "crowdfunded anti-protestor gunman", huge ongoing demonstrations, voter rights lawsuits, and chaos in the mail system on which mail-in ballots depend. This is going to make the whole Bush vs Gore hanging chad debacle look like a gentlemanly dispute from a vanished age.

Those at the top of Facebook won't mind a clean victory for either side. They don't even mind a one-off clearing of the streets with live ammunition. What they will mind is ongoing violent struggle for the legitimacy of the state.

(absolutely not a given, but definitely a risk. Much like the President catching the virus that's dangerous to unfit people over 65.)

[+] kace91|5 years ago|reply
You make an interesting point: what would a social network of limited range look like?

Something where you couldn't have more than a set number of connections, things didn't spread more than two nodes in the graph, and resharing wasn't an option. I think it would be ideal for filling the mainstream need for Facebook (keeping in touch with people).
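As a toy sketch of what "not spreading more than two nodes in the graph" could mean (purely illustrative; the function names and the connection cap are made up, not any real Facebook mechanism):

```python
# Sketch of a "limited range" network: a post's reach is capped at two
# hops from its author, and there is no resharing beyond that.
from collections import deque

MAX_CONNECTIONS = 150  # hypothetical cap on direct connections per user


def reachable(graph, origin, max_hops=2):
    """Users a post can reach: BFS limited to max_hops from the author."""
    seen = {origin}
    frontier = deque([(origin, 0)])
    while frontier:
        node, depth = frontier.popleft()
        if depth == max_hops:
            continue  # stop expanding once the hop limit is hit
        for neighbor in graph.get(node, []):
            if neighbor not in seen:
                seen.add(neighbor)
                frontier.append((neighbor, depth + 1))
    return seen - {origin}
```

With a chain a → b → c → d, a post by `a` reaches only `b` and `c`; `d` sits three hops away and never sees it.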

[+] cm2187|5 years ago|reply
Vetted by who? Based on the political opinions of some manager somewhere in California?
[+] systemvoltage|5 years ago|reply
A simple way would be to stop all content from propagating beyond a few thousand people before a human vets it.
[+] SpicyLemonZest|5 years ago|reply
I'm sure they'd like to, but the line between "vetting" and "endorsing" is extremely challenging to draw, and it's not good for anyone if Facebook bans spreading information that they don't as a company endorse.
[+] MattGaiser|5 years ago|reply
A lot of the "viral" content is started with a paid ad though.
[+] newacct583|5 years ago|reply
> viral content from sketchy sources.

Many of those sources are seeded with promoted links, though. It's true that a lot of conspiracy nonsense is genuinely organic, but a LOT of it is not.

[+] pootyblowfish|5 years ago|reply
I think the 'too little too late' you mean is the American education system. If things like critical thinking, personal finance, ethics, and philosophy were emphasized in K-12, do you think it would play out this way? Heck, Facebook might not even exist now, but sheep need their shepherds, supposedly.
[+] tim333|5 years ago|reply
Or just a button for 'this is fake news' or some such. Once, say, 10 people click that, you stop it spreading. A bit like the flagging system on HN.
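A toy sketch of that threshold idea (illustrative only; the class, fields, and threshold of 10 are all made up, not any real Facebook or HN API):

```python
# Flag-threshold gating: distribution stops once enough distinct
# users flag a post as fake news.

FLAG_THRESHOLD = 10


class Post:
    def __init__(self, post_id):
        self.post_id = post_id
        self.flagged_by = set()  # distinct users who clicked the flag button
        self.shareable = True    # whether the platform keeps spreading it

    def flag(self, user_id):
        """Record a flag; halt distribution at FLAG_THRESHOLD distinct flaggers."""
        self.flagged_by.add(user_id)
        if len(self.flagged_by) >= FLAG_THRESHOLD:
            self.shareable = False
```

Using a set of user IDs rather than a counter means repeat clicks by one user can't brigade a post below the threshold on their own.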
[+] bognition|5 years ago|reply
Yep but the way Facebook ads work is you can temporarily promote specific content to help it go viral more quickly.
[+] rsynnott|5 years ago|reply
> they would prevent posts from spreading to tens of thousands or millions of people without being vetted.

But _the precious engagement_!

Viral nonsense is, unfortunately, effectively Facebook's business model, so they're unlikely to do much about it.

[+] nl|5 years ago|reply
This is very true.

> If they wanted to seriously affect this, they would prevent posts from spreading to tens of thousands or millions of people without being vetted.

And then you see the complaints (including on HN) about how Facebook is biased.

I do note that they have recently banned all QAnon groups and posts, so that's a start.

[+] paganel|5 years ago|reply
> they would prevent posts from spreading to tens of thousands or millions of people without being vetted.

Mainstream media outlets like the NYTimes or the Washington Post have become too one-sided (I think that if Trump were to say a common-sense thing like "the Earth is round" they'd certainly find something to argue about, like "it's not actually round, round! it's an ellipsoid! Trump is lying!"), at this point applying your (in other contexts) perfectly reasonable suggestion will only alienate half of the electorate.

[+] ponker|5 years ago|reply
The problem that the Internet has exposed is that the average person is too susceptible to lies to be a useful contributor to republic-style electoral governance.
[+] hnracer|5 years ago|reply
The problem with vetting is that it's unlikely to be done in an unbiased way. Facebook is a Silicon Valley company with a young staff that leans significantly to the left.

As a right-leaning individual this makes me uncomfortable because I know that staff aren't going to separate their politics from decision making surrounding vetting/choosing who gets to do the vetting. Here's a concrete example: Facebook staff, being left-leaning, do not view the "fine people hoax" as misinformation - their echo chamber wouldn't even bring it to their attention as something that could be misinformation, even though it is.

The only solution would be for a politically balanced workforce to be making these decisions, ideally a satellite office in a purple state/city with quotas for political leaning, set up only for the purposes of vetting. If we lived in a less polarized society, this wouldn't be an issue.

Would you feel comfortable if oil and gas workers who lean 80-90 percent Republican are "vetting" the information you're seeing (or are choosing who gets to do the vetting)?

As Naval says, you have a good system if you can hand the keys over to your opposition and things don't go wrong.

[+] hogFeast|5 years ago|reply
Yes, it is so simple. News should only come from approved sources. Once the veracity has been checked by a Facebook employee, it is then empirically true or false. I don't know why people in the 21st century cannot understand that all statements are true or false, and that we can place authority for determining truth with companies with known, trusted political allegiances.

For example, I have read a lot about healthcare-for-all not being free. The clue is in the name: "free" healthcare. I can't believe that misinformation, probably spread by insurance companies, has been allowed to spread. Facebook should put a stop to this.

[+] schwinn140|5 years ago|reply
Simple solutions:

* Block all political ads all the time.

* Require all political parties, and their PACs, to register as a known Advertiser account within the system. Any time an ad runs, regardless of the source, the associated party will be heavily penalized with a removal of their non-paid content.

There need to be ramifications for their abuse of the platform. Since Facebook can't charge them a fine, penalizing their organic exposure is the only thing it can hold against them.

Multiple offenses will result in longer and longer periods of their content being "muted".

[+] maybeOneDay|5 years ago|reply
Perhaps I'm misunderstanding, but does this not potentially incentivise political organisations to run their opposition's ads (it's probably feasible to do so in such a way that they don't really reach anyone) and then report the infractions, thereby getting their opposition punished?
[+] dsugarman|5 years ago|reply
Can they not charge them a fine? Can't they put something in their terms that requires a monetary penalty as well as a "time-out"?

I love your solutions, but I also worry that Facebook wouldn't want to put them in place because of all the lost revenue. If you make the abuser pay for the lost revenue, then all the incentives seem aligned.

[+] KiranRao0|5 years ago|reply
I would love it if Facebook could simply remove all political ads. The problem then becomes what qualifies as a political ad. This can be very obvious (vote for president = political, buy cereal = not political), but it can also be far less obvious.

For example, would an ad in support of an oil pipeline be considered a political ad? What if it's in support of an outcome of a referendum? Or an ad for a charity? What if that charity is the EFF/ACLU in support of changing policies?

It's an extremely difficult problem to differentiate what is considered a political ad and what is not, and I don't exactly trust Facebook to do so.

[+] ceejayoz|5 years ago|reply
> Political ads will be banned indefinitely after polls close on Nov. 3...

Of course.

[+] kyrra|5 years ago|reply
(googler, opinion is my own).

Google did the same thing: https://www.theverge.com/2020/9/25/21456323/google-election-...

As there is a good chance we won't know the result of this election on election night (or possibly within the first week), the tech companies seem to be getting in front of a potential problem, to keep campaigns from drumming up any kind of fear about the election results after the polls are closed.

[+] jsendros|5 years ago|reply
FWIW this misses the context that they'll already be banned 1 week before Nov 3. This just extends that ban beyond the election.
[+] koolba|5 years ago|reply
I predict they walk that back when they realize the huge market for post election day ads for steering the narrative of when to stop counting votes.

See 2000 for a great example.

[+] sleavey|5 years ago|reply
What's the reason not to ban them immediately?
[+] sg47|5 years ago|reply
Something about stable doors and horses.
[+] mrfusion|5 years ago|reply
I came across this article today and actually thought it was true for a couple of minutes.

https://babylonbee.com/news/facebook-will-now-add-warnings-t...

I just found it telling how bad the current state of affairs has gotten.

[+] driverdan|5 years ago|reply
Babylon Bee is satire, like The Onion but with a right wing bias.

Edit: If you're going to downvote please explain why. What I said is factually correct.

[+] divbzero|5 years ago|reply
Whenever I read about the potential harms of social media or targeted ads I think of the imagery from Snow Crash describing our retinas as the only exposed part of our brain, offering a direct neural interface for good or for ill.
[+] smcleod|5 years ago|reply
A truly brilliant book that I too think back to regularly.
[+] mensetmanusman|5 years ago|reply
Here is how facebook can solve this issue.

By default, each user (including news outlets) should have an area of influence analogous to a square kilometer of travel. This means that any post or share is only ever seen by a dozen people or so.

Now, the hard part comes in how you design an algorithm that reinforces community-building behavior. Essentially, every troll in the world should start out shadow-banned, with no way to work their way to a larger audience without years of community-building comments (kindness, engagement, supportiveness, etc.).

This would also mean that FB would have to enforce geotagging of comments, so that operatives in other countries couldn’t be a large part of the hidden political discussion in another country.
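A rough sketch of the area-of-influence idea above, delivering a post only to followers within a fixed radius of the poster (the radius, field names, and helper are all hypothetical, not any real Facebook feature):

```python
# Geo-limited delivery: only followers within RADIUS_KM of the poster
# ever see the post, approximating a "square kilometer of travel".
import math

RADIUS_KM = 1.0


def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points in kilometres."""
    r = 6371.0  # mean Earth radius in km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))


def audience(poster, followers):
    """Return only the followers within RADIUS_KM of the poster's location."""
    return [
        f for f in followers
        if haversine_km(poster["lat"], poster["lon"], f["lat"], f["lon"]) <= RADIUS_KM
    ]
```

This is also where the enforced geotagging mentioned above would bite: without a trustworthy location on each account, the distance check is meaningless.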

[+] vmception|5 years ago|reply
Profiteer off of unlimited Super PAC spending all year until 3 weeks before the election?

They really think we’re dumb.

The only thing Facebook can do right is to shut the ad and social graph down and deal with private shareholder lawsuits till the end of time.

[+] dangjc|5 years ago|reply
We have a serious misinformation problem. It started with Fox News decades ago, and now it’s amplified by Facebook. It’s too easy for bad info and conspiracy theories to proliferate, and good info to be downplayed, before the truth rises to the top. Most voters aren’t getting the full picture and the media sources they consume aren’t integrating evidence well.
[+] joquarky|5 years ago|reply
Retractions and corrections are usually buried deeper than the original erroneous statement.
[+] Upvoter33|5 years ago|reply
I just wish they would get out of politics altogether.
[+] castratikron|5 years ago|reply
What if they made all social media read-only for a day or two preceding the election? Analogous to radio silence. Then it would be impossible for anyone to post misinformation about where to vote or other things that are designed to change the result of an election.
[+] untog|5 years ago|reply
I imagine that would be hugely disruptive to people’s lives, given how central social media is these days.

Besides, you’d presumably only do this to US users. So if someone wanted to spread some piece of viral misinformation they’d just post it under a non-US user and promote it.

[+] netcan|5 years ago|reply
Honestly, I've been quite shocked over the last 5 years or so at Zuck's take on political and politically adjacent topics. I get that they're of the "money at any cost" ethic, but a lot of decisions (or lack thereof) have been downright imprudent.

These last-minute decisions are likely reactive to something, whether or not it's actually the debate comment that this writer thinks triggered it. My (uneducated) guess is that it's more related to stuff happening on the platform itself.

In any case, the creepiness of fb just keeps rising. Maybe they'll end up exploding the whole thing.

[+] bretpiatt|5 years ago|reply
The challenge for all of the platforms in exiting "political advertising" is the same as in exiting "selling likes/follows": if they don't do it directly, then third-party operators will enter midstream and broker it, connecting people with political (or other) content to a network of bot/fake/paid posting accounts who then go promote and spread the content.

Keeping advertising on the platform actually allows better control and enforcement than pushing it to dark-web markets.

The tech players don't want to be regulated like broadcast networks are, so they've created this mess by lobbying against regulation. Some level of "FCC" regulation (I use quotes as maybe it should be a different agency online), as we have with broadcast ads, could make this all much better.

FCC broadcast guidelines: https://www.fcc.gov/media/policy/statutes-and-rules-candidat...

[+] bladegash|5 years ago|reply
From the post about the potential for upcoming anti-trust actions against Big Tech earlier this week[1]:

“One thing that occurred to me is the potential that Democrats could be using the threat of anti trust actions to pressure the tech companies to be more proactive, re: misinformation prior to the elections. Guess we will know if they make any significant policy changes over the next week or two.”

Now I guess we will see if Google and the rest make any announcements :-).

[1] https://news.ycombinator.com/item?id=24697860

[+] mixmastamyk|5 years ago|reply
I wish they'd also de-emphasize the ugly political memes that are constantly forwarded by some addicted folks.

I used to unfollow them, but recently found that if I blocked their sources instead it was more effective because I could still hear from them when they post themselves.

I just don't see any use for this content, it is 99% garbage.

[+] shadowprofile77|5 years ago|reply
Am I missing something, or has a controversy been created out of nearly thin air by random politicized actors here? Facebook may be a gigantic media company with a whole social component, but in many ways (especially in terms of advertising) it's still mostly another media company with lots of eyeballs, fundamentally like so many media companies that have existed for a very long time.

Since political ads have been a part of media placement since (a very damn long time ago), what justification beyond hand-wavy "it's a social danger" arguments can those of you favoring this offer for such a ban, or for the feeling of having to pursue it?

It almost seems as if this only started being a supposed problem since the unpopular-among-educated-liberal-types Trump won an election. Prior to that I recall much less mention of anyone worrying too much about political ads on social media (my memory might be faulty on that though). So, is it a general thing or are we talking about many people now thinking there's a problem mainly because someone they happen to dislike took advantage of social media to win a major election?

Also worth noting:

1. Much of the really rabid fake news type content on platforms like Facebook doesn't even come from paid in-platform ads. It's sourced and virally spread from people (or fake accounts in many cases) posting it more or less organically, albeit in a highly coordinated way sometimes.

2. The whole argument of Russia supposedly using Facebook to "influence" the 2016 election has yet to be in any way credibly substantiated and has gone nowhere so far. Or does anyone have a piece of concrete evidence arguing the opposite to put forth?

[+] nsx147|5 years ago|reply
The answer here is to pull politics out into a separate app, made by FB or someone else