Huh, never thought I’d see XCheck in a news article. I used to work at Facebook and spotted abuse of this system by bad actors and partly fixed it. It’s still not perfect but it’s better than it used to be.
I think I might have agreed with the author of this article before working in Integrity for a few years. But with time I learned that any system that’s meant to work for millions of users will have some edge cases that need to be papered over. Especially when it’s not a system owned and operated by a handful of people. Here’s an example: as far as I know it’s not possible for Mark Zuckerberg to log in to Facebook on a new device. The system that prevents malicious login attempts sees so many attempts on his account that it disallows any attempt now. There are no plans to fix it for him specifically, because it works reasonably well for hundreds of millions of other users whose accounts are safeguarded from being compromised. His inconvenience is an edge case.
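A rough sketch of the mechanism being described (this is not Facebook's real system; the window, threshold, and class names are invented for illustration): a naive throttle counts recent failed logins per account, and an account under constant global attack never drains its counter, so even the legitimate owner is locked out of new devices.

```python
from collections import defaultdict

WINDOW_SECONDS = 3600  # sliding window for counting failures (invented value)
MAX_FAILURES = 10      # lockout threshold (invented value)

class LoginThrottle:
    """Hypothetical per-account failed-login throttle."""

    def __init__(self):
        self.failures = defaultdict(list)  # account -> failure timestamps

    def record_failure(self, account, now):
        self.failures[account].append(now)

    def allow_attempt(self, account, now):
        # Keep only failures inside the sliding window, then compare to the cap.
        recent = [t for t in self.failures[account] if now - t < WINDOW_SECONDS]
        self.failures[account] = recent
        return len(recent) < MAX_FAILURES

throttle = LoginThrottle()

# An ordinary account with a couple of typos is fine:
throttle.record_failure("alice", now=100)
throttle.record_failure("alice", now=200)
print(throttle.allow_attempt("alice", now=300))   # True

# An account hammered by bots worldwide is locked out for everyone,
# including its real owner on a new device:
for t in range(0, 1000, 10):
    throttle.record_failure("famous", now=t)
print(throttle.allow_attempt("famous", now=1000))  # False
```

The point of the sketch is the trade-off in the comment: a rule tuned for hundreds of millions of typical accounts produces a permanent false positive for the one account that is attacked nonstop.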
With XCheck specifically, what would happen is that some team working closely on a specific problem in integrity might find a subpopulation of users being wrongly persecuted by systems built by other teams located in other time zones. They would use XCheck as a means to prevent these users from being penalised by the other systems. It worked reasonably well, but there’s always room for improvement.
I can confirm some of what the article says though. The process for adding shields wasn’t policed internally very well in the past. Like I mentioned, this was being exploited by abusive accounts: if an account was able to verify its identity it would get a “Shielded-ID-Verified” tag applied to it. ID verification was considered to be a strong signal of authenticity. So teams that weren’t related to applying the tag would see the tag and assume the account was authentic. And as I investigated this more I realised no one really “owned” the tag or policed who could apply it and under what circumstances. I closed this particular loophole.
In later years the XCheck system started being actively maintained by a dedicated team that cared. They looked into problems like these and made it better.
Thanks a lot for posting these details and dealing with the critical replies.
I think that with your background and investment in improving these problems, it will be hard for you to understand the perspective many people have that Facebook is fundamentally rotten at this point. These conflicts arise from FB's core business model. It calls up a torrent of hate speech and misinformation with the right hand while trying to clumsily moderate with the left.
You can hire whole teams to prevent singed fingers or protect certain possessions, but the point of a fire is to burn. If there are no good solutions while maintaining FB's core approach and business model, then it would be better for the world if it were extinguished.
I think people that work on this feature mean well - or at least they think that they mean well. But as a result, we have a two-tier system where the peasants have one set of rules and the nobility has an entirely different one. It may have started as a hack to correct the obvious inadequacies of the moderation system, but it grew into something much more sinister and alien to the spirit of free speech, and is ripe for capture by ideologically driven partisans (which, in my opinion, has already happened). And once it did, the care that people implementing and maintaining the unjust system have for it isn't exactly a consolation for anybody who encounters it.
Let's take your example of Mark Z. What makes you think this is a unique case? What about people who suddenly come to fame, like viral video subjects?
A simple solution is to disallow logging in from new devices, with the attempt silently dropped so you are not bothered, unless you do some magic like generating a one-time key to complete the procedure on the new device.
I could think of a lot of people that would find it useful.
Or allow setting up a 2FA token (other than mobile) correctly.
Instead, what FB does is make it impossible to secure your account, because they insist that, whatever you want, you should always be able to recover your password with your phone number.
Years ago, when I was still using it (I had reason), I tried to secure it with my YubiKey. Unfortunately, it wasn't possible to configure FB to disallow logging in on a new device without the key.
I understand how the discussion probably went: "Let's make it so that we can score some marketing points, but let's not really make it a requirement, because we will be flooded with requests from people who don't understand that they will never be able to log in if they lose the token."
But that's exactly what I want. I have a small fleet of these, so it is not possible for me to lose them all. Unfortunately, most sites that purport to allow 2FA can't do it well, because they either don't allow configuring multiple tokens or, if they do, they don't let you really lock your account so that logging in without the token is impossible.
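The policy being asked for can be sketched in a few lines (this is purely illustrative: the `Account` class, field names, and flow are invented, not any real Facebook API). The account registers several hardware keys, and a hard "key-only" switch disables every fallback such as SMS recovery:

```python
class Account:
    """Hypothetical account security policy: multiple registered hardware
    keys, with an optional hard lock that disables all fallback methods."""

    def __init__(self, registered_keys, key_only=True):
        self.registered_keys = set(registered_keys)  # the user's "small fleet"
        self.key_only = key_only

    def can_log_in(self, presented_key=None, sms_code_ok=False):
        # Any one of the registered keys is sufficient, so losing a single
        # token is not fatal.
        if presented_key in self.registered_keys:
            return True
        # Fallbacks only work if the user has NOT locked the account to keys.
        return sms_code_ok and not self.key_only

acct = Account({"yubikey-1", "yubikey-2", "yubikey-3"})
print(acct.can_log_in(presented_key="yubikey-2"))     # True
print(acct.can_log_in(sms_code_ok=True))              # False: no SMS bypass
print(acct.can_log_in(presented_key="attacker-key"))  # False
```

The design choice is exactly the complaint above: once `key_only` is true, the phone-number recovery path stops being an attack surface, at the cost of support tickets from users who lose their only token.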
Most of your response treats the service and its flaws as an engineering problem, whereas the ramifications in the real world aren't something Facebook gets to absolve itself from. They need to own the problem completely. If they can't solve the issue through engineering, it is their responsibility to hire hundreds of thousands of moderators.
You haven’t really touched on the main problem discussed in the article, which is that to Facebook, there are special users - mainly celebrities and politicians - who get to play by different rules than the rest of us. Social media was supposed to help level the playing field of society, not exacerbate its inequalities.
> Huh, never thought I’d see XCheck in a news article.
Is everyone at Facebook this naive? You didn't think a system that creates a secret tier of VIP accounts, where the rules (and laws) don't apply, while publicly claiming the opposite, would end up ... in the news?!
I mean, unless you are the 5th directorate of the KGB ... but even then, shit like this always comes out.
This system also made it impossible for me to ever log in again. It had been a few years since I used FB but some friends tagged me at an event, so I figured what the heck.
I was presented with a system I had never configured, which asked me to contact people I don't know to get them to vouch for me. At the same time my FB profile was blackholed, and my wife and long time actual friends can't even see that I exist anymore. Just some person that astroturfed my name with no content (I have a globally unique name).
So I no longer exist from FB's perspective, which made both my decision to not use FB and to never use any FB products like Oculus much easier.
One of my favorite things about HN is seeing people come out of the woodwork to raise their hand and say that they worked on a system and give their insight. Thanks for sharing this perspective.
That's all well and good, but your comment as an insider directly implicates Facebook's CEO in perjury for lying to Congress. During a hearing he claimed all users were treated equally. This is clearly not the case.
Perjury before Congress can result in jail time, and I hope he's made an example of.
The problem seems to be, though, that while the company may have tools to detect abuse, if they're choosing selectively when to enforce things, it defeats the entire point.
Strong opsec that the supporting documents are actual photos of a computer screen, judging from the visible moiré (or were somehow altered to look that way).
After numerous leaks, Facebook's internal security team became very good at identifying leakers. The person responsible for this 2016 post was identified within hours and terminated the next day: https://www.buzzfeednews.com/article/blakemontgomery/mark-zu.... The leaker was easily identified by the names of friends liking the post (that and part of their name was visible).
Facebook-issued laptops are filled with spyware, monitoring everything down to the system-call level, and practically every access to internal systems is logged at a fine-grained level. The only way to exfiltrate data with plausible deniability would be to photograph the screen with a personally owned device. The fact that you searched for the internal wiki page and viewed it is nothing, but that shortly afterward you invoked the keyboard shortcut for a screen capture, then inserted a USB drive, and copied a file ("Screen shot ____.png" even!) to it (all logged) ... congratulations, you're caught.
Well that, and the fact that his face was immediately to the left of his name.
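The detection described above is essentially ordered event correlation over an audit log. A minimal sketch, with invented event names and a simplified greedy matcher (a real pipeline would be far more sophisticated): flag a user when the tell-tale sequence of events appears in order within a short time window.

```python
# Invented event taxonomy: the suspicious chain described in the comment.
SUSPICIOUS_SEQUENCE = ["wiki_view", "screenshot", "usb_insert", "file_copy"]

def flag_exfiltration(events, window=600):
    """events: list of (timestamp, event_type) sorted by time.

    Greedy matcher: returns True if the suspicious sequence occurs in
    order, with each later step within `window` seconds of the first.
    """
    idx = 0       # next step of the sequence we are looking for
    start = None  # timestamp of the first matched step
    for ts, kind in events:
        if kind != SUSPICIOUS_SEQUENCE[idx]:
            continue
        if idx == 0:
            start = ts
        elif ts - start > window:
            continue  # too late to belong to this run; ignore the event
        idx += 1
        if idx == len(SUSPICIOUS_SEQUENCE):
            return True
    return False

log = [(0, "wiki_view"), (30, "screenshot"), (90, "usb_insert"), (120, "file_copy")]
print(flag_exfiltration(log))                                  # True
print(flag_exfiltration([(0, "wiki_view"), (30, "usb_insert")]))  # False
```

Each individual event is innocuous; it is the ordered combination, close together in time, that makes the pattern damning.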
It seems like no one has figured out a good system for moderation on the internet.
IIUC, Facebook hired contractors to do it, then realized that that didn't work and created XCheck to cover the visible cases, and is now in trouble because XCheck also doesn't work and rubber-stamps everything. Even before this there were news stories about the horribleness of those contract moderator jobs. Reddit tried to federate moderation, but it's since become clear that all top subreddits are moderated by the same people. Even HN only works because dang busts ass to keep it good, and that has obvious limits (what happens when dang goes on vacation or retires?)
The only systems that have figured out moderation at scale are Wikipedia and StackExchange. But see what HN thinks about that.
Nobody wants to admit that the only type of moderation that actually works at scale is an entrenched group of somewhat-expert overly-attached users gatekeeping contributions with (what looks like to the novice and sometimes even to the established user) extreme prejudice on a website with intentionally highly limited scope.
I don't think universal moderation (a moderation standard across all users) is possible or even desirable.
Different users want different things. There are users who never want a single even mildly insulting word. There are users who want unlimited freedom.
The best you can do is to break down moderation and let people opt into a level and form of moderation. Tell them upfront what they are getting and let them pick (or let them make their own moderation rules that apply clientside).
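The opt-in, client-side model suggested above can be sketched as a filter the client applies after receiving everything (the levels, tags, and lambdas here are invented for illustration, not any real product):

```python
# Each user picks a moderation level; the filter runs client-side,
# so the server never has to pick one standard for everyone.
LEVELS = {
    "unfiltered": lambda post: True,
    "mild":       lambda post: "insult" not in post["tags"],
    "strict":     lambda post: not post["tags"],  # only unflagged posts
}

def visible_posts(posts, level):
    keep = LEVELS[level]
    return [p["text"] for p in posts if keep(p)]

posts = [
    {"text": "nice weather", "tags": []},
    {"text": "you fool",     "tags": ["insult"]},
    {"text": "buy pills",    "tags": ["spam"]},
]
print(visible_posts(posts, "unfiltered"))  # all three posts
print(visible_posts(posts, "strict"))      # ['nice weather']
```

The design choice: the platform's job shrinks to honest labelling, while the choice of what to see moves to the user, which is exactly the "tell them upfront and let them pick" idea.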
Ironically, HN's great moderation caused it to become very popular, which has made the task of moderating it all much more difficult, which is having a noticeable effect on discussions and which articles make it to the front page.
Reddit could change their TOS tomorrow to prevent users from moderating more than 2 subreddits if they wanted; others would take their place. But the mods of subreddits that have not been banned are advertiser friendly.
I think part of the problem of "moderation" is exposure, and the incentive to maximize user engagement. Posts that nobody sees don't need to be moderated. The problem comes from the fact that platforms offer the most visibility to the worst content, because getting users riled up, excited, or upset is the core of their business. It's their only business.
Maybe moderation could be solved by regulating the number of likes or reposts a given user can make or a given post can receive. Seems a little far-fetched but worth thinking about.
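That proposal is basically a per-post rate cap. A minimal sketch (the cap value and class are invented): once a post hits its daily repost budget, further reposts are refused, which bounds how fast anything can go viral.

```python
from collections import Counter

DAILY_REPOST_CAP = 1000  # invented limit for illustration

class RepostLimiter:
    """Caps how many reposts a single post can receive per day."""

    def __init__(self):
        self.counts = Counter()  # (post_id, day) -> reposts so far

    def try_repost(self, post_id, day):
        key = (post_id, day)
        if self.counts[key] >= DAILY_REPOST_CAP:
            return False  # cap reached: the post stops spreading today
        self.counts[key] += 1
        return True

limiter = RepostLimiter()
results = [limiter.try_repost("rage-bait", day=1) for _ in range(1200)]
print(results.count(True))   # 1000 reposts allowed
print(results.count(False))  # 200 refused
```

Note the cap resets per day, so slow organic spread is untouched; only the explosive amplification that the engagement machine rewards gets throttled.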
> It seems like no one has figured out a good system for moderation on the internet.
I use locals.com: lots of small, disjointed communities where posters have to pay (or not) a small fee per month ($1 to $5), which keeps the trolls and influence campaigners away.
This should have been obvious during the election, when Trump clearly violated the "don't mislead the public about how elections work" rule by claiming that postal votes were whatever it was he said they were.
That is a clear ban. It says so in the "community guidelines"
(side note, you should really read the community guidelines, they are a great set of rules for keeping a community vibrant and happy, assuming they are enforced....)
I can see why facebook did it, you don't want to obviously piss off a capricious party with the power to fuck with your bottom line. It doesn't make it any better.
I don't understand the way their enforcement works. I've reported videos of people literally setting live animals on fire and been told there was no violation, but my wife called someone a "loser" and got a week long ban.
I once made fun of Justin Bieber (said he acts like a baby) on IG and got a warning. Some guy threatened to hunt me and my family, kill us, and do bad things to our bodies, and IG said it didn't violate any rules when I reported it. My account now can't even post the word "chump" without warnings. Talk about backwards.
It’s very safe to say there is no adult in the room at FB/IG when it comes to rule enforcement. I simply cannot wait until they get the whip from some governments.
Is it really that bad that they apply slightly different sets of rules to accounts with more notoriety?
For example, do we (as Facebook consumers) want newly created accounts with @hotmail email treated the same as a new account with @doj.gov, and the same as a celebrity with a million followers?
Do we want the same set of rules for a suspected Russian troll account to be applied to a major politician? (well..some here might, but I don't).
I think as your account's age, status, and popularity grow, you should be given *some* flexibility under the rules. Imagine a points system behind the scenes, where bad things earn you points and other things remove points. At a certain point threshold you are banned, suspended, etc.
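The points system imagined above can be sketched directly (point values, thresholds, and violation names are all invented): violations add points, clean behaviour decays them, and crossing thresholds escalates the penalty.

```python
# Invented scoring table and thresholds for illustration.
POINTS = {"spam": 2, "harassment": 5, "violence": 10}
SUSPEND_AT, BAN_AT = 10, 20

class Ledger:
    """Hypothetical behind-the-scenes enforcement ledger."""

    def __init__(self):
        self.points = {}

    def report(self, user, violation):
        self.points[user] = self.points.get(user, 0) + POINTS[violation]
        return self.status(user)

    def decay(self, user, amount=1):
        # e.g. a point forgiven per month of clean behaviour
        self.points[user] = max(0, self.points.get(user, 0) - amount)

    def status(self, user):
        p = self.points.get(user, 0)
        if p >= BAN_AT:
            return "banned"
        if p >= SUSPEND_AT:
            return "suspended"
        return "ok"

ledger = Ledger()
print(ledger.report("troll", "harassment"))  # 'ok' (5 points)
print(ledger.report("troll", "harassment"))  # 'suspended' (10 points)
print(ledger.report("troll", "violence"))    # 'banned' (20 points)
```

An established account with years of accumulated decay would absorb a borderline violation that instantly suspends a fresh troll account, which is the "earned flexibility" the comment describes, without a secret whitelist.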
I will jump on this bandwagon and say color me not surprised.
The issue is emblematic of a bigger problem, though. General trust in our society is down. It stretches beyond the sectors normally understood as BS (advertising and HR come to mind) and has moved to corrupt just about everything else out there. We are at a point where the only organization that is somewhat trusted is the military.
That is not a good state of affairs.
And the military is the pawn of a corrupt State Department, national security elite, and military industrial complex, which renders moot the mostly honorable conduct of people in the military.
Because we need regulation of new technologies, and we need to address issues that have become endemic and long-standing enough that the general populace genuinely understands them, from neonics and bees to misinformation and Facebook. But our political class has adopted a policy that demands no action be taken, as that is the official position on all free-market-related issues for one of the two major political parties.
So popular belief that our society can fix the issues it is presented with drops.
Which means societal trust drops.
And those who are causing said problems, become emboldened.
I got banned yesterday for quoting a Nazi official on propaganda:
“Propaganda must facilitate the displacement of aggression by specifying the targets for hatred.”
- Joseph Goebbels
Yeah. Facebook fucking sucks.
Oh, the reason? Encourages danger and violence (which, in their broad definition, now includes quoting individuals who were associated with dangerous regimes). Welcome to the day and age in which another organization now decides you are dangerous and censors your presence from the internet, independent of what you say.
> In a written statement, Facebook spokesman Andy Stone said criticism of XCheck was fair, but added that the system “was designed for an important reason: to create an additional step so we can accurately enforce policies on content that could require more understanding.”
Would be great if us plebs could get the privilege of accurately enforced policies.
Facebook are infamous liars, but journalists keep covering Facebook as if their various PR statements are true
This is like, the 100th time Facebook's public relations has been caught lying about pretty much every god damn topic they have ever addressed in public, but the coverage never, ever changes
Every time it turns out Facebook was deceiving the public, or investing a less-than-reasonable effort into protecting the public, journalists are shocked, shocked that FB would do something like this.
> At least some of the documents have been turned over to the Securities and Exchange Commission and to Congress by a person seeking federal whistleblower protection, according to people familiar with the matter.
The story-within-the-story here is that there is a FB whistleblower who wanted to bring this to light, not unlike other high-profile cases involving government surveillance. It amazes me that one person can wield more power than scores of seasoned journalists.
When a company (or even more generally a social phenomenon) is so big, the only logical consequence is that it becomes embedded in the layer that it services.
In society not everybody is equal, and a social movement with the massive scope that Facebook has cannot deviate from that rule.
Power law is a thing; you can't escape it, not even the Universe can.
It's not right and it's not wrong. It just is.
Outside of the Facebook issue, can you ever really automate solutions for managing society-scale interactions while still being fair to people?
If you happen to become an edge case similar to a celebrity, but actually fixing the problem for you bumps into corporate budgetary restrictions (you're not worth it, but the celebrity is, so the solution is to just add them to a no-moderation whitelist while you suffer), is that fair? What are the social and societal consequences of this?
The thing about what Facebook is doing is that it may be impossible. And if it's not impossible, it may be a profoundly bad idea.
Facebook is trying to build a social network with 100% reach and a userbase beholden to a globally uniform set of rules (where possible; the laws of individual nations will forever intervene). This is not something that has ever succeeded. We don't actually know, a priori, whether you can govern the whole of humanity under one set of norms. It's never been done.
It's possible it fundamentally can't be done... That the end result of this experiment is that Facebook fractures and ends up either having to vend multiple views of its userbase with different rules (like Reddit) or has a large chunk of the human populace it can never get on-board. But we should keep in mind what the goal is.