I worked at Facebook for most of 2017 and 2018. In the first week, they made it clear that you would be fired instantly for any improper access of user data.
They further said that if you need to access any sensitive personal data, or if you need to log in as a user in order to debug a problem, you need to have approval from your manager _before_ the access, not after.
Also, you are not allowed to access the data of anyone you know personally for any reason whatsoever. You have to find someone else to do that if it needs to be done.
Finally, they really do audit every single access of personal data. I had every reason to believe that if I accessed any data improperly, I would be fired within the week if not the day.
I don’t know how much abuse still exists despite all of the above, but I don’t think this article does a good job of explaining how seriously Facebook takes this.
> They further said that if you need to access any sensitive personal data, or if you need to log in as a user in order to debug a problem, you need to have approval from your manager _before_ the access, not after.
But were you still able to just look at the data or login as the user without the permission? I think that's the key question.
Talk is cheap. As a user it's not good enough for me that people are being told internally not to abuse their access. Just remove the permissions from the employees and make them request the permissions for each individual case instead of trusting the employees to follow the rules.
> Also, you are not allowed to access the data of anyone you know personally for any reason whatsoever.
Which explicitly also includes yourself, because looking yourself up would e.g. let you see who has you blocked.
You're also fairly unlikely to access personal data by accident. You have to explicitly go look for it in the internal tooling, which has pretty good signage around interfaces that could potentially expose you to personal data by accident, so you know to be careful (I did a couple of tickets for the abuse team, and testing that stuff was riddled with interstitials asking if I was sure I wanted to access personal data). "Oops, I didn't notice" just doesn't fly.
They're also fairly good at removing the semi-legitimate reasons you'd have for accessing personal data. If you have friends or family that are having some sort of issue, there's a separate priority queue you can submit requests to so they'll look into those issues for you, for example. If you need test data, there are great tools to generate test users with all sorts of weird configurations (so you don't have to rely on finding a live one that meets your criteria).
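The test-user tooling described above can be sketched as a small factory that stamps out synthetic accounts in whatever odd configuration a test needs. All names and fields here are hypothetical illustrations, not any company's actual API:

```python
import itertools
import random

# Monotonic IDs so every synthetic account is distinct.
_ids = itertools.count(1)

def make_test_user(**overrides):
    """Build a synthetic user record; keyword overrides create the
    'weird configurations' a test might need (blocked users,
    deactivated accounts, unusual locales, and so on)."""
    uid = next(_ids)
    user = {
        "id": uid,
        "name": f"test_user_{uid}",
        "is_test": True,  # flags the account as synthetic, never a live user
        "locale": random.choice(["en_US", "de_DE", "ja_JP"]),
        "blocked_ids": [],
        "deactivated": False,
    }
    user.update(overrides)
    return user
```

The point of the design is that engineers never need to hunt through live data for an account in a particular state; they manufacture one instead.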
I'm surprised that this stuff is audit only. At my company, at least in the past five years or so, this type of access has been forbidden to almost all employees. You need to request access to these types of systems and provide justification for why you should have it. Access is controlled on a per-system basis -- it's not blanket access. Many of the most sensitive systems have auto-expiring access for humans.
Nowadays we are seeing many systems switch to a regime where you have to get another engineer to sign off on any access to production, and your access is limited to at most 24h. This isn't merely a policy -- it is enforced by technical controls that forbid ordinary human-user access to production. I literally cannot even send an RPC to services I work with that handle private data without getting a colleague to sign off on it.
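A minimal sketch of that kind of peer-approved, auto-expiring grant, with all class and field names (and the 24h TTL) as illustrative assumptions rather than any real system's API:

```python
import time

GRANT_TTL_SECONDS = 24 * 60 * 60  # access expires after at most 24 hours

class AccessGrant:
    def __init__(self, requester, approver, system, justification):
        # A colleague, not the requester, must sign off on the access.
        if approver == requester:
            raise ValueError("approver must be a different engineer")
        self.requester = requester
        self.approver = approver
        self.system = system
        self.justification = justification
        self.granted_at = time.time()

    def is_valid(self):
        # Enforced as a technical control, not just policy: the grant
        # simply stops working once the TTL elapses.
        return time.time() - self.granted_at < GRANT_TTL_SECONDS

def check_access(grant, system):
    """Gate every production RPC on a live, matching grant."""
    return grant is not None and grant.system == system and grant.is_valid()
```

The key property is that the default state is "no access": without an unexpired, peer-approved grant for the specific system, the call is refused.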
They had this rule at America Online when I worked there in the early 2000s. It was routinely violated by the managers, and was really only in place for the rank and file to cover their butts. I just assume bad managers and executives at Facebook routinely violate people’s privacy by digging through their information; it’s there, and Facebook hasn’t exactly shown an interest in protecting privacy.
Those policies will only catch someone after the fact. Firing someone is the bare minimum, it prevents a single repeat offender, but they could already do damage.
Agreed. I was there at the same time; it was taken pretty seriously, and things grew progressively more locked down as time went on.
A friend of mine worked at a large bank in customer service, and this was also a big part of their training; there was even a speech trainees were given before going to their desks at the end of training. He said that, almost invariably, at least one person from every class was fired within hours for looking up the accounts of someone they knew or a celebrity.
There's a certain trend in most companies that every bureaucratic rule can be traced back to a specific event where someone caused a problem by doing what the rule was written to forbid - so it's possible that you were indirectly told about four incidents.
If you know the right people, can you be taken off the audit list? I remember in the early days of Facebook, access to everyone’s account was seen as an unofficial perk of the job; the cynic in me would say that this perk still exists but is only given to people who can be trusted to never talk about it.
Just firing is not enough for the cases of personal data abuse. What I would like to see is those employees being reported by Facebook to the authorities to be further legally prosecuted.
We should not rely on the goodwill or internal guidelines of a single company in such a sensitive topic.
There’s a difference between having an audit trail and actually using it. I would be interested to know how often Facebook analyzes this data and actually fires people for improper usage.
These comments are all relatively ignorant of the fact that implementing these sorts of privacy controls generally makes your product worse and your engineers miserable.
> Facebook employees were granted user data access in order to “cut away the red tape that slowed down engineers,” the book says.
If we can take a step back, this is a totally reasonable policy. Unfortunately Facebook is facing the reality of the law of large numbers in that once you have 1000+ people the chances of having a bad actor in your system is much higher than 10 people.
Maybe this is a hot take, but I for one prefer that my company trusts me to do the right thing rather than make it hard to do my job. I'm not saying that there isn't a solution for this, but behind the "facebook corporation" there is generally just a bunch of engineers that want to do a good job at work.
It’s not zero. Hashed passwords are still passwords and should be treated as such. “Zero” implies that hashed passwords are not passwords, since otherwise you won’t get to zero.
Just because passwords are hashed doesn’t mean you can give access to them willy nilly and happily claim that “zero” people have password access.
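For context on why hashing changes the picture at all, here is a minimal sketch of salted password hashing using Python's stdlib `scrypt` (a real system should use a vetted scheme such as bcrypt or argon2; the parameters below are common illustrative choices). Verification never requires anyone to see, or store, the plaintext:

```python
import hashlib
import hmac
import os

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Derive a salted hash; only (salt, digest) are ever stored."""
    salt = os.urandom(16)  # unique per user, defeats rainbow tables
    digest = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    """Re-derive with the stored salt and compare in constant time."""
    candidate = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return hmac.compare_digest(candidate, digest)
```

This is why "zero people with password access" is the defensible target: the verification path only ever touches the derived digest, so no role legitimately needs the plaintext.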
A long time ago I managed FB's udb backups, the central MySQL schema around which all other services were strung. Even from the beginning FB never stored plaintext passwords. Can't say there weren't log excursions or the like, but when found these would have been critical bugs and fixed immediately.
They probably mean some access to production servers where it might be scanned in memory or otherwise grabbed using debug tooling. Making this possible for 0 people will be challenging and at the least add a lot of complexity.
Could it be possible they mean access to servers where authentication is handled? If you had root access to such a server you could look at memory or packets and work towards revealing a user's password.
This is a pretty widespread issue, I'd imagine; we just don't hear about it, or people aren't caught.
I know they've been locked down since I've left, but some of the tools we were allowed to just freely access at Uber were a tad scary, to say the least.
I'm sure every company with a very large userbase, such as Facebook/Microsoft/Google/etc claim they have internal protections/checks but have even more holes like this.
Pretty inexcusable by 2015. FB was hardly a new company at that point.
Every Googler gets the message that you keep your mitts off private information in logs (or get terminated) drilled into them in their first week of training. Logs access is a) restricted b) audited c) tiered and d) enforced. That was the case in 2011 when I started and it's the case now.
Not saying Google is perfect, but it's not like companies like FB didn't have a template for privacy standards that they could have followed.
All that said, back then I just personally assumed that this was how FB was operating :-( I am hoping they've improved since.
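The restricted-and-audited pattern described above can be sketched as a wrapper that refuses access without a recorded justification and writes an audit entry on every call. This is purely illustrative and not Google's (or anyone's) actual tooling:

```python
import functools
import getpass
import logging
import time

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("audit")

def audited(tier):
    """Decorator: every call to the wrapped accessor must carry a
    justification, and every call is written to the audit log."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, justification=None, **kwargs):
            if not justification:
                raise PermissionError(
                    f"access to {tier} data requires a justification")
            audit_log.info(
                "user=%s fn=%s tier=%s reason=%s ts=%d",
                getpass.getuser(), func.__name__, tier,
                justification, time.time())
            return func(*args, **kwargs)
        return wrapper
    return decorator

@audited(tier="restricted")
def read_user_logs(user_id):
    # Stand-in for a real log-access endpoint.
    return f"(logs for {user_id})"
```

The tiering in a real system would map different `tier` values to different approval requirements; here the decorator only demonstrates the "no silent access" property.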
I know an engineer, a security engineer at Google who is pretty well-known, who went to work at Google specifically so he could get at people’s personal data. I don’t know if he actually does it, but he boasted quite openly for years that he wanted to be the “architect” and see everything and know everyone’s secrets. He is now a highly placed Google security employee.
In a sane world this would be a company-ending event, or at least seriously impact their stock and C level execs.
The idea that:
a) User data access is not just allowed but normal (or at least that it was at one point)
b) That it's allowed at all so widely
c) That (a) and (b) are true despite repeated abuse
is absolutely insane. "Nearly every month" is insane. It should be criminal, but it isn't.
Sadly, it's all too common for engineers to have way more access than is necessary, though this seems extreme. I see no reason why any engineer, outside of extreme circumstances that should set off alarm bells, should have access to sensitive user data like passwords. It should generally not be the case that direct access of data is needed at all.
Where I used to work, user activity/transaction data sent to us would be stored on a single giant NFS volume. If you were added to a Linux group, you got full, unaudited access to everything. Whenever someone tried to build anything that would restrict and audit access, there would be a ton of pushback from engineers and customer support, who loved being able to ssh into a machine and have full access to everything.
Not uncommon in early-stage startups. I've learned to build these sorts of things with access control and auditing up-front, but certainly have built my share of attractive nuisances over time.
My advice is stub something out up front, before you go to production. You don't have time to do it right, but you do have time to establish the norm. Even if your audit trail is just a two-minute DB trigger that records that Worker Bob changed Customer Alice's password yesterday at 11, make it clear that there needs to be an articulable reason at hand for having used mechanisms that may violate users' trust.
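That "two-minute DB trigger" can be sketched with SQLite. The actor is hardcoded here for brevity; a real system would pull the acting employee from session context, and the table and column names are made up for the example:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users (name TEXT PRIMARY KEY, password_hash TEXT);
    -- Append-only record of who changed whose password, and when.
    CREATE TABLE audit (
        actor TEXT,
        subject TEXT,
        changed_at TEXT DEFAULT CURRENT_TIMESTAMP
    );
    -- Fires automatically on every password change.
    CREATE TRIGGER log_password_change
    AFTER UPDATE OF password_hash ON users
    BEGIN
        INSERT INTO audit (actor, subject) VALUES ('worker_bob', NEW.name);
    END;
""")

conn.execute("INSERT INTO users VALUES ('alice', 'old-hash')")
conn.execute(
    "UPDATE users SET password_hash = 'new-hash' WHERE name = 'alice'")
rows = conn.execute("SELECT actor, subject FROM audit").fetchall()
print(rows)  # [('worker_bob', 'alice')]
```

It is deliberately crude, but it establishes the norm the comment argues for: the sensitive action cannot happen without leaving a record.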
A lot of the commenters here are rightly not surprised about this. A few years down the line, it's going to be a similar story with voice-activated devices like Alexa. Employees, contractors, and advertisers will all have access to the voice data of not just the person using these devices but of anyone who just happens to be in the vicinity. And no one will be surprised about it.
This is so common. Think of any start up you gave way too much info to. They have lots of lower paid employees who can look at that data. It happens a lot.
I was on vacation in Egypt this year with a guy who worked at Facebook, along with a fairly large group of us from the States. He would stalk the Facebook profiles of people in our group to find out information on them, and even confront them about it if they made him upset enough. He even messaged them directly on Facebook to tell them off.
As a side note, he was mostly only interested in having hookups and orgies with Ukrainian tourist women while in Egypt, made even more bizarre when we found out he had a wife back in the United States who worked at Apple.
He was not a well-liked guy, and he was very rude to the Egyptian locals, especially the Bedouins.
At this point, why not just make all of this social media information public? What’s the difference between more than 16,000 people knowing whom you cheated with and the whole world knowing?
By six degrees of Kevin Bacon there is surely a connection to one of those 16,000 people in your bubble, hence the secrets are theoretically out anyway. Why should they have the advantage over you, and potentially blackmail you?
/s
While Facebook has a massive pile of data, what about the other massive collectors of data out there?
Do similar processes and consequences apply in the worlds of your banks, credit card companies, Experian, Equifax, the NSA, FBI, and other groups, both government and commercial?
She should sue Facebook. This industry has no will to change. It must be coerced.
Ideally, your data would be encrypted and no one at Facebook would have access to it. Only the people you have chosen to share that data with should have access, to the degree given (crypto-wise, maybe this means you decrypt and broadcast to those people, like email). Is there any reason a Proton-like model wouldn't work technically? Facebook could still make money off ads, but those ads would be less targeted. Good. We need less targeting.
I would not be surprised if this was common. I simply don't have enough faith in society today to believe that most people would understand how wrong this is.
Everyone here is unsurprised by this and at this point I expect the social networks to just abuse my user data anyway. They won't change and they will never stop this.
Who is to say this isn't already happening with the other social networks that are scooping up our data, only for them to admit their actions 5 years from now?
Maybe they are all doing this as we type.
To Downvoters: So you think that these social media companies are NOT abusing our data? There's tons of evidence of this everywhere, including this confession.
There can only be one explanation for why I'm getting downvoted heavily for stating an undeniable fact: it is likely by those working at these companies, because they know that I am right. The point still stands regardless of any downvotes (and censoring of the truth).
na85|4 years ago
Yet TFA contains a quote about how abusing personal data is "against Mark's DNA". Horseshit.
Facebook is the enemy.
[0] https://www.esquire.com/uk/latest-news/a19490586/mark-zucker...
staticassertion|4 years ago
None of this should even be possible.
aaronmdjones|4 years ago
> Stamos suggested tightening access to fewer than 5,000 employees and fewer than 100 for particularly sensitive information like passwords.
I'm sorry, what?
I can tell you the number of legitimate engineers that should have access to user's passwords.
It's a nice, round number.
It's zero.
hu3|4 years ago
I have a hard time believing that Facebook would store user passwords without at least hash + salt which makes it virtually unrecoverable.
mothsonasloth|4 years ago
Any employee logins are done through skeleton keys that are audited.
stevespang|4 years ago
I smell lawsuits going for the very deep pocket of Zuck . . . .
api|4 years ago
I bet this is incredibly common, and far more so at lower profile and even shadier surveillance capitalist companies.