United857|5 months ago

That bit about accessing user data is rather surprising. When I was at Meta, the quickest way to get fired as an engineer was to access user data or accounts without permission or a business reason. Everything was logged and audited down to the database level. I can't imagine that changing, and the rules are taught very early in the onboarding/bootcamp process.
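For readers unfamiliar with the pattern being described, here is a minimal sketch of what "every access is logged" can look like at the application layer. All names (read_user_record, the in-memory _DB) are hypothetical illustrations, not Meta's actual tooling:

    import logging
    from datetime import datetime, timezone

    logging.basicConfig(level=logging.INFO)
    audit_log = logging.getLogger("audit")

    # Hypothetical in-memory store standing in for the real database.
    _DB = {"user-42": {"name": "Alice", "phone": "+1-555-0100"}}

    def read_user_record(actor_id: str, user_id: str, business_reason: str) -> dict:
        """Return a user record, recording who read it, when, and why."""
        audit_log.info(
            "access actor=%s user=%s reason=%r at=%s",
            actor_id,
            user_id,
            business_reason,
            datetime.now(timezone.utc).isoformat(),
        )
        return _DB[user_id]

    read_user_record("eng-7", "user-42", "debugging ticket-1234")

The point of such a layer is that the question is never whether an engineer *could* read the data, but whether every read leaves a record someone can review.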

lysace|5 months ago

That part of the complaint is specifically about 1,500 "WhatsApp engineers".

Different culture from the blue app, or whatever they call it?

MrDresden|5 months ago

But the crucial bit to know here would be whether that data was readable in any way if it was accessed.

Personally, it doesn't matter to me that there are auditing systems in place if the data is readable in any way, shape, or form.
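MrDresden's distinction, auditable access versus unreadable data, is essentially the argument for designs where the service never holds the plaintext key. A toy sketch using the third-party cryptography package (assuming it is installed; real key management is the hard part and is elided here):

    from cryptography.fernet import Fernet  # pip install cryptography

    # In an end-to-end design this key lives only on the user's device;
    # the server stores ciphertext it cannot read.
    client_key = Fernet.generate_key()
    client = Fernet(client_key)

    ciphertext = client.encrypt(b"meet at 6pm")

    # What an engineer with raw database access would see:
    print(ciphertext)  # opaque bytes, useless without client_key

    # Only the key holder can recover the message.
    print(client.decrypt(ciphertext))  # b'meet at 6pm'

Under that model, audit logs become a second line of defence rather than the only one.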

dijit|5 months ago

Is that really true?

I haven't touched a lot of these cybersecurity parts of the industry, especially policies, for a while…

… but I do recall that auditing was a stronger motivator than prevention. There were policies around checking the audit logs, not being able to alter audit logs, and ensuring that nobody really knew exactly what was audited (except for a handful of individuals, of course).

I could be wrong, but "observe and report" felt like the strongest security guarantee available inside the policies we followed (PCI-DSS Tier 1), and prevention was a nice-to-have on top.
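Not specific to Meta or PCI-DSS, but the "not being able to alter audit logs" property dijit describes is commonly built as a hash chain, where each entry's hash covers the previous one. A toy sketch (the event fields are made up for illustration):

    import hashlib
    import json

    def append_entry(log: list, event: dict) -> None:
        """Append an event whose hash covers the previous entry's hash."""
        prev_hash = log[-1]["hash"] if log else "0" * 64
        body = json.dumps(event, sort_keys=True)
        entry_hash = hashlib.sha256((prev_hash + body).encode()).hexdigest()
        log.append({"event": event, "prev": prev_hash, "hash": entry_hash})

    def verify(log: list) -> bool:
        """Recompute the chain; tampering anywhere breaks it downstream."""
        prev_hash = "0" * 64
        for entry in log:
            body = json.dumps(entry["event"], sort_keys=True)
            expected = hashlib.sha256((prev_hash + body).encode()).hexdigest()
            if entry["prev"] != prev_hash or entry["hash"] != expected:
                return False
            prev_hash = entry["hash"]
        return True

    log = []
    append_entry(log, {"actor": "eng-7", "action": "read", "user": "user-42"})
    append_entry(log, {"actor": "eng-9", "action": "read", "user": "user-17"})
    assert verify(log)

    log[0]["event"]["actor"] = "someone-else"  # attempted cover-up
    assert not verify(log)

Editing or deleting an earlier entry changes every hash after it, which is why "observe and report" can still deter: the cover-up is detectable even when the access wasn't prevented.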

imiric|5 months ago

Whatever Meta says publicly about this topic, and whatever its internal policies may be, is directly contradicted by its behavior. So any attempt to excuse this is nothing but virtue signalling and marketing.

The privacy violations and complete disregard for user data are too numerous to mention. There's a Wikipedia article that summarizes the ones we publicly know about.

Based on incentives alone, when the company's primary business model is exploiting user data, it's easy to see these events as simple side effects. When the CEO considers users of his products to be "dumb fucks", that culture can only permeate the companies he runs.

testdelacc1|5 months ago

There's a meaningful difference between a company exploiting user data to enrich itself and a company allowing employees to engage in voyeurism. The latter doesn't make the company money, and can therefore be penalised at no cost.

Your comment talks about incentives, but you haven't actually made a rational argument tying those incentives to this behaviour.

mgh2|5 months ago

Do you have proof?

YouWhy|5 months ago

To the extent a random person's evidence on the Internet amounts to proof:

From people at Facebook circa 2018, I know that end-user privacy was addressed at multiple checkpoints -- onboarding, the UI of all systems that could theoretically access PII, war stories about senior people being fired for marginally misunderstanding the policy, etc.

Note that these friends did not belong to WhatsApp, which was at that time a rather separate suborg.

Jenk|5 months ago

Does Attaullah Baig?

aprilthird2021|5 months ago

Everything is logged, but no one really cares, and the "business reasons" are many and extremely generic.

That being said, maybe I'm dumb, but I don't see the huge risk here. I could certainly believe that 1,500 employees had basically complete access with little oversight (logging that no one reviews isn't oversight, imo). But how is that a safety risk to users? User information is often very important in the day-to-day work of certain engineering orgs, especially the large number of engineers fixing things based on user reports. So that access exists; what's the security risk? That employees will abuse that access? That's always going to be possible, I think.
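One common answer to "the business reasons are many and extremely generic" is to make the justification machine-checkable, e.g. by requiring a ticket reference that can be cross-checked against a real user report. A hypothetical sketch, not anything Meta is known to run:

    import re

    # Require a reference like "ticket-1234" rather than free text.
    VALID_REASON = re.compile(r"(ticket|case)-\d+", re.IGNORECASE)

    def check_access(reason: str) -> bool:
        """Reject justifications too generic to audit later."""
        return bool(VALID_REASON.search(reason))

    assert check_access("investigating ticket-1234")
    assert not check_access("debugging")  # too generic to cross-check

Whether that closes the gap depends on whether anyone actually reviews the tickets, which is aprilthird2021's point.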

simmerup|5 months ago

You really don't see the safety risk?

If you have a sister, imagine her being stalked by an employee.

If you have crypto, imagine an employee selling your information to a third party.