top | item 46840503


digdigdag|1 month ago

> We didn't review the entire source code

Then it's not fully investigated. That should put any assessments to rest.


3rodents|1 month ago

By that standard, it can never be verified because what is running and what is reviewed could be different. Reviewing relevant elements is as meaningful as reviewing all the source code.

dangus|29 days ago

Let’s be real: the standard is “Do we trust Meta?”

I don’t, and I don’t see how trusting them could possibly be considered logical.

I definitely trust a non-profit open source alternative a whole lot more. Perception can differ from reality, but that’s what we’ve got to work with.

giancarlostoro|1 month ago

Or they could even take out the backdoor code and then put it back in after review.

ghurtado|1 month ago

I have to assume you have never worked on security cataloging of third party dependencies on a large code base.

Because if you had, you would realize how ridiculous it is to claim that app security can't be assessed until you have read 100% of the code.

That's like saying "well, we don't know how many other houses in the city might be on fire, so we should let this one burn until we know for sure"

fasbiner|1 month ago

What you are saying is empirically false. A change in a single line of executed code (sometimes even a single character!) can be the difference between a secure and an insecure system.

This must mean that you have been paid not to understand these things. Or perhaps you would be punished at work if you internalized reality and spoke up. In either case, I don't think your personal emotional landscape should take precedence over things that have been proven and are trivial to demonstrate.

jokersarewild|1 month ago

It sounds like your salary has depended on believing that a partial audit is worthwhile even when the client is the actual adversary.

Barrin92|1 month ago

As long as the client-side encryption has been audited, which to my understanding is the case, it doesn't matter. That is literally the point of encryption: communication across adversarial channels. Unless you think Facebook has broken the laws of mathematics, it's impossible for them to decrypt the content of messages without the users' private keys.

maqp|1 month ago

Well, the thing is, the key exfiltration code would probably reside outside the TCB. It's not particularly hard to have some function grab the signing keys and send them to the server. Then you can impersonate the user in a MITM attack. That exfiltration is one-time, and it's quite hard to recover from.

I'd much rather not have blind faith in WhatsApp doing the right thing, and instead just use Signal, so I can verify myself that its key management is doing only what it should.

Speculating about the correctness of the E2EE implementation isn't productive; the metadata leak we know Meta takes full advantage of is reason enough to stick with proper platforms like Signal.
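To make the exfiltration point concrete, here's a purely hypothetical sketch (all names are made up, and this is not any real app's code): the audited E2EE protocol can be flawless while one innocuous-looking function elsewhere in the client leaks the long-term identity key.

```python
import json
import urllib.request

# Hypothetical key store; illustrative only, not WhatsApp's actual code.
class KeyStore:
    def __init__(self, identity_private_key: bytes):
        self.identity_private_key = identity_private_key

def sync_diagnostics(keystore: KeyStore, endpoint: str) -> bytes:
    """Looks like a harmless telemetry upload, but quietly includes the
    long-term identity key. Whoever operates `endpoint` can then mount a
    MITM and impersonate the user, even though the E2EE protocol code
    that auditors reviewed is correct."""
    payload = json.dumps({"diag": "ok",
                          "k": keystore.identity_private_key.hex()}).encode()
    req = urllib.request.Request(endpoint, data=payload,
                                 headers={"Content-Type": "application/json"})
    # urllib.request.urlopen(req)  # actual upload elided in this sketch
    return payload
```

A partial audit that never looks at the "diagnostics" module would miss this entirely, which is exactly why this needs to be a one-time leak: once the identity key is out, rotating it is the only recovery.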

hn_throwaway_99|1 month ago

The issue is what the client app does with the information after it is decrypted. As Snowden remarked after he released his trove, encryption works; it's not as if the NSA or anyone else has some super secret decoder ring. The problem is that endpoint security is borderline atrocious and an obvious Achilles heel: the information has to be decrypted in order to display it to the end user, so that's a much easier attack vector than trying to break the encryption itself.

So the point other commenters are making is that you can verify all you want that the encryption is robust and secure, but that doesn't mean the app can't just send a copy of the info to a server somewhere after it has been decoded.
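A toy sketch of that point (hypothetical code; XOR stands in for a real, audited decryption routine, and nothing here is any real app's API): verifying `decrypt` tells an auditor nothing about what the client does with `plaintext` afterwards.

```python
# Stand-in for an upload to some server after decryption.
captured = []

def decrypt(ciphertext: bytes, key: bytes) -> bytes:
    # Toy XOR "cipher" standing in for real, audited decryption.
    return bytes(c ^ key[i % len(key)] for i, c in enumerate(ciphertext))

def display_message(ciphertext: bytes, key: bytes) -> str:
    plaintext = decrypt(ciphertext, key)  # the part a crypto audit covers
    captured.append(plaintext)            # the part it doesn't
    return plaintext.decode()
```

The encryption in this sketch works exactly as advertised, and the message still ends up in `captured`.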