That was a good skim for me, as someone who implemented one of the first independent mega.nz clients. Useful to know, especially about structure authentication and the server's ability to swap metadata on files and move files/chunks of files around when the server is compromised, since there's no e2e authentication for this. Lots of traps all around. :)
Looks like the safest bet is still to just tar everything and encrypt/sign the result in one go.
I wonder how vulnerable eg. Linux filesystem level encryption is to these kinds of attacks...
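The "one go" approach is easy to sketch: build the archive, then encrypt and authenticate the whole blob, so the server can't swap or reorder individual files without the whole thing failing verification. A rough illustration (Fernet from the third-party `cryptography` package is an assumption here, standing in for whatever AEAD or sign+encrypt scheme you'd actually use, e.g. age or gpg):

```python
# Sketch of "tar everything, then encrypt/authenticate as one blob".
# Fernet (AES-CBC + HMAC) stands in for "encrypt/sign"; any tampering
# with the ciphertext makes decryption raise InvalidToken.
import io
import tarfile
from cryptography.fernet import Fernet

def pack_encrypted(files: dict[str, bytes], key: bytes) -> bytes:
    """Tar the given name->content mapping in memory, then encrypt it."""
    buf = io.BytesIO()
    with tarfile.open(fileobj=buf, mode="w") as tar:
        for name, content in files.items():
            info = tarfile.TarInfo(name)
            info.size = len(content)
            tar.addfile(info, io.BytesIO(content))
    return Fernet(key).encrypt(buf.getvalue())

def unpack_encrypted(blob: bytes, key: bytes) -> dict[str, bytes]:
    """Verify and decrypt (raises on any tampering), then untar."""
    data = Fernet(key).decrypt(blob)
    out = {}
    with tarfile.open(fileobj=io.BytesIO(data)) as tar:
        for member in tar:
            out[member.name] = tar.extractfile(member).read()
    return out

key = Fernet.generate_key()
blob = pack_encrypted({"notes.txt": b"hello"}, key)
assert unpack_encrypted(blob, key) == {"notes.txt": b"hello"}
```

The trade-off is the one the thread implies: integrity is all-or-nothing, but so is access, since you can't fetch or update a single file without the whole archive.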
I was using Boxcryptor with OneDrive for over 5 years, and once they shut it down, I moved everything back to my local SSD. This had a number of advantages, the biggest one being that I could now use macOS search to find files at lightning speed. I'll never go back to cloud storage for files again due to latency. As a precaution, I now back up all of my data to an external HDD daily, then to a separate one on the 1st of each month. Critical financial data is archived to a Blu-ray on the first day of each quarter.
Hmm, I wish the author had reviewed Proton. I think it's kind of seen as a meme here? But I heavily rely on it and generally the Proton ecosystem is getting better and better from a UX perspective
I want to consider Proton, but they cap out their maximum storage very low with no option to increase. They're not really competing in this space because no one needs backup of only a few GB of data.
I like the way you can use the tabs to check the results of each reviewed cloud storage service, and the exposition on each. Anybody know what the authors used to create this website? Custom built, or a templated version?
Nice to see that Tresorit didn't have any serious issues in this analysis. I've been using it for a long time and it works really well; it's also one of the few players with a really good Linux client.
The two vulnerabilities they found seem pretty far-fetched to me. Basically, the first is that a compromised CA server would be able to create fake public keys, which I honestly don't know how one could defend against. Transparency logs, maybe, but even that wouldn't solve the issue entirely when sharing keys for the first time. The second one, around unencrypted metadata, is hard to assess without knowing what metadata is affected; it seems it's nothing too problematic.
Tresorit had a game-over vulnerability: public keys aren't meaningfully authenticated (the server can forge keys; the CA the paper discusses is operated by the service) and any attempt to share a directory allows the server to share that directory with itself.
It's too bad they focused on commercial closed-source providers. The ecosystem would have really benefited if they had put their effort into, for example, doing the same work on Nextcloud.
Seafile is open source (https://github.com/haiwen/seafile), or at least it was when I looked at it years ago. It's definitely a concern that the paper mentions the protocol downgrade being acknowledged as of 29 April 2024, yet the latest version on the Seafile GitHub is dated February 27.
Considering iCloud does have some documented cases of silent corruption, such as of original resolution media stored in Photos, it might not be the best choice.
The sad state of E2E encryption for cloud storage is a big part of why I wrote mobiletto [1]. It supports transparent client-side encryption for S3, B2, local storage, and more. Rekeying is easy: set up a new volume, mirror to it, then remove the old volume.
> Rekeying is easy- set up a new volume, mirror to it, then remove old volume.
Right, just have to transfer those 10TB every time a key needs to be rotated, no biggie!
I think that is the reason most systems use two levels of keys: user keys encrypting a master key. Rotating means ditching the user keys, not the master.
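The two-level scheme is easy to sketch: a random master key encrypts the bulk data once, and the user key only wraps the master key, so rotation re-wraps a few bytes instead of re-encrypting terabytes. A minimal illustration (Fernet from the third-party `cryptography` package is an assumption; real providers use their own key-wrapping constructions):

```python
# Two-level ("envelope") keys: rotate the user key without touching
# the data. Only the small wrapped master key is ever re-encrypted.
from cryptography.fernet import Fernet

master_key = Fernet.generate_key()                    # encrypts the actual data
data_token = Fernet(master_key).encrypt(b"10TB of files (well, almost)")

user_key_v1 = Fernet.generate_key()
wrapped = Fernet(user_key_v1).encrypt(master_key)     # only the key is wrapped

# Key rotation: unwrap with the old user key, re-wrap with a new one.
user_key_v2 = Fernet.generate_key()
recovered_master = Fernet(user_key_v1).decrypt(wrapped)
wrapped = Fernet(user_key_v2).encrypt(recovered_master)

# The bulk data is still readable and was never re-encrypted.
assert Fernet(Fernet(user_key_v2).decrypt(wrapped)).decrypt(data_token) \
    == b"10TB of files (well, almost)"
```

The caveat is that rotation only helps against a leaked *user* key; if the master key itself leaks, you're back to mirroring everything to a new volume.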
We use Syncdocs (https://syncdocs.com) to do end-to-end Google Drive encryption.
The keys stay on the client. It is secure, but means the files are only decryptable on the client, so keys need to be shared manually. I guess security means extra hassle.
Dropbox introduced this feature in April 2024 [1]. The CCS deadline was just a couple of days later; there was no chance of analyzing it meaningfully before then.
Because not having to trust the provider is the entire premise of these services, and without that premise, you might as well just store things in GDrive.
One downside to encryption is that it prevents the server operator from doing any deduplication (file- or block-level) on their end.
Maybe that's one reason cloud providers aren't pushing it that heavily, especially the big players: more data means more duplication, which means more efficient deduplication.
That's fine. We pay for storage. I'll pay extra to keep the host from spying on, selling, etc. my data.
Deduplication only really shines when much of the data consists of shared copies (pirated media, say). In reality, the vast majority of data is in the fine details of high-resolution photos and videos of completely uncorrelated images.
Is that true? Couldn't you run dedupe on blocks of encrypted files? I assume there would be fewer duplicate blocks than in the cleartext, but if you have a bunch of blocks full of random bits, there are bound to be repeats with a large enough number of blocks.
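There aren't, in fact: repeats among random-looking blocks are governed by the birthday bound, and for realistic block sizes the expected number of collisions is indistinguishable from zero. A quick estimate (4 KiB blocks and ~1 EB of data are assumptions for the sake of the arithmetic):

```python
# Birthday bound: the expected number of colliding pairs among n random
# b-bit blocks is roughly n^2 / 2^(b+1). For 4 KiB ciphertext blocks,
# even an exabyte of data gives an expectation of essentially zero.
from math import log2

block_bits = 4096 * 8                  # assume 4 KiB dedup blocks
n = (10 ** 18) // 4096                 # block count for ~1 EB of data
log2_expected_pairs = 2 * log2(n) - (block_bits + 1)
print(round(log2_expected_pairs))      # -32673: no repeats, ever
```

Intuition for the gap: repeats become likely only once you have on the order of 2^(b/2) blocks, and 2^16384 blocks is unreachable by any physical storage system.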
A 256-bit symmetric key is a bit like picking one atom in the universe (~10^80 atoms, i.e. a 1 followed by 80 zeros). Your opponent would have to test half the atoms in the universe to have a reasonable chance of finding the right one.
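Strictly, the analogy is a bit generous to the keys: 2^256 is about 1.2 × 10^77, so the ~10^80 atoms outnumber the keys by roughly a factor of a thousand; the conclusion that brute force is hopeless is unchanged. A quick check:

```python
# Sanity-checking the analogy: 2^256 vs. the ~10^80 atoms estimate.
keyspace = 2 ** 256
print(len(str(keyspace)) - 1)   # 77 -> 2^256 is about 10^77
print(10 ** 80 // keyspace)     # 863 -> atoms outnumber keys ~1000:1
```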
[1] https://github.com/cobbzilla/mobiletto
Dropbox has been mentioned in the article, and I think the author is drinking the Kool-Aid and throwing out random facts.
[1] https://blog.dropbox.com/topics/company/new-solutions-to-sec...
It’s not hard to encrypt it before you upload it.
That's generally understood to be infeasible.