Very cool - the issue with believing that encrypting on the client is safer than relying on a secure transport and a secure server is that the code on the client doing that encryption came from the server. That said, it may still be an interesting layer of security to add to your application. After all, security is more about layers than absolutes.
This is the usual argument on HN when client-side encryption comes up, and I have a question about it.
Doesn't virtually all encryption on user computers come from servers? I mean, Chrome updates itself from a server, and I first downloaded it from the same server. Google (or anyone breaking into Google) could maliciously break the SSL code in my browser and I'd never even know it had been updated.
Likewise, I downloaded dropbox and putty and ssh.exe from servers, all apps I installed on my phone came from a server and were put there by people I don't know at all.
Why is in-browser encryption considered so different that even smart people like tptacek get all worked up about it? Is it that browser apps get "reinstalled" on every use, and not "only" when an autoupdater or user detects a new version? If so, can I use appcache to achieve the same and enjoy the exact same security as I'd get in an installed app(lication)?
In short, what am I missing?
(note to the angry security people: I'm not dissing you, I really don't know. I'd appreciate non-snarky replies)
I came across this issue when I was implementing an ssh client in the browser (https://www.minaterm.com). It's absolutely correct of course, but it's interesting that it's still an issue in 2015.
There should be a way of having the browser (outside of the JS) verify the hash/signature of a page against an external repository, which would confirm that the page had been independently audited.
It's interesting that it's relatively easy to do this with native client software (hash/signature, manually check it), but less so with browser based applications.
Yeah, but "in the browser" is not "on the client". Do we have browser-internal encryption engines yet? Encrypting in JS has the problem that someone trying to read your notes could MITM your other JS files, allowing them to read your cleartext notes.
The point is that the web is very geared towards "Seamlessly get the latest content from the server without bothering the user".
That's not necessarily what we want in the land of crypto. We want something more like "Offer to get the latest code from the server, but ask the user for permission first, because maybe the server has been compromised."
Having the client page hosted on something like IPFS would solve this problem. The content of the webpage (and all its javascript references) would be the same whenever you had the same address, so you could be sure the code hadn't been tampered with.
Or you do both. If you're using SSL, you still have to trust the server, and anyone who gets access to it.
Encrypting in the browser at least means the server doesn't intentionally have access to the plaintext. That means if the server is stolen/seized, the plaintext can't be accessed.
However, you are right of course that it doesn't protect you against a malicious agent who has access to the server over a period of time.
It seems like it should be theoretically possible to do that though, but it would require browser capabilities that we don't have at the moment.
I love a lot of the outreach that Matasano does, but I strongly disagree with that article. Most of the points they make aren't fundamental problems of websites - they're just issues for poorly implemented websites. Some of it is out of date / plain wrong - for example, we've had a CSPRNG (window.crypto.getRandomValues) on the web since Chrome 11.
Most of the rest of the complaints you could also reasonably level against installed apps with update mechanisms. The article compares web apps (and their update mechanism) with desktop apps (without their update mechanism). Then it points out flaws in update mechanisms in general (e.g. they can send you malicious code), then says that that's why the web is flawed. Yeah, nice try.
The fundamental question is: how do you trust the code that I give you? No matter what platform you're on, at some level, you need to trust me. Let's say I'm writing a 'secure' todo list app. You have to trust that I'm not going to forward your todo list entries to any hooded figures, and that I'm not going to quietly change your shopping list to add entries from my sponsors. Also, both on the web and locally, apps can open mostly-arbitrary network connections and send any data anywhere they like.
It's as simple as that. On the web, I send you code, you run my code, my code does something useful and might betray you. In native apps, I send you code (maybe via an app store or something). You run my code. My code does something useful but it might betray you.
As far as I can tell, there are only two fundamental weaknesses of web apps:
1. The JS isn't signed
2. The JS gets sent on every page load
The combination of which makes it much more convenient to do spear-phishing-type attacks. But that said, any threat that looks like "but on the web you might send malicious code to user X" is also true of other app update mechanisms. Even on the iOS App Store, nothing is stopping me from writing code which says `if (hash(username+xyz) == abc123) { send_data_to_NSA(); }`. I can't think of any binary-downloading systems (app stores, apt-get, etc.) which would discover that code.
And remember - desktop app code is potentially much more dangerous. Desktop apps can take photos with your webcam, record audio, record keystrokes and access every file in your home directory.
It's definitely true that most web apps are poorly implemented - they dynamically load 3rd-party JS and they don't use SSL / HSTS. It's also embarrassing how many desktop apps have simple buffer-overflow exploits. But the solution isn't to go back to desktop apps - the solution is to push for better best practices on the web.
In my opinion, the biggest security problem with the web is that most web apps store all your data in plaintext on someone else's computers. This is a problem that we need to start addressing systematically via projects like CryptDB. I.e., we need more serious security work done on the web, not less.
In case people are interested in what's under the hood: I just dug in a little bit, and it looks like a library is used that automatically generates an IV for each encrypt, and automatically uses your passphrase, passed through EvpKDF, as the key.
SHA3, Rabbit, and whatever EvpKDF does - I don't actually have time to look at that.
On binbox.io, we use a similar library, the Stanford Javascript Crypto Library (SJCL). It's fast enough that small files encrypt and decrypt in reasonable time, but it scales poorly to large files.
Could be interesting to allow multiple pre-existing wiki/CMS systems to serve as backends.
(Potentially, then, you only bring the overlay JS from a trusted source, and your encrypted notes can live in many places, as visible-but-uninterpretable 'noise' in other systems.)
Cool! I did something similar with my side project. Users could encrypt/decrypt text in an editor, and would have to provide a password to decrypt it on the display page.
Yes, it's not 100% secure - you have to trust the JS, which with enough targeted effort could be swapped out for something malicious. But it does mean your data at rest is encrypted, which can be very beneficial.
A bulletproof vest doesn't stop all bullets, nor does it protect everywhere. But people still use them all the time, in addition to other measures.
Still, with all that, if you end up with a keylogger on your machine, it means nothing. Air gapping helps a lot, but really nothing is totally foolproof - one can only make it exponentially more difficult to bypass.
billowycoat | 11 years ago:
It's more of a self-hosting solution, I think.
jacques_chester | 11 years ago:
The brief problem statement is: either you're on SSL, in which case you have encryption from the client to the server; or you don't have SSL, and a MITM attack can substitute your JavaScript with anything, rendering the scheme unsafe.