This piece has surfaced before. Then and now I see people coming out of the woodwork with seemingly smart ideas about how it should still be possible to safely use crypto in the browser one way or another.
One of the things my company does is security testing of web applications. Regularly we encounter 'creative' use of cryptographic techniques (both in the browser and server-side) and each time it makes the hacker in us smile, because we know it is not a matter of 'if' we'll crack it but 'when' we'll crack it. Good crypto is a roadblock, bad crypto is just a challenge. And although it is very hard to decide if the crypto is 'good (enough)', the 'bad' is usually glaringly obvious.
With the current state of crypto in the browser - just forget it. That's what Thomas is trying to get across: forget it - if you think you've found some smart way around one of the weaknesses he addresses, you're most likely wrong. And even if you seem to have got it right, you're probably still wrong without anyone realizing it (yet).
Same is true for building a crypto-system from primitives. Use what's out there, designed by the few people who know what they're doing.
Remember: from the defensive side you need to get everything right. As an attacker I only need 1 hole. That's what makes it "capital-H Hard".
I don't know if you're referring to my comment or not (http://news.ycombinator.com/item?id=5123674), but I was not saying anything about what is safe. I was merely objecting to false equivalences that the article was drawing.
To say that localStorage is literally no better than server-side storage is a strong statement, and one that does not appear to be literally true. Taking issue with that equivalence is not the same as saying that any particular system/design is safe as a whole.
JavaScript crypto, and all other crypto code not implemented in native code, is unsafe for a reason not mentioned in this article: side-channel attacks.
When attempting to create crypto code using an interpreter or a bytecode virtual machine, additional side channels are created by the differences in execution compared to native code. Crypto code should be written in assembler, or in C where the assembly output is reviewed by the author. This is the only way to create code that does not introduce side-channel information that can be exploited with timing attacks, cache attacks, branch-predictor attacks, etc. This introduces a problem because it takes both a cryptographer and a hardware architecture expert on the team to write safe code for cryptographic primitives.
This does not mean you can't safely use crypto from interpreted languages, as long as the cryptographic primitives are good native code.
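To make the side-channel point concrete, here is a minimal sketch (function names are mine) of the classic timing leak: a naive comparison that exits on the first mismatching byte versus a comparison that always does the same amount of work.

```javascript
// Naive comparison: returns as soon as a byte differs, so the running
// time leaks how many leading characters of the guess were correct.
function naiveEqual(a, b) {
  if (a.length !== b.length) return false;
  for (let i = 0; i < a.length; i++) {
    if (a[i] !== b[i]) return false; // early exit -> timing side channel
  }
  return true;
}

// "Constant-time" comparison: touches every character and folds all
// differences into one accumulator, so the time taken no longer depends
// on where the first mismatch occurs.
function constantTimeEqual(a, b) {
  if (a.length !== b.length) return false;
  let diff = 0;
  for (let i = 0; i < a.length; i++) {
    diff |= a.charCodeAt(i) ^ b.charCodeAt(i);
  }
  return diff === 0;
}
```

Note that even the second version carries no real guarantee in JavaScript: a JIT is free to recompile or specialize the loop, which is exactly the comment's argument for keeping primitives in reviewed native code.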
As there can be backdoors and bugs in OSes and hardware, any crypto code run on general-purpose computers with standard OSes (Windows, OS X, Linux, BSD) is not safe. That does not mean it is useless.
The same is true for JS crypto - yes, it is not as safe as crypto in native code, but it can be used to add an additional layer of security in certain (non-critical) use cases.
Of course, tampering with the interpreted code seems to be an order (or two) of magnitude easier than supplying a malicious interpreter. However, I would argue that if you are able to replace the JavaScript engine you could do the same with the whole browser, so SSL is also worthless...
What is magical about C code, or asm, or microcode that makes it more secure? Everyone has to trust someone. See the Ken Thompson hack: http://c2.com/cgi/wiki?TheKenThompsonHack
"WAIT, CAN'T I GENERATE A KEY AND USE IT TO SECURE THINGS IN HTML5 LOCAL STORAGE? WHAT'S WRONG WITH THAT?"
"That scheme is, at best, only as secure as the server that fed you the code you used to secure the key. You might as well just store the key on that server and ask for it later. For that matter, store your documents there, and keep the moving parts out of the browser."
This ignores the scenario of app deployment models like Chrome Packaged Apps, in which the JavaScript code gets downloaded up-front and then is only used locally. Since you don't re-download the code every time, you only depend on the security of the code once, up-front, instead of on a continuous basis. You aren't affected by server compromise (well, no more than compromise of your OS vendor, but surely you aren't arguing that we might as well send all our keys to Microsoft, Apple, and Canonical).
Also I feel that this analysis conflates security with access. You may trust a company to keep their servers secure from compromise, but want them not to have access to the documents when the government comes knocking.
The author is really only saying browser cryptography is bad for one specific problem: securely transmitting data to an untrusted provider. But there are lots of other use cases.
For example, maybe I want to upload a file to a server, and I trust them not to try to steal my data, but I don't trust my government not to confiscate their servers. In that case, SSL + browser cryptography is adequate to give me the assurances I need that the government won't be able to get access to my data, even if the service's engineers could.
"For example, maybe I want to upload a file to a server, and I trust them not to try to steal my data, but I don't trust my government not to confiscate their servers. In that case, SSL + browser cryptography is adequate to give me the assurances I need that the government won't be able to get access to my data, even if the service's engineers could."
If the government might have the ability to confiscate their servers they also have the ability to compromise their service during use. So, if you don't trust your government you can't trust their servers either, regardless of whether you trust the service's engineers or not.
He also ignores the situation where you want to upload to an untrusted third-party server but the JavaScript was obtained from a trusted server. Also, the same problems of verifying the source exist in most closed-source applications that automatically update, so this really has nothing to do with JavaScript.
But if the government can confiscate the server they also can make it deliver modified JS. If you can't trust the server you also can't trust the JS it gives you.
Haha admit it, everyone is thinking about Mega ;).
I agree, you are right. Once you CAN deliver the code securely through SSL (as the author admits is possible), you should be able to encrypt things in the browser and send them to the server encrypted. This is what the author is saying as well.
To be even more generous, I might not actually be worried about my users and their data--only about whether I am legally liable for what ends up being found uploaded to my servers. If all content pushed to me is encrypted before it gets there--and I don't have the keys--then I can honestly say I have no idea what it was. SSL only gets me half-way there; I need some form of PGP-type crypto to let me receive the data and store it without decrypting it first. It'd be great if it were implemented natively and exposed as a browser API, but doing it in Javascript doesn't seem so bad. Even if the data became compromised on my server, it would still be provable that I didn't have access to it.
Well, if you use native encryption software, what makes things any different? If they can replace key code and data on the fly for a web application run over SSL, what makes you think they're unable to deliver fraudulent updates for native apps?
I have been raising the alarm about this for a long time. Automated updates are dangerous: how many users make absolutely certain that every update is secure? Well, I can tell you nobody ever does, because truly secure updates don't exist at all. Even if the previous version was secure, the next version could be broken by mistake or on purpose, or you could get an espionage version delivered that is made just for you.
Unfortunately there are countless programs that do not deliver updates in a secure manner at all. Plain HTTP, no signatures, etc. That's pretty much a 100% fail.
The problem with running crypto code in Javascript is that practically any function that the crypto depends on could be overridden silently by any piece of content used to build the hosting page.
ECMAScript 5, the latest version of the JavaScript standard, provides the ability to lock down the malleable runtime. Functions can be frozen so that no later code can overwrite or change their behavior. For more information, see this talk[1] by Mario Heiderich from 2011, or his slides[2].
[1] https://www.youtube.com/watch?v=yuNfO6I6pEA
[2] https://www.owasp.org/images/a/a3/Mario_Heiderich_OWASP_Swed...
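A minimal illustration of that lock-down (the `cryptoUtils` object and its placeholder "hash" are invented for the example; real hardening, as Heiderich's talk shows, involves far more than one `Object.freeze` call):

```javascript
"use strict";

// A crypto helper object that later-loaded page content might try to
// replace with a malicious version.
const cryptoUtils = {
  hash(s) {
    // Placeholder checksum for illustration only -- not a real digest.
    let h = 0;
    for (let i = 0; i < s.length; i++) h = (h * 31 + s.charCodeAt(i)) >>> 0;
    return h;
  },
};

// ES5 lock-down: after freezing, properties can no longer be
// overwritten, added, or deleted. In strict mode the attempt throws.
Object.freeze(cryptoUtils);

let tampered = false;
try {
  cryptoUtils.hash = () => 0; // an injected script attempting an override
  tampered = true;
} catch (e) {
  // TypeError: Cannot assign to read only property 'hash'
}
```

The freeze protects this one object; anything reachable through unfrozen built-ins (prototypes, globals) remains malleable, which is why the parent comment's worry still mostly stands.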
If you heeded the author's warning, you wouldn't do anything of any meaning in the browser: no online banking, no purchasing... Because even if you don't do "crypto" (in the sense of encrypting/decrypting primitives), everything else you do also relies on the same TLS and JavaScript. My online banking site uses, of course, JavaScript. My online trading platform uses JavaScript. If the browsers and the sites are vulnerable to the attacks he presents (cross-site scripting, man-in-the-middle possibilities, code injection from another source), then it's irrelevant whether you run crypto primitives or not: you are vulnerable.
On the other hand, if you assume that such vulnerabilities don't exist, and you do things like banking or trading online, and you accept that those sites use JavaScript, I don't see any argument why crypto primitives running in addition to the rest of the code - everything delivered over TLS and from the same site - are any more suspicious than the rest of the JavaScript.
The advantage of encryption on the client side is obvious. Of course, it would be even better to have the client-side encryption controlled by the user separately from the site. But under the assumption that I personally control the server from which I deliver my HTML and JavaScript over TLS, I still feel better having the possibility to encrypt something that I'll upload to the server, as long as I assume that the browser is not attacked.
The only thing missing is the possibility to somehow checksum the delivered html and code and then "lock" that in my browser. It's not something scalable, I know.
But the problem is never that much technical as it's "political." Consider Dropbox: in many use cases, they would be able to have all the encryption on the client and not to deliver the key to them. However they do deliver the key "because the users will need it." Who says that? They, and I can't choose.
Technically, the solution can be certainly achieved, the problem is that it's not an interest of the current service providers.
Maybe Mega is the first one that really has such an interest?
Mega is not the first one. AES.io (my company) and several others have been available for some time. Mega is the first one to bring client-side JS crypto into public discussion.
I'm imagining a system (kind of like Tarsnap) that backs up my files all PGP-encrypted with my public key, and which allows me to download those encrypted files (which I can then decrypt locally).
If the PGP encryption is done client side (by a native app, not in-browser), and the "backup service" only ever sees PGP-encrypted files - is there some other hole I've not seen there?
(I guess there's metadata leakage with that scheme; the number and sizes of backed-up files could be determined, even if the contents are secure.)
I agree that browsers simply don't have consistent enough APIs for the strong guarantees required for encryption, including strong random number generation and memory allocation behavior. That was the takeaway for me when I read this the first time.
The "if SSL, why JS crypto?," DOM, and "chicken-v-egg" trust problems seem more like straw men and sophistry, though. Desktop crypto underwent an iterative evolution with early adopters bearing the bulk of the risk too. (Mega got the digest part wrong, but they fixed it, for example.) SSH doesn't use certificates, but you can read the host fingerprint and follow the chain of trust that way. If people are going to use crypto, they have to take responsibility for these pieces, which is improbable en masse. "[T]he security value of a crypto measure that fails can easily fall below zero" definitely rings true. Repeated malware infections, however, suggest people don't even learn after they are burned... "Normal users" can't be bothered to update their browser or verify trust (leading to VeriSign having complete power, for example), for the same reason "normal" people don't use the existing native encryption (GPG/PGP) - and if they did, there would be no need for JS crypto.
There are some good examples and arguments there, but to me it reads like it mostly boils down to having SSL, which does quite a good job at solving the chicken/egg situation. You can have a good degree of assurance that you talk to the right server, and that your communication to it is not in the clear (of course, what's good degree is debatable, and depends on what you're trying to protect and against whom).
I don't quite get why
> You could use SSL/TLS to solve this problem, but that's expensive and complicated
You can get an SSL trusted certificate for a few bucks, and installing SSL is probably one of the most well-documented sysadmin procedures on the web. How can it be more expensive or more complicated than implementing your own javascript crypto?
Regarding the malleability of the js runtime, would this be addressed by making all the page content a single html+js file and making its hash widely available for manual verification? Obviously "normal" users aren't going to check it's kosher, but it should mean if a site's serving dodgy code somebody will notice.
Obviously that's a bit impractical for most websites, but it could make sense for a site whose primary raison d'être is encryption.
Interesting points. I'd thought of these issues but hadn't heard them so clearly stated.
I don't know if this situation would be common, but I have an idea of where it could work. Perhaps a web app you completely trust could talk to an API you don't trust over CORS. The web app you completely trust would only talk to the API you don't trust over XHR and wouldn't eval anything it got back.
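The "wouldn't eval anything it got back" rule is the whole trick: treat the untrusted response strictly as data. A minimal sketch (the response body is invented):

```javascript
// An untrusted API response that tries to smuggle in executable code.
const untrustedBody = '{"note": "alert(document.cookie)"}';

// Treating the body as data: JSON.parse yields an inert string, no
// matter what the server put in it.
const data = JSON.parse(untrustedBody);

// The anti-pattern the comment warns against would be:
//   const data = eval("(" + untrustedBody + ")");
// which hands the untrusted server arbitrary code execution inside
// the trusted app's origin.
```

With `JSON.parse`, a malicious API can lie to the trusted app but cannot run code in it, which keeps the trust boundary where the comment wants it.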
For those interested in the "Radioactive Boy Scout" referenced in the article, a copy of the Harpers magazine article about David Hahn can be found here:
http://www.dangerouslaboratories.org/radscout.html
Honestly, being downvoted for being clear does not show great intellectual clarity. Really: do you people check all the hashes of your software? Do you trust all the certificates in your browsers? Have you really, honestly checked all the SSH fingerprints of the servers you connect to?
I repeat: security is not abstract, it depends on the problem and the trade-offs.
If this hurts, please check your mind.
And feel free to downvote OF COURSE.
Be happy.
I've actually done client-side HMAC before, to keep from sending passwords in plaintext at least. The site couldn't do SSL at the time. Not perfect, and easily MITM-able, but at least not sniffable off the network.
I have seen (and shot down) people floating ideas of doing AJAX HMAC. It's a great idea if you think about it. But if you really think about it... it is an OHGODSWHY idea.
Ermh, sorry to nag, but after the Google (et al.) rogue certificates I think one could just as well say SSL is considered harmful...
Security is a set of layers and trade-offs.
Harmful for what? That is the question.
Your computer is considered harmful. Did you check the hashes of all the software you downloaded? Oh wait there are no checks to be done for the little app you got a couple of months ago...
So: take an enemy and look if what you do is reasonable enough.
Flying is considered harmful, hence the TSA.
Remember the GitHub fiasco? But you still trust them, don't you? I might (I do not) consider GitHub harmful as well.
> And if you have SSL, why do you need Javascript crypto? Just use the SSL.
Maybe because SSL only solves one problem, and other cryptographic algorithms/systems solve other problems?
(http://users.guardian.co.uk/signin/0,12930,-1,00.html)
This website claims it's their script: (http://pajhome.org.uk/crypt/md5/)
Check back in 10 years when the majority of people aren't running browsers from 2008.
Edit: The oldest version on archive.org is Sep 2011, so it's at least 16 months old.
http://web.archive.org/web/20110815000000*/http://matasano.c...