For those, like me, wondering who the author might be, it appears to be this guy: "Adam Langley works on both Google's HTTPS serving infrastructure and Google Chrome's network stack. From the point of view of a browser, Langley has seen many HTTPS sites getting it dreadfully wrong and, from the point of view of a server, he's part of what is probably the largest HTTPS serving system in the world." (http://www.rsaconference.com/speakers/adam-langley)
He's also the author of Go's native crypto/tls stack, a longtime contributor to the IETF TLS WG, and the author of some of OpenSSL's elliptic-curve code. He's not messing around.
On the other hand, we found that as of Friday, Chrome DID NOT recognize that one of our wildcard certs for Efficito had been revoked. We sent out an email to our customers telling them to enable cert revocation checking.
Revocation isn't perfect, and I'm not suggesting the current status quo is OK, but the intermediate approach Chrome takes cannot be trusted, as they have now shown.
If Chrome will not show our cert as revoked, what is the point of revoking the cert? The author has points, but the approach Google is taking is a cure worse than the disease...
So this complete infrastructure is crap. OpenSSL, software half the internet uses but no one cares about because it's crap. CAs not revoking keys even though they know they're compromised. Revocation being worthless because it's too much of a hassle for anyone to bother.
Great. Maybe now, when half the internet is already compromised and all our certificates are not worth the bytes they're made of ... maybe we should try to come up with something better.
edit:
Actually, this whole Heartbleed affair has been quite eye-opening for me, so I'm thankful for that.
But it certainly didn't help with the paranoia I've felt the last couple of years while using services on the internet.
Yes! Now it's time for us to generate a whole new broken infrastructure! I'm sure if we just rewrite all the Internet's crypto in Rust, everything will be great 10 years from now. No way will a radically different new transport cryptosystem grant researchers 100 new bugs to play with; after all, we'll have option types.
Don't forget that 90% of the world's certificates are issued by five commercial CAs, who happen to be friendly with various national security agencies.
Still seeing lots of explanation about why the current system sucks, and not much about how a more robust system might be created and promptly adopted. Langley (the author) mentions short-lived certificates (either rapid expiration or via a 'must staple')... how soon can we enforce that? How short can that make the danger-period where the CA, and Google, and the "connected web" all know that a certificate is invalid, but a user-at-risk does not?
Why not other ways to rapid-broadcast invalidity in censorship-proof ways, so that a browser encircled by an enemy can quickly figure out something's wrong? (Or, why can't security professionals get around interdiction as effectively as copyright pirates do?)
> how a more robust system might be created and promptly adopted
I'm quite fond of how the SSH host key system works.
Prompt me the first time I see a new key, provide me with supporting evidence (e.g. show me how many people have previously accepted this fingerprint for this domain) and alert me the same way in the future if the key ever changes.
If the 'supporting evidence' was plugin-based then this system could quickly become more user-friendly and trustworthy than the current centralised system can ever be.
There could be plugins to automatically trigger an SMS challenge on first contact with particularly sensitive sites; multiple competing P2P web-of-trust plugins; plugins that let you follow trust lists from third parties; etc.
In the current system you rely on a single, very questionable opinion on the trustworthiness of a given certificate. In the new system you'd be presented with a trust score compiled from a whole range of opinions, whose sources you chose beforehand.
Of course this approach doesn't include a license to print money for corrupt CA organisations and is not going to happen for that reason alone.
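The first-contact/key-change behaviour described above can be sketched roughly as follows. This is a toy illustration only, not OpenSSH's or any browser's actual logic; the store layout and return values are made up for the example:

```python
import json
import os

STORE_PATH = "known_hosts.json"  # hypothetical local pin store

def load_store(path=STORE_PATH):
    """Load previously accepted fingerprints, keyed by host."""
    if os.path.exists(path):
        with open(path) as f:
            return json.load(f)
    return {}

def check_fingerprint(store, host, fingerprint):
    """Trust-on-first-use check, SSH known_hosts style.

    Returns one of:
      "first-seen" -- prompt the user (and show supporting evidence),
                      then pin the fingerprint
      "match"      -- fingerprint matches the pinned one
      "CHANGED"    -- alert loudly: the key for this host has changed
    """
    pinned = store.get(host)
    if pinned is None:
        store[host] = fingerprint  # pin after the user accepts
        return "first-seen"
    return "match" if pinned == fingerprint else "CHANGED"
```

The plugin idea above would slot in at the "first-seen" branch: instead of a bare prompt, plugins could supply the supporting evidence (notary counts, web-of-trust scores, an SMS challenge) before the user decides to pin.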
See my other post. Systems with shorter term authentication have been around for decades. The problem with X.509 is that it centralizes authorization with the browser vendor.
Yes, revocation is broken by design, especially on mobile and in Chrome. I'd say it's broken everywhere except Firefox with OCSP hard-fail enabled.
Thanks to this flaw, StartSSL's business model of free certs and paid revocations has become somewhat outdated, IMHO.
I'm dreaming that we can fix the revocation issue with 24-hour-valid certificates, as suggested at the end of my post.
But I must be naive on this, as it's too simple; I just haven't found the flaw in it myself. Yes, it needs technical orchestration, but at least it does not add an extra single point of failure to every session.
EDIT: Just finished the OP post and it does indeed also mention "short-lived certificates" in the end as a potential solution.
Indeed, short-lived certificates do seem like a solution to this problem. One downside might be the fact that (anecdotally) many users have inaccurate clocks. I read somewhere recently that a large web site has to back-date their new certificates, because, otherwise, certificate rotation/revocation causes a large spike in support tickets.
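To illustrate why inaccurate clocks bite short-lived certificates, here is a hypothetical validity check with an explicit skew allowance. Back-dating notBefore, as the site above reportedly does, has the same effect as widening the tolerance on that side:

```python
from datetime import datetime, timedelta

def cert_time_valid(not_before, not_after, now, skew=timedelta(hours=24)):
    """Accept a certificate whose validity window is within `skew`
    of the client's clock, tolerating moderately wrong clocks.

    With 24-hour certificates and no skew allowance, a clock that is
    even an hour slow would reject a freshly issued cert.
    """
    return (not_before - skew) <= now <= (not_after + skew)
```

For example, a client whose clock is an hour behind still accepts a cert issued "in the future", while a cert days past expiry is rejected. The 24-hour default here is an arbitrary choice for the sketch, not a recommendation.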
I think the author is a little disingenuous with the term "security theatre". Basically he argues that OCSP doesn't work because hard-fail might cause DoS, but fails to conclude that without OCSP, SSL/TLS is useless. It's a long argument for saying that the CA system is broken (you can only trust the whitelist Chrome provides), and the sensible conclusion, that you cannot trust any other certificate chains (without OCSP), is left out.
Without certificates, SSL/TLS falls apart.
Perhaps a better use of CAs would be to always delegate authority to the domain owner -- we'd only need OCSP for the CAs, and domain owners could issue hour/day-valid certs via a cert infrastructure. That would push a lot of complexity down to domain owners, it would probably lead to a lot of errors in implementation -- but those errors would only affect the domains -- not the main CA trust chain as such.
I'm not sure if that would be an improvement or not -- but at least you could know that if a domain was run correctly, a valid certificate could actually be trusted...
I just did, and if you were okay with a 0.001 probability of a false positive, you could list all 500,000 (possibly way off) certificates potentially exposed through Heartbleed in only 877.5KB of space. The current Chrome CRL contains 24,161 serial numbers and takes up 305.3KB of space. While it isn't a perfect fix for the revocation problem, it would certainly be much better than the status quo.
One problem might be that the 0.1% of sites hit by a false positive effectively couldn't use OCSP stapling, but Chrome could first call back to Google as a CRL proxy to avoid making an OCSP request when the site stapled a valid but potentially revoked OCSP response, then store Google's response for the current version of the CRL in the cache. The end result is that the unlucky false-positive sites don't have tons of unnecessary (unnecessary as far as the OCSP spec is concerned) OCSP requests going to the CAs, and the only thing they would notice is that a new visitor's first page load takes 100ms longer.
And through the magic of Bloom filters, if you wanted to bump the false-positive rate down to 1 in 10,000, it only bloats the list to 1.14MB. Furthermore, there are methods to make the Bloom filter scalable such that a client doesn't necessarily have to re-download the whole filter when a bunch of elements are added, and can instead download just the portion of the data required for a full update.
The more I think about it, the more I wonder why this isn't already in Chrome in some form or another. The only downside is weird networks where OCSP and access to Google are filtered but https is not.
Edit: One thing I feel stupid for overlooking is that Bloom filters aren't cryptographically secure, so an attacker could theoretically find a serial number for some CA that would cause a site to always be a false positive. But I don't think any CAs are still issuing serial numbers in a predictable way after the MD5 debacle, and even if they were, it would seem impractical to me. The fix would just be to insert a SHA-256 hash of the serial instead of the serial itself.
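The sizes quoted above follow from the standard optimal Bloom filter formula, m = -n ln(p) / (ln 2)^2 bits for n elements at false-positive rate p. A quick sketch, including the SHA-256 pre-hashing fix from the edit (function names are mine, not from any browser code):

```python
import hashlib
import math

def bloom_size_bits(n, p):
    """Bits needed for an optimally sized Bloom filter holding
    n elements at false-positive probability p."""
    return math.ceil(-n * math.log(p) / math.log(2) ** 2)

def serial_key(serial):
    """Hash the serial number (SHA-256) before insertion, so an attacker
    who controls serial numbers can't precompute false positives."""
    return hashlib.sha256(serial.encode()).digest()

# ~500,000 revoked certs at a 1-in-1,000 false-positive rate:
kb = bloom_size_bits(500_000, 0.001) / 8 / 1024          # ~877.5 KB
# Tightening to 1-in-10,000:
mb = bloom_size_bits(500_000, 0.0001) / 8 / 1024 / 1024  # ~1.14 MB
```

Both figures match the ones in the comment, which is reassuring: the cost of a 10x better false-positive rate is only about 33% more space.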
Just make sure you still check revocation of code signing certificates. Otherwise you will end up running malware that is signed with a legit key they got off my stolen Windows laptop.
This argument only holds if the attacker controls every internet connection you use. If you're on a portable device or you're otherwise connecting through various networks, only a subset of which are compromised, revocations are still useful.
Exactly. If I'm on my trusted network at home and receive a big revocation list, and a few weeks later go to, say, Egypt, and someone tries to MITM me there with a stolen certificate, then it would show up as invalid.
When I hear these arguments, I always look for what is wrong with OCSP Must-Staple. The author says at the bottom that short-lived certs might be a solution, but I don't see the need for super-short-lived certs, only short-lived OCSP staples. The author presents this as the problem:
> if the attacker still has control of the site, they can hop from CA to CA getting certificates. (And they will have the full OCSP validity period to use after each revocation.)
The solution here is to not allow OCSP stapling for a newly issued certificate, and instead use a full OCSP check to verify that the cert wasn't revoked.
I'm honestly kind of surprised how little action there has been to assist with a migration away from the CA model. The technology is there, but people just don't seem interested enough to leverage it.
Systems like Namecoin could serve this purpose marvelously. Powerful devices have direct access to the entire cryptographically authenticated DNS and certificate database. Weak devices can specify whom they trust to provide them with DNS/certificate data, and even those devices get some cryptographic security guarantees thanks to technologies like SPV.
Why have a single entity at all? Moxie Marlinspike proposed Convergence (https://www.youtube.com/watch?v=Z7Wl2FW2TcA) as a solution; I think something like that has far more mileage in it than a Namecoin-based system.
I should be able to choose who I trust, a notary system would allow me to do just that. No central CA systems.
The biggest concern I can see is identity management, but, as mentioned by Moxie, most of these CAs don't do anything close to proper identity management any more: I have a number of certificates bought from quite a few different CAs, all made out to my rabbit, at no fixed address.
Notaries can, of course, do additional verification - they could even advertise this as a premium.
I don't see why this can't be extended to DNS lookups either. I trust X notaries and pin the results I get; I can choose to trust a majority, or be hyper-paranoid and require everyone to agree. No need to run a power-hungry blockchain, no single point of technology failure.
Technically, all of that is feasible today. And I imagine we will see a number of different technologies combined to form a proper, decentralised system.
There's been plenty of action, but you can't turn the whole world on a dime.
Namecoin is fantastic in theory, but has the fatal flaw of using Bitcoin: the fastest number cruncher wins. Some would argue that the strength of Bitcoin's tech is that numerous currencies with different genesis blocks can flourish. That doesn't get us anywhere with naming, though.
Dead horse flog: the CA model's problem is that you can't do federated (global) naming and federated trust in the same system.
The migration away from the CA model is called "certificate pinning". Chrome uses it for high-value sites, and you use it whenever you ssh somewhere and the key's fingerprint is in your .ssh/known_hosts file.
If Namecoin is anywhere near as insecure as Bitcoin, it's a nonstarter. Yes, I know the cryptography underlying Bitcoin is secure, but as a matter of practical fact, Bitcoin itself as an end-user technology is hopelessly insecure. It's one thing having an endless stream of people waking up to find their bitcoins are irrevocably gone because someone hacked their computer, but we can't have domain names being irrevocably lost in the same way.
Does Namecoin actually work like that? If so, is there a similar alternative that doesn't?
I've wondered many times why OCSP isn't distributed the way DNS is. When we talk about websites, surely there's no more than one certificate per hostname (or fewer, i.e. wildcards). I don't think we're talking about something impossible or infeasible with our current technology and computing power.
Also, certificate "whitelisting" could be a part of the DNS protocol itself (return the IP address of the requested hostname and the hash of its current, valid certificate).
Just to clarify: OCSP is distributed, but I can't ask my local ISP OCSP server about your certificates. I have to ask your OCSP server about your certificates.
It seems the only problem with hard-fail is the risk of DoS attacks by targeting OCSP servers. However, if you include OCSP stapling you won't be affected. So a solution may be to encourage all users to enable revocation checking with hard-fail, and all servers to support OCSP stapling.
It's not the Internet, just the CA system. There are better systems for handling trust out there, for example, people have been signing each other's PGP keys at key signing parties for decades.
I am thinking that an HSTS option enabling hard-fail OCSP plus OCSP stapling is probably a good idea, though probably less secure than putting it in the certificate.
Let me get this straight. Sites across the internet are (hopefully) revoking their certificates and issuing new ones to address Heartbleed, but Mr. Langley is suggesting that we shouldn't check for revoked certificates because it might not do anything and it's slow?
Sorry, but after the last few weeks I'll happily accept a little slowness for the security revocation checking provides in the cases where it does work, even if it's not 100% of the cases.
Well, his argument is also that the attacker can easily circumvent it, which is true, but it still makes the attack slightly harder, because the attacker has to remember to do so.
The article gives two reasons for why 'soft-fail' is required: Captive-portals, and OCSP server failure.
To deal with captive portals: have an SSL-signed 'subdomain.google.com/you_are_on_the_internet' site/page that Google Chrome can use to check whether it's behind a captive portal or not. If it's captive, enable soft-fail. If internet access is available, set to hard-fail.
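A rough sketch of that probe-then-decide idea. The probe URL, expected response, and policy names are all hypothetical; real browsers do run captive-portal probes, but not necessarily like this:

```python
import urllib.request

PROBE_URL = "https://example.com/generate_204"  # hypothetical probe endpoint

def looks_like_open_internet(status, body):
    """A captive portal typically intercepts the probe and returns its own
    login page; the real endpoint returns an empty 204 response."""
    return status == 204 and body == b""

def probe(url=PROBE_URL):
    """Fetch the probe URL and classify the response."""
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            return looks_like_open_internet(resp.status, resp.read())
    except OSError:
        return False  # no connectivity at all: treat like a captive network

def ocsp_failure_mode(open_internet):
    """Hard-fail revocation checks only when we know we're on the open
    internet; soft-fail behind a portal so the user can still log in."""
    return "hard-fail" if open_internet else "soft-fail"
```

The weak point, as the following paragraphs suggest, is that an attacker who can block OCSP can probably also spoof or block the probe, so the probe endpoint would itself need to be hard to interdict.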
Websites these days are complex, with many (digital) moving parts - the database server(s), the static image server(s), dynamic response server(s), gateway server, probably a memcache server or something similar. If any one of those goes down, the site is unusable. Why then, should the OCSP server going down be considered any differently? Is a black-hat rented bot-net running a DDoS going to care if it's the main gateway server or the OCSP server?
But let's say we do consider disabled OCSP servers to be a client-side issue. Google could query and cache the OCSP server status, either with OCSP stapling or via some side-channel they build into Google Chrome.
The combination of both would allow hard-fail to be an option in Google Chrome.
Why not hard-fail by default and give the user the option to ignore/override it? Similar to the way other certificate warnings are shown to the end-user.
The author appears to entirely ignore attack vectors where the malicious party can record but not modify/block traffic.
Edit: I get it, I missed that for sites where the key has been changed the stolen key no longer allows such eavesdropping. Thank you to yuhong for helping point this out rather than just laughing at my ignorance while pushing me down the page.
I'm having a lot of trouble getting past: "Certificates bind a public key and an identity (commonly a DNS name) together."
X.509 certificates bind a public key and a human recognizable string (a "common name") together to create a verifiable digital identity. Over-simplified, X.509 is about solving the "I'm Spartacus" problem.
CRLs solve the "He was Spartacus" problem. I agree with the broad conclusion that CRLs aren't effective for human trust, but they are perfectly reasonable for machine trust.
Why didn't the author mention Kerberos? The default lifetime of a Kerberos ticket is designed around humans: roughly the length of a work shift in front of a computer terminal.
Short-lived certificates were explored in Towards Short-Lived Certificates http://crypto.stanford.edu/~dabo/pubs/abstracts/ssl-shortliv...
https://www.imperialviolet.org/2011/04/29/filters.html
Something has to give. We need to abolish SSL/TLS and migrate to something that isn't broken by design