I keep hoping they will help address non-Internet TLS. For example, if you run an HTPC, fridge, printer, device controller or anything similar on your LAN and want to talk to it over that same LAN using TLS, getting a workable cert is currently not possible: for one thing, the LAN names aren't going to be unique.
Plex did solve this in conjunction with a certificate authority, but that solution only works for them. The general approach could work for others if someone like letsencrypt led the effort. https://blog.filippo.io/how-plex-is-doing-https-for-all-its-...
It's certainly possible to get a trusted certificate for a LAN-only device. DNS-based validation is your best option here. The only requirement is that you use an ICANN ("public") domain. This is not a requirement made up by Let's Encrypt, but rather by the CA/B Forum, and it applies to all CAs (for good reasons[1]!)

[1]: https://cabforum.org/wp-content/uploads/Guidance-Deprecated-...
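For the LAN-only case, the DNS-01 challenge is what makes this work: the CA asks you to publish a TXT record under _acme-challenge.&lt;name&gt;, so the device itself never needs to be reachable from the Internet. A minimal sketch of how an ACME client derives that TXT value per RFC 8555, section 8.4; the token and account-key thumbprint below are made-up placeholders:

```python
import base64
import hashlib

def dns01_txt_value(token: str, jwk_thumbprint: str) -> str:
    """TXT record value for an ACME DNS-01 challenge (RFC 8555 s8.4):
    base64url(SHA-256(token "." account-key-thumbprint)), unpadded."""
    key_authorization = f"{token}.{jwk_thumbprint}"
    digest = hashlib.sha256(key_authorization.encode("ascii")).digest()
    return base64.urlsafe_b64encode(digest).rstrip(b"=").decode("ascii")

# Placeholder token and thumbprint, just to show the shapes involved.
value = dns01_txt_value("evaGxfADs6pSRb2LAv9IZf17Dt3juxGJ-PCt92wr-oA",
                        "9jg46WB3rR_AHD-EBXdN7cBkH1WOu0tA3M9fm21mqTI")
print(f"_acme-challenge TXT value: {value}")
```

A real client computes the thumbprint from its account key and publishes the value via your DNS provider's API; only the derivation is shown here.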
The Plex approach would be possible with Let's Encrypt, though you would have to find a way to avoid running into rate limits (via PSL or by making users use their own domains, which is admittedly only an argument if you're catering to a technical audience).
The fact that it would not be unique would fundamentally undermine the security of the CA system.
Nothing would stop someone from getting a certificate for the hostname "myfridge" on their LAN, then going to your LAN and using the same one to perform MitM for your "myfridge".
The Plex approach is very interesting though! There would be a lot to think through, but Let's Encrypt could do it if anyone could.
> Getting a workable cert is currently not possible: for example the LAN names aren't going to be unique
Connectivity [1] and using a global namespace are orthogonal things: you can use global DNS namespace just fine independent of connectivity. So from the naming perspective it Just Works if you get certs for printer.yourhouse.you.tld and fridge.yourhouse.you.tld.
(Of course you'd still like an automated cert renewal system for this disconnected case, but that's just a "simple matter of programming".)
[1] assuming by "LAN" you meant "network disconnected from the Internet"
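That "simple matter of programming" for the disconnected case starts with an expiry check. A sketch, assuming you already have the cert's notAfter string in the format Python's ssl module reports (the date used here is hypothetical):

```python
import ssl
import time

RENEW_WINDOW = 30 * 24 * 3600  # renew when fewer than 30 days remain

def needs_renewal(not_after: str, now: float) -> bool:
    """Decide whether a cert should be renewed, given its notAfter
    timestamp in the format returned by ssl.getpeercert()."""
    expiry = ssl.cert_time_to_seconds(not_after)
    return expiry - now < RENEW_WINDOW

# Hypothetical expiry date, for illustration only.
print(needs_renewal("Jan 15 00:00:00 2030 GMT", time.time()))
```

The actual renewal step (running an ACME client with DNS-01, then deploying the new cert to the device) is not shown; this is only the scheduling side.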
Amen! While it's possible to get certs for things like firewalls and other embedded devices, it's a big PITA. Factor in the short expiration times, and buying a 2-5 year cert becomes a lot more attractive for those use cases.
Just as the entire world is going HTTPS, my faith in the system is seriously waning. When Symantec bought Blue Coat, it made me start to think about how fragile this is. How long before Symantec gets an NSL demanding an appliance that can mint bogus certs on the fly for dropbox.com, facebook.com, twitter.com, etc.?
How effective is something like certificate pinning against fraudulent certs?
If the bogus certs are not logged in Certificate Transparency, they will be rejected by Chrome: https://security.googleblog.com/2015/10/sustaining-digital-c...

If they are logged in Certificate Transparency, then the world will know, the offending certificates will be immediately blacklisted, and Symantec will be booted from root programs.
With the ongoing advancements in Certificate Transparency, your faith in the Internet PKI should be growing, not waning.
I don't think they would be considered fraudulent at all, and in fact I'm pretty sure that's the "safety valve" built into the system and why public encryption is now being encouraged. I share your tinfoil hatted feelings wholeheartedly.
I'm still bitter about this chain of trust model. The fact that I have to get some other party to tell my users that they can trust me just seems wrong. They trust me because of personal history, not because some banner says they should.
Browsers and OS vendors shipping CAs seems to be the root of the problem, in my mind. Those should be distributed by the service providers, who are the actual trustworthy entities in the user's minds.

The chain of trust is not to tell your users to trust you. It's to tell your users not to trust me, even if I look just like you.
> Those should be distributed by the service providers, who are the actual trustworthy entities in the user's minds.
That's what HPKP does, basically, unless I'm misinterpreting what you mean with service providers.
HPKP is Trust on First Use, so it's not perfect, but the alternative - some kind of Web of Trust - is not really practical for non-technical, not-security-conscious users, IMO.
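Concretely, an HPKP pin is the base64 of the SHA-256 hash of the certificate's SubjectPublicKeyInfo (RFC 7469). A sketch of computing pins and assembling the header; extracting the real SPKI from a certificate needs an X.509 parser, so the byte strings below are dummies:

```python
import base64
import hashlib

def hpkp_pin(spki_der: bytes) -> str:
    """HPKP pin (RFC 7469): base64 of SHA-256 over the DER-encoded
    SubjectPublicKeyInfo of a certificate in the chain."""
    return base64.b64encode(hashlib.sha256(spki_der).digest()).decode("ascii")

def hpkp_header(primary: str, backup: str, max_age: int = 5184000) -> str:
    """Assemble a Public-Key-Pins header value. RFC 7469 requires at
    least one backup pin for a key you hold but are not yet serving."""
    return (f'pin-sha256="{primary}"; pin-sha256="{backup}"; '
            f'max-age={max_age}')

# Dummy key material, just to show the shapes involved.
pin = hpkp_pin(b"\x30\x82\x01\x22placeholder-spki-bytes")
backup = hpkp_pin(b"placeholder-backup-spki-bytes")
header = hpkp_header(pin, backup)
print(header)
```

The backup-pin requirement is what mitigates the Trust-on-First-Use bricking risk: if the served key is lost, the backup key can take over without locking returning users out.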
Aren't the root authorities in browsers/OSes just a means of shortcutting chain-of-trust validation by eliminating the need to validate the chain up to a single root?
I share your frustration, and I understand that trying to manage levels of trust is a tough problem, compounded by the fact that a user's expectations are fluid.
It's a bit like gun control laws; ultimately the criminals won't follow them. I was reading about some recent attacks and how hackers just steal certs or fool CAs into making certs for them. My understanding is that this is trivial for them to do in most cases. Turns out most CAs are run like security shitshows.
Meanwhile at work we're juggling dozens of certs left and right, each with their own expiry, as a handout to CAs. There's no reason why CAs can't sell me a cert that has a decade of validity. If the cryptography it uses goes bad, we'll just replace it. Why am I constantly buying these things?
Everything about CAs and browsers is wrong. Especially when many browsers ship with root certs from entities controlled by autocratic governments with zero accountability that are involved in cybercrime and cyberspying. I'm giving incredible access to these nation-states by downloading Firefox, Chrome, or IE. How is this "secure" again?
And when we visit your site for the first time, having never heard of you before, why should we trust you?
That's the point. Having some authority who did anywhere from minimal to extensive checking, and who will verify you really are who you purport to be. Trust but verify probably plays a part in this.
But, remember, you don't have to go to HTTPS. There is no requirement for you to do so.
Let’s Encrypt has issued more than 5 million certificates in total since we launched to the general public on December 3, 2015. Approximately 3.8 million of those are active, meaning unexpired and unrevoked. Our active certificates cover more than 7 million unique domains.
How can you cover 7 million unique domains if you've only issued 5 million certificates?
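One likely explanation: a single certificate can list many hostnames in its Subject Alternative Name extension (Let's Encrypt allows up to 100 per cert), so unique domains can outnumber certificates. A toy illustration with invented data:

```python
# Each certificate lists its hostnames in the SAN extension, so one
# cert can cover several domains. Invented example data:
certs = [
    ["example.com", "www.example.com"],
    ["blog.example.org", "example.org", "www.example.org"],
    ["shop.example.net"],
]
domains = {name for san in certs for name in san}
print(len(certs), "certificates covering", len(domains), "domains")
```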
This is great; I use Let's Encrypt for my company. However, the graph is a little misleading. Let's look closer:

Let's Encrypt is practically built upon the idea of frequently (and automatically) re-issuing your certificate(s). The graph's line shows what appears to be a cumulative sum of certificates issued by day.

If most certificates expire every 90 days, of course the graph will look like that!

What's most interesting to me is the steps up in the graph, which appear to occur at roughly 70-90 day intervals.
Impressive growth for a great mission/service, but I wanted to point out the mechanics behind the graph. Hopefully others can offer some alternative perspectives!
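The mechanics described above can be sketched with a toy model: a population of sites that all renew every 90 days, plus a trickle of new adopters, produces exactly this kind of stepped cumulative curve. All numbers here are invented:

```python
# Toy model of cumulative issuance: a fixed renewal cycle plus steady
# growth. The active-cert count stays modest while cumulative issuance
# climbs in ~90-day steps, as the comment above observes.
RENEW_EVERY = 90
sites = 100            # hypothetical initial adopters
new_per_day = 1        # hypothetical growth rate
cumulative = []
issued = 0
for day in range(360):
    sites += new_per_day
    issued += new_per_day                 # each new site gets a first cert
    if day and day % RENEW_EVERY == 0:
        issued += sites                   # idealized: everyone renews at once
    cumulative.append(issued)
print(cumulative[-1])
```

In reality renewals are smeared out (clients renew at 60-70 days, not all on the same day), which turns the hard steps into the rounded ramps visible in the published graph.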
Is it still problematic to issue lots of certs for lots of subdomains? I mean, are there still no wildcard certs, and crazy rate limits that disallow issuing 1000s of certs per day for user-generated subdomains?

https://community.letsencrypt.org/t/rate-limits-for-lets-enc...
Wildcard certs are also a huge need for platforms like Sandstorm.io which opens documents on arbitrary/randomly-generated subdomains. And as someone who hosts a lot of things on various subdomains in general, the idea of having a bunch of different certs is far less appealing than a wildcard cert.
But unfortunately it doesn't seem like Let's Encrypt currently has any plans to add wildcard certs any time soon.

Though, it would be nice if the likes of dyndns names were given an exception, since they are effectively second-level TLDs.
Though if you are a hosting provider, for example, I'm sure you could try to negotiate a deal with Let's Encrypt for more tolerant rate limits in exchange for a bit of sponsoring.
It sounds like you are doing something serious enough that Let's Encrypt might not meet your needs in other ways. Pay up for a wildcard cert or refactor subdomains out of your architecture.
My understanding is that for an intranet, you could use Let's Encrypt. For example, if I own foo.com and I want my intranet hosts to live under internal.foo.com, I need to put the internal.foo.com names in DNS in order to verify I own them, correct? But then doesn't that expose my 'internal' network? Hope there is a different way to solve this problem.
You don't need to "open up" your internal network (the ownership validation can happen via DNS), but the hostname would be public through Certificate Transparency.
Generally, if you're relying on your internal hostnames being secret (which is a terrible idea anyway), you should consider using an internal CA, because there's a good chance all public CAs will start logging every single certificate they issue to public logs, and that would include all the domains the certificate is valid for¹. Better yet, don't treat your hostnames as secrets.
¹ I think there have been some discussions about allowing CAs to redact DNS labels below the TLD+1 level for Certificate Transparency. Not sure if that's going to happen; I'm not a fan. This would still require that your CA supports this mechanism, something I don't think Let's Encrypt would do.
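The label-redaction idea mentioned in the footnote is roughly this: hide every label left of the registered domain before logging. A toy sketch; real proposals are more involved (they have to respect the Public Suffix List, for instance):

```python
def redact_labels(hostname: str, keep: int = 2) -> str:
    """Replace all DNS labels left of the rightmost `keep` labels with
    '?'. A toy model of CT label redaction; it naively assumes the
    registered domain is always exactly two labels."""
    labels = hostname.split(".")
    hidden = max(len(labels) - keep, 0)
    return ".".join(["?"] * hidden + labels[-keep:])

print(redact_labels("secret-db.internal.example.com"))  # ?.?.example.com
```

Even redacted, the log still reveals that example.com obtained a certificate with two hidden subdomain labels, which is part of why treating hostnames as secrets is a losing game.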
Remember, once you encrypt a web resource in SSL, you add a ton of baggage on top of any methods that might be used to access it.
I like a world in which I can 'nc' a web resource and manipulate it with unix primitives without a truckload of software dependencies.
If sensitive information is involved, then certainly - use SSL. I understand that we must give up conveniences for that functionality.
But there are a lot of web resources that have existed, do exist, and potentially exist that are completely benign ... I think we're shackling ourselves by chasing after this perfection.
Or, put another way, we're chaining ourselves to a world where web resources are only accessed by web browsers, and only by those web browsers that are chaining themselves to a fairly dubious security scheme...
Just as you can use "nc" for an HTTP resource, you can use "openssl s_client" or "ncat --ssl" (from the nmap project) or "socat" to manipulate an HTTPS resource using the same unix primitives. Which truckload of dependencies does this require? The Debian package for OpenSSL only depends on libc.
I do fully agree that the web is getting more tied to browsers, and to me that's worrying, but TLS is mostly a transparent tunnel over which you can use the same protocols; it's not part of that trend, in my opinion.
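The "transparent tunnel" point can be made concrete: the bytes you would pipe into nc are exactly the bytes you send after wrapping the socket in TLS. A sketch using only the Python standard library (example.com is just a stand-in host):

```python
import socket
import ssl

def build_request(host: str, path: str = "/") -> bytes:
    """The same raw HTTP/1.1 request bytes you would pipe into nc."""
    return (f"GET {path} HTTP/1.1\r\n"
            f"Host: {host}\r\n"
            "Connection: close\r\n\r\n").encode("ascii")

def fetch(host: str, path: str = "/", port: int = 443) -> bytes:
    """The nc workflow over TLS: wrap the socket, then talk plain HTTP."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port)) as raw:
        with ctx.wrap_socket(raw, server_hostname=host) as tls:
            tls.sendall(build_request(host, path))
            chunks = []
            while True:
                chunk = tls.recv(4096)
                if not chunk:
                    break
                chunks.append(chunk)
    return b"".join(chunks)

print(build_request("example.com").decode("ascii"), end="")
# fetch("example.com") would perform the actual HTTPS request.
```

Nothing about the HTTP protocol changed; only the transport did, which is the same thing "openssl s_client" and "ncat --ssl" give you at the shell.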
Tor onion services are technically an alternative. The address is derived from a hash of the public key, and you are only able to put up that URI if you have control over the private key. I guess something similar based on hashing and public-key cryptography might be possible outside of Tor, but it's not exactly user-friendly to begin with.
> That way when https is found to lack some feature, we can easily upgrade to httpz almost immediately?

This will likely never be the case due to how HTTPS actually works. As someone else stated, HTTPS is HTTP + TLS.
The "s" in HTTPS is for "secure", and TLS provides that security.
TLS is an evolving standard which is updated over time to add new features when necessary. When HTTPS is negotiated, it can seamlessly choose which version of TLS to use, based on what the client and server support.
So, HTTPS will never die due to lack of features. A new version of TLS will just be approved and deployed, and newer devices can use that while older devices can get by on an older version of TLS.
TLS is the successor to SSL. The version negotiation is backwards compatible, so a client and server can agree on the newest version both sides support. The full version history, from newest to oldest, is: TLS 1.2, TLS 1.1, TLS 1.0, SSL 3, SSL 2. In reality, very few servers still allow SSL 3 or SSL 2, due to known weaknesses, but colloquially, all the versions are just called "SSL".
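That version negotiation is visible in everyday APIs. With Python's ssl module, for instance, a client can constrain the versions it will accept (modern stacks already disable SSL 2/3 outright):

```python
import ssl

# Refuse to negotiate anything below TLS 1.2. The handshake then picks
# the newest version both endpoints allow within these bounds.
ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_2
print(ctx.minimum_version)
```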
TLS 1.3 is underway and will shortly be ready for primetime. Firefox and Cloudflare have already written some implementations based on the draft spec (sorta like how routers implement the newest 802.11 standards before they are 100% official).

The http in https just means http. It has evolved from http/1.0 to http/1.1 to http/2.

I'm not sure what you're asking or how it is relevant to Let's Encrypt.
The situation is not ideal. But the consensus among browser makers is that the previously-relevant standards bodies move too slowly. They can implement new transport features independently (like Chrome did with SPDY and QUIC). But the downside is that fragmentation is more likely, as most browsers implemented SPDY's features in HTTP/2 but only Opera has added QUIC.
I see this as security theater. Most web pages don't need to be encrypted. Anything with a form should be, but if you're just viewing static content, there's little point. Yes, it obscures what content you're viewing, slightly. An observer often could figure that out from the file length.
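The length-fingerprinting claim is easy to demonstrate: TLS hides content, not size, so if every page on a static site has a distinct size, an observer can often guess the page from the transfer length alone. A toy sketch with invented page names and sizes:

```python
# Invented site: each page has a distinct size. An observer who sees
# only the (approximate) transfer length can guess which page was
# fetched, because TLS conceals content but not length.
page_sizes = {
    "index.html": 1834,
    "contact.html": 2210,
    "secret-plan.html": 9714,
}

def guess_page(observed_length: int) -> str:
    """Pick the page whose size is closest to the observed transfer."""
    return min(page_sizes, key=lambda p: abs(page_sizes[p] - observed_length))

print(guess_page(9700))  # secret-plan.html
```

Real traffic analysis must account for headers, compression, and record padding, but the basic leak is exactly this simple.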
Encrypting everything increases the demand for low-rent SSL certs. Anything below OV (Organization Validated) is junk, and if money is involved, an EV (Extended Validation) cert should be used. Trying to encrypt everything leads to messes such as Cloudflare's MITM certs which name hundreds of unrelated domains. This is a step backwards.
> Most web pages don't need to be encrypted. Anything with a form should be, but if you're just viewing static content, there's little point.
Some really cool HTML and JS functionality will only work over HTTPS.
> Yes, it obscures what content you're viewing, slightly. An observer often could figure that out from the file length.
If you have an attacker that can identify content solely from its length, you have bigger problems than an SSL cert can solve.
> Trying to encrypt everything leads to messes such as Cloudflare's MITM certs which name hundreds of unrelated domains. This is a step backwards.
I do not see the problem. All those domain owners consciously choose to have Cloudflare host their stuff. The cert might be a few KB bigger, but who cares?