I'm all for it -- it's hard to overstate the extent to which LetsEncrypt has improved the WebPKI situation. Although the effective single-vendor situation isn't great, the "this is just something you only do via an automated API" approach is absolutely the right one. And certificate lifetimes measured in days work just fine with that.
The only things that continue to amaze me are the number of (mostly "enterprise") software products that simply won't get with the times (or get it wrong, like renewing the cert, but continuing to use the old one until something is manually restarted), and the countless IT departments that still don't support any kind of API for their internal domains...
It's not single-vendor. The ACME protocol is also supported by the likes of GlobalSign, Sectigo, and Digicert.
You've got to remember that the reduction to a 45-day duration is industry-wide - driven by the browsers. Any CA not offering automated renewal (which in practice means ACME) is going to lose a lot of customers over the next few years.
> The only things that continue to amaze me are the number of (mostly "enterprise") software products that simply won't get with the times
Yeah, no one's rewriting a bunch of software to support automating a specific, internet-facing, sometimes-reliable CA.
Yes it's ACME, a standard, you say. A standard protocol with nonstop-changing profile requirements at LE's whim. Who's going to keep updating the software every 3 months to keep up when the WebPKI sneezes in a different direction and changes its mind yet again? Because 45 will become 30, will become 7, and they won't stop till the lifetime is 6 hours.
"Enterprise" products are more often than not using internal PKI so it's a waste.
I would like to see the metrics on how much time and resources are wasted babysitting all this automation vs. going in and updating a certificate manually once a year and not having to worry that the automation will fail in a week.
> We expect DNS-PERSIST-01 to be available in 2026

Very exciting!

https://datatracker.ietf.org/doc/html/draft-sheurich-acme-dn...
Big news for both the lazy homelab admin that can set a TXT once and ultimately be more secure without spraying DNS Zone Edit tokens all over their infra AND for the poor enterprise folks that have to open a ticket and wait 3 weeks for a DNS record.
This replaces an anonymous token with a LetsEncrypt account identifier in DNS. As long as accounts are not 1:1 to humans, that seems fine. But I hope they keep the other challenges.
I really would have felt better with a random token that was tied to the account, rather than the account number itself. The CA side can of course decide to implement it either way, but all examples are about the account ID.
Thank god. The only remaining failure mode I've seen with LE certs recently is an API key used to manipulate DNS records for the DNS-01 challenge (via some provider, Cloudflare etc.) expiring or being disabled during improper user offboarding.
Since we're on the topic of certificates, my app (1M+ logins per day) uses certificate pinning with a cert that lasts for one year, because otherwise it would be a nightmare to roll the cert multiple times in production. But what would be the "modern" way to do smart and automated certificate pinning, now that short-lived certs are becoming the trend?
Don't. Don't pin to public certificates. You're binding your app to third-party infrastructure beyond your control. Things change, and often.
Note that pinning to a root or intermediate seems 'sensible' - but it isn't. Roots are going to start changing every couple of years.
Issuing/intermediate CAs will be down to 6 months, and may even need to be randomised so when you request a new cert, there's no guarantee it'll be from the same CA as before.
The certificates will expire, but (as far as I'm aware), you're still allowed to use the same private key for multiple certificates, so as long as you pin to the public key instead of to the certificate itself, you should be fine.
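Pinning the hash of the SubjectPublicKeyInfo (the HPKP/OkHttp convention) rather than the certificate itself is the usual way to express this. A minimal sketch, assuming you've already extracted the SPKI as DER bytes (e.g. via `openssl x509 -pubkey -noout | openssl pkey -pubin -outform DER`):

```python
import base64
import hashlib

def spki_pin(spki_der: bytes) -> str:
    """base64(SHA-256(SubjectPublicKeyInfo)) -- this pin stays stable
    across renewals as long as the same private key is reused."""
    return base64.b64encode(hashlib.sha256(spki_der).digest()).decode("ascii")
```

Certbot, for example, has a `--reuse-key` option that keeps the key pair stable across renewals, so a pin computed this way survives them.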
The real modern way to do certificate pinning is to not do certificate pinning at all, but I'm sure that you've already heard this countless times before. An alternative option would be to run your own private CA, generate a new public/private keypair every 45 days, and generate certificates with that public key using both your private CA and Let's Encrypt, and then pin your private CA instead of the leaf certificates.
I think the suggestion of pinning the public key and keeping the same private key across certs is the best option. But if you don't want that, perhaps this is a (high complexity, high fragility) alternative:
- Make sure your app checks that enough trusted embedded Signed Certificate Timestamps are present in the certificate (web browsers and the iOS and Android frameworks already do this by default).
- Disallow your app from trusting certificates that were issued more recently than N hours ago. This might be hard to do.
- Set up monitoring to the certificate transparency logs to verify that no bad actor has obtained a certificate (and make sure you are always able to revoke them within N hours).
- Make sure you always have fresh keys with certificates in cold storage older than N hours, because you can't immediately use newly obtained certificates.
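The second bullet — refusing certificates younger than N hours — could look roughly like this, assuming you can read the not-before timestamp off the presented certificate (the hard part in practice is hooking such a check into your TLS stack's verification callback):

```python
from datetime import datetime, timedelta, timezone

def old_enough(not_valid_before: datetime, min_age_hours: int = 24) -> bool:
    """Reject certs issued less than min_age_hours ago, leaving time for
    CT-log monitoring to catch a mis-issued cert before the app trusts it."""
    age = datetime.now(timezone.utc) - not_valid_before
    return age >= timedelta(hours=min_age_hours)
```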
Pinning the intermediate CA should work. Alternatively, calculate the cost of updating the cert pinning mechanism if it's custom and compare it to paid, 1 year certificates (though those will go away eventually too).
On the other hand, if you're using an app specific server, there's no need for you to use public certificates. A self-generated one with a five or ten year validity will pin just as nicely. That breaks if you need web browsers or third parties to talk to the same API, of course.
Forgetting the obvious security advantage for the moment, I've found this to actually be convenient that the lifetimes are rather short. I'm not disciplined when it comes to setting up my homelab projects, so in the past sometimes I'd just LE it and then worry about it when renewal failed. My family is the only consumer so who cares.
But then they set some shorter lifetime, and I was forced to set up automation and now I've gotten a way of doing it and it's pretty easy to do. So now I either `cloudflared` or make sure certbot is doing its thing.
Perhaps if they'd made that more inconvenient I would have started using Caddy or Traefik instead of my good old trusty nginx knowledge.
It is already getting dangerously close to the duration of holiday freeze windows, compliance/audit enforced windows, etc.

Not to mention the undue bloat of CT logs.
> It is already getting dangerously close to the duration of holiday freeze windows, compliance/audit enforced windows, etc.
How do those affect automated processes though? If the automation were to fail somehow during a freeze window, then surely that would be a case of fixing a system and thus not covered by the freeze window.
> Not to mention the undue bloat of CT logs.
I'm not sure what you mean by "CT logs", but I assume it's something to do with the certificate renewal automation. I can't see that you'd be creating GBs of logs that would be difficult to handle. Even a home-based selfhosted system would easily cope with certificate logs from running it hourly.
> Acceptable behavior includes renewing certificates at approximately two thirds of the way through the current certificate’s lifetime.
So you can start renewing with 30d of lifetime remaining. You probably want to retry once or twice before alerting. So let's say 28d between alert and expiry.
That seems somewhat reasonable, but it's basically the lower margin of what I'd consider so. I feel like I should be able to walk away from a system for a month with no urgent maintenance needed. 28d is really cutting it close. I think the previous 60d was generous, but that is probably a good thing.
I really hope they don't try to make it shorter than this. Because I really don't want to worry about certificate expiry during a vacation.
Alternatively they could make the acceptable behaviour much higher. For example make 32d certificates but it is acceptable to start renewing them after 24h. Because I don't really care how often my automation renews them. What matters is the time frame between being alerted due to renewal failure and expiry.
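The 30d/28d figures above line up with today's 90-day certificates; parameterising the same arithmetic (renewal allowed at two-thirds of the lifetime, a couple of days of retries before alerting) shows what the rule leaves for a 45-day cert — about 13 days:

```python
def alert_to_expiry_days(lifetime_days: float,
                         renew_at_fraction: float = 2 / 3,
                         retry_days: float = 2) -> float:
    """Days between 'automation has failed, alert a human' and expiry,
    if renewal may start renew_at_fraction of the way through the lifetime."""
    remaining_at_first_attempt = lifetime_days * (1 - renew_at_fraction)
    return remaining_at_first_attempt - retry_days
```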
“I really hope they don’t try to make it shorter than this. Because I really don’t want to worry about certificate expiry during a vacation.”
You might want to consider force-renewing all your certs a few days before your vacation. Then you can go away for over 40 days. (Unless something else breaks…)
I’m maintaining a server with Let’s Encrypt certs for a B2B integration platform. Some partner systems still can’t just pin the CA and instead require a manual certificate update on their side. So every 90 days we do the same email ping-pong to get them to install the new cert — and now that window is getting cut in half.
Hopefully their software stack will be able to automate this by 2028.
I have often had the thought that CAs could be made obsolete by things like DNSSEC. I have no idea if I am correct, so I would like to ask: please tell me if/why I am wrong. Here is my argument:
- If you control the DNS record at the registrar, you can point it to whatever server you want and get a Let’s Encrypt certificate that way. Never mind the old model. So you need to trust the registrar, if they’re compromised that’s it.
- Back in the day, DNS was cached and plaintext and could not be trusted
- Now, you can use DNSSEC. And you could have the registrar sign the DNS entry, so a cache/ISP/corporate can’t forge it…
- So: why not stick a key, with metadata (e.g. expiration) in a signed DNS record. Now you only trust one person (the registrar) rather than two (the registrar and the CA)
- And there’s no extra paperwork or process since you need to do all that stuff anyways to register the domain.
I don’t know much networking. What’s the flaw in this argument? Is there one?
I’m imagining a world where instead of a trust database of CAs, you instead trust the keys of the DNS root servers, and the CA system and DNS system merge into one.
This also gives you a neat story for custom certs, etc, as it becomes a domain problem. It’s now perfectly natural to say “I trust the X root of trust for domains Y” for e.g. your internal system because that’s what you’re using to resolve those URLs.
To be fair, part of this is my fascination with the idea of using cryptographic keys as addresses, like in many a cool protocol, making DNS literally into a CA of the form “assign this name to this identity (key)”.
https://sockpuppet.org/blog/2015/01/15/against-dnssec/

https://sockpuppet.org/blog/2016/10/27/14-dns-nerds-dont-con...
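This idea largely exists already as DANE (RFC 6698): a TLSA record, protected by DNSSEC, binds a key to a name. For example, a "3 1 1" record (DANE-EE certificate usage, SPKI selector, SHA-256 matching) is just a hash of the server's public key — a sketch of constructing the record data:

```python
import hashlib

def tlsa_3_1_1(spki_der: bytes) -> str:
    """RDATA for a TLSA '3 1 1' record: trust this exact
    SubjectPublicKeyInfo, matched by SHA-256 -- no CA involved."""
    return f"3 1 1 {hashlib.sha256(spki_der).hexdigest()}"
```

The catch is that mainstream browsers never shipped DANE support, and the scheme concentrates trust in registrars and TLD operators instead of distributing it.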
Cert lifetimes are such a burden. I wanted to provide pre-configured server examples of my WebRTC project, something that was download-and-run without any prior knowledge (an important point), which users could access from their LAN, e.g. to test the examples from their phones (not from the useless localhost exemption that exists for secure contexts), and for which a self-signed cert embedded in the examples was fine. New users could run them, and new concepts (such as security and certificate management in production apps) could be learned at an appropriate time.
Until web browsers started to believe that no, that was too much of a convenience, so now long-expiration certs are rejected. What's the proposed solution from the "industry"? To run a whole automation pipeline just to update a file in each example folder every few months? Bonkers. These should be static examples, with no reason to update them any earlier than every few years, at most.
A certificate is a binding of a cryptographic key, along with an attestation of control of a DNS record(s) at a point in time. DNS changes frequently. The attestation needs to be refreshed much more frequently to ensure accuracy.
One would hope they're also increasing rate limits along with this, but there's no indication of that yet.
> Up to 50 certificates can be issued per registered domain (or IPv4 address, or IPv6 /64 range) every 7 days. This is a global limit, and all new order requests, regardless of which account submits them, count towards this limit.
This is hard to deal with when you have a large number of subdomains and you'd rather (as per their recommendations) not issue SAN certificates with multiple subdomains on them.
We are working on further improvements to our rate limits, including adding more automation to how we adjust them. We're not ready to announce that yet.
We wanted to get this post out as soon as we'd decided on a timeline so everyone's on the same page here.
Certificates that look like renewals -- for the same set of names, from the same account -- are exempt from rate limits. This means that renewing (for example) every 30 days instead of every 60 days will not cost any rate limit tokens or require any rate limit overrides.
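A rough model of that exemption as stated — an order is rate-limit-exempt if the same account previously obtained a certificate for exactly the same set of names (the data shapes here are hypothetical; the real check happens on the CA side):

```python
def looks_like_renewal(account_id: str, names: list[str],
                       prior: list[tuple[str, list[str]]]) -> bool:
    """Exempt from rate limits if some prior issuance by this account
    covered exactly this name set (order- and case-insensitive)."""
    wanted = frozenset(n.lower() for n in names)
    return any(acct == account_id and frozenset(n.lower() for n in ns) == wanted
               for acct, ns in prior)
```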
How do people here deal with distributed websites? I’m currently issuing one certificate on my machine and then Ansible-ing it into all the servers. I could issue one certificate for each server, but then at any given time I’d have dozens of certs for the same domain, and all must be valid. That doesn’t sound right either.
Organizations with many frontends/loadbalancers all serving the same site tend to adopt one of four solutions:
- Have one node with its own ACME account. It controls key generation and certificate renewal, and then the new key+cert is copied to all nodes that need it. Some people don't like this solution because it means you're copying private keys around your infrastructure.
- Have one node with its own ACME account. The other nodes generate their own TLS keys, then provide a CSR to the central node and ask it to do the ACME renewal flow on their behalf. This means you're never copying keys around, but it means that central node needs to (essentially) be an ACME server of its own, which is a more complex process to run.
- Have one ACME account, but copy its account key to every node. Have each node be in charge of its own renewal, all using that shared account. This again requires copying private keys around (though this time its the ACME key and not the TLS key).
- Give every node its own ACME account, and have each node be in charge of its own renewal.
The last solution is arguably the easiest. None of the nodes have to care about any of the others. However, it might run into rate limits; for example, LE limits the number of new account registrations per IPv6 range, so if you spin up a bunch of nodes all at once, some of them might fail to register their new accounts. And if your organization is large enough, it might run into some of LE's other rate limits, like the raw certificates-per-domain limit. Any of the above solutions would run into that rate limit at the same time, but rate limit overrides are most easily granted on a per-account basis, so having all the nodes share one account is useful in that regard.
Another factor in the decision-making process is what challenge you're using. If you're using a DNS-based challenge, then any of these solutions work equally well (though you may prefer to use one of the centralized solutions so that your DNS API keys don't have to live on every individual node). If you're using an HTTP-based challenge, you might be required to use a centralized solution, if you can't control which of your frontends receives the HTTP request for the challenge token.
Anyway, all of that is a long-winded way to say "there's no particularly wrong or right answer". What you're doing right now makes sense for your scale, IMO.
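One small trick that helps with the last option (every node on its own account and schedule): derive a deterministic per-node jitter, so a fleet that was spun up together doesn't renew in lockstep and trip new-order rate limits — a sketch:

```python
import hashlib

def renewal_jitter_hours(node_name: str, window_hours: int = 72) -> int:
    """Deterministic offset in [0, window_hours) derived from the node
    name, spreading a fleet's renewal attempts across the window."""
    digest = hashlib.sha256(node_name.encode("utf-8")).digest()
    return int.from_bytes(digest[:4], "big") % window_hours
```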
By "distributed websites" you mean multiple webservers for one FQDN? Usually TLS termination would happen higher up the stack than on the webservers themselves (reverse proxy, L7 load balancer, etc.) and the cert(s) would live there. But if your infrastructure isn't that complicated then yes, the happy path is to have each webserver independently handle its own certificate (but note the issuance rate limits: 5 certs per week for the exact same hostname [1]).
[1] https://letsencrypt.org/docs/rate-limits/#new-certificates-p...
As someone who works at a company who has to manage millions of SSL certificates for IoT devices in extremely terrible network situations I dread this.
One of the biggest issues is handling renewals at scale, and I hate it. Another increasing frustration is that challenges via DNS are not quick.
Are these IoT devices expected to be accessible via a regular Web browser from the public Internet? Does each of them represent a separate domain that needs a separate certificate, which it must not share with other similar devices?
I would strongly suggest that these certs have no reason to be from a public CA and thus you can (and should) move them to a private CA where these rules don't apply.
I'm sure this is for good reasons, but as someone that maintains a lot of ssl certificates, I'm not in love with this change. Sometimes things break with cert renewal, and it sometimes takes a chunk of time to detect and then sit down to properly fix those issues. This shortens the amount of time I will have to deal with that if it ever comes up (which is more often than you would expect), and increases the chances of running into rate limit issues too.
This is too short and the justification provided is flimsy at best.
I predict that normal people will begin to get comfortable with ignoring SSL errors, even more than they already are. Perhaps we will see the proliferation of HTTPS-stripping proxies too.
I’m imagining that xkcd meme about internet infrastructure and one of the thin blocks holding the whole thing up being LE.
Is there any good argument for short lifetimes? The only argument I know of is that short lifetimes are supposedly better in case the key gets compromised, but I disagree. If the key can be compromised once it can be compromised again when it renews; the underlying cause of compromise doesn’t go away. NIST stopped recommending forced password rotation for this reason, it’s pseudosecurity.
Translation: Like any large bureaucracy, the certificate industry sees its own growth as a moral virtue, and recognizes no limits to the burdens it should be free to impose on the rest of society.
According to TFA LE already offers a "shortlived" profile that issues 6-day certs if you want to stress test your automation, or just gain the security advantages of rapid certificate turnover immediately.
The goal is to move to short lived certs to make the fragile system of revocation lists and public certificate logs unnecessary.
"This change is being made along with the rest of the industry, as required by the CA/Browser Forum Baseline Requirements, which set the technical requirements that we must follow."
I don't follow. Why? Why not an hour? An SSL failure is a very effective way to shut down a site.
"you should verify that your automation is compatible with certificates that have shorter validity periods.
To ensure your ACME client renews on time, we recommend using ACME Renewal Information (ARI). ARI is a feature we’ve introduced to help clients know when they need to renew their certificates. Consult your ACME client’s documentation on how to enable ARI, as it differs from client to client. If you are a client developer, check out this integration guide."
Oh that sounds wonderful. So every small site that took the LE bait needs expensive help to stay online.
Do they track and publish the sites they take down?
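For what it's worth, the client side of ARI quoted above is tiny: the CA serves a suggested renewal window and the client picks a random instant inside it. A sketch (per the ARI draft the JSON field is `suggestedWindow` with `start`/`end` timestamps; fetching and parsing are omitted here):

```python
import random
from datetime import datetime, timedelta

def pick_renewal_time(start: datetime, end: datetime, now: datetime) -> datetime:
    """Choose a uniformly random instant inside the CA's suggested window;
    if that moment has already passed, renew immediately."""
    span = (end - start).total_seconds()
    chosen = start + timedelta(seconds=random.uniform(0, span))
    return now if chosen < now else chosen
```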
They've been slowly moving the time lower and lower. It will go lower than 45 days in the future, but the reason why we don't go immediately to 1 hour is that it would be too much of a shock.
>So every small site that took the LE bait needs expensive help to stay online.
It's all automated. They don't need help to stay online.
>Oh that sounds wonderful. So every small site that took the LE bait needs expensive help to stay online.
I agree with the terminology "bait", because the defaults advocated by letsencrypt are horrible. Look at this guide [0].
They strongly push you towards the HTTP-01 challenge, which is the one that requires the most infrastructure (HTTP webserver + certbot) and is the hardest to set up. The best challenge type in that list is TLS-ALPN-01, which they dissuade you from! "This challenge is not suitable for most people."
And yet when you look at the ACME client for JVM frameworks like Micronaut [1], the default is TLS and it's the simplest to set up (no DNS access or external webserver). Crazy.
[0] https://letsencrypt.org/docs/challenge-types/
[1] https://micronaut-projects.github.io/micronaut-acme/5.5.0/gu...