I like how the article describes how certificates work for both client and server. I know a little bit about it but what I read helps to reinforce what I already know and it taught me something new. I appreciate it when someone takes the time to explain things like this.
Thanks! I didn't intentionally write this for a broader audience (I didn't expect to see it while casually opening HN!). Our user base is quite diverse, so I try to find the balance between being too technical and over-explanatory. Glad it was helpful!
I would think it's more secure than clientAuth certs because if an attacker gets a misissued cert they'd have to actually execute a MitM attack to use it. In contrast, with a misissued clientAuth cert they can just connect to the server and present it.
Another fun fact: the Mozilla root store, which I'd guess the vast majority of XMPP servers are using as their trust store, has ZERO rules governing clientAuth issuance[1]. CAs are allowed to issue clientAuth-only certificates under a technically-constrained non-TLS sub CA to anyone they want without any validation (as long as the check clears ;-). It has never been secure to accept the clientAuth EKU when using the Mozilla root store.
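To make the EKU point concrete, here's a rough sketch using the Python `cryptography` package (the helper names are mine, not from any real codebase): it mints a throwaway self-signed cert carrying only the clientAuth EKU and shows a verifier refusing it for server-auth use.

```python
# Sketch: why the EKU matters. Builds a throwaway self-signed cert with only
# the clientAuth EKU, then shows a verifier rejecting it for server use.
# Requires the third-party 'cryptography' package; names are illustrative.
import datetime
from cryptography import x509
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.x509.oid import ExtendedKeyUsageOID, NameOID

def make_cert(ekus):
    key = ec.generate_private_key(ec.SECP256R1())
    name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "chat.example.com")])
    now = datetime.datetime.now(datetime.timezone.utc)
    return (
        x509.CertificateBuilder()
        .subject_name(name).issuer_name(name)
        .public_key(key.public_key())
        .serial_number(x509.random_serial_number())
        .not_valid_before(now)
        .not_valid_after(now + datetime.timedelta(days=1))
        .add_extension(x509.ExtendedKeyUsage(ekus), critical=False)
        .sign(key, hashes.SHA256())
    )

def allowed_for_server_auth(cert):
    ekus = cert.extensions.get_extension_for_class(x509.ExtendedKeyUsage).value
    return ExtendedKeyUsageOID.SERVER_AUTH in ekus

client_only = make_cert([ExtendedKeyUsageOID.CLIENT_AUTH])
both = make_cert([ExtendedKeyUsageOID.CLIENT_AUTH, ExtendedKeyUsageOID.SERVER_AUTH])
print(allowed_for_server_auth(client_only))  # False
print(allowed_for_server_auth(both))         # True
```

A relying party that skips this check (or accepts either EKU) is exactly the situation described above.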
> Is there a reason why dialback isn't the answer?
There are some advantages to using TLS for authentication as well as encryption, which is already a standard across the internet.
For example, unlike an XMPP server, CAs typically perform checks from multiple vantage points ( https://letsencrypt.org/2020/02/19/multi-perspective-validat... ). There is also a lot of tooling around TLS, ACME, CT logs, and such, which we stand to gain from.
In comparison, dialback is a 20-year-old homegrown auth mechanism, which is more vulnerable to MITM.
Nevertheless, there are some experiments to combine dialback with TLS. For example, checking that you get the same cert (or at least public key) when connecting back. But this is not really standardized, and can pose problems for multi-server deployments.
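The connect-back check mentioned above can be sketched as simple public-key pinning. These are hypothetical helpers with no real XMPP or networking, just the comparison that would happen after dialing back:

```python
# Sketch of the "same key on dial-back" idea: instead of trusting a CA,
# pin the public key (SPKI) seen on the inbound connection and require the
# same one when connecting back to the peer's advertised address.
import hashlib

def spki_fingerprint(der_spki: bytes) -> str:
    # SHA-256 over the DER-encoded SubjectPublicKeyInfo
    return hashlib.sha256(der_spki).hexdigest()

def dialback_keys_match(inbound_spki: bytes, outbound_spki: bytes) -> bool:
    # The peer passes only if dialing back presents the same public key
    # we saw on the inbound connection.
    return spki_fingerprint(inbound_spki) == spki_fingerprint(outbound_spki)

print(dialback_keys_match(b"same-key", b"same-key"))  # True
print(dialback_keys_match(b"same-key", b"mitm-key"))  # False
```

The multi-server problem mentioned above shows up here directly: if the dial-back lands on a different box with a different key pair, this check fails even for an honest peer.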
> It has never been secure to accept the clientAuth EKU when using the Mozilla root store.
Good job we haven't been doing this for a very long time now :)
> CAs are allowed to issue clientAuth-only certificates under a technically-constrained non-TLS sub CA to anyone they want without any validation (as long as the check clears ;-). It has never been secure to accept the clientAuth EKU when using the Mozilla root store.
It has never been secure to rely on the Mozilla root store at all, or any root store for that matter, as they all contain certificate authorities which are in actively hostile countries or can otherwise be coerced by hostile actors. The entire security of the web PKI relies on the hope that if some certificate authority does something bad it'll become known.
> The current CA ecosystem is *heavily* driven by web browser vendors (i.e. Google, Apple, Microsoft and Mozilla), and they are increasingly hostile towards non-browser applications using certificates from CAs that they say only provide certificates for consumption by web browsers.
Let's translate and simplify:
> The current CA ecosystem is Google. They want only Google applications to get certificates from CAs.
For decades there have been a few entities interested in actually providing a working and trustworthy PKI for the Internet - it's called the Web PKI because in practice the only interested parties are always browser vendors.
There are always plenty of people who aren't interested in doing any hard work themselves but are along for the ride, and periodically some of those people are very angry because a system they expended no effort to maintain hasn't focused on their needs.
The Web PKI wasn't somehow blessed by God, people made this. If you do the hard work you can make your own PKI, with your own rules. If you aren't interested in doing that work, you get whatever the people who did the work wanted. This ought to be a familiar concept.
Huh? Google does not even make a web server, or any kind of major servers, unless you count GCP load balancers or whatever. You are confusing their control of the client (which is still significantly shared with Apple and Microsoft since they control OS-level certificate trusts) with the server side, who are the "customers" of the CA. Google has almost no involvement in that and couldn't care less what kind of code is requesting and using certificates.
No. HTTPS certificates are being abused for non-https purposes. CAs want to sell certificates for everything under the sun, and want to force those in the ecosystem to support their business, even though https certificates are not designed to be used for other things (mail servers for example).
If CAs don't want hostility from browser companies for using https certificate for non-http/browser applications, they should build their own thing.
Google has recently imposed a rule that CA roots trusted by Chrome must be used solely for the core server-authentication use case, and can't also be used for other stuff. They laid out the rationale here: https://googlechrome.github.io/chromerootprogram/moving-forw...
It's a little vague, but my understanding reading between the lines is that sometimes, when attempts were made to push through security-enhancing changes to the Web PKI, CAs would push back on the grounds that there'd be collateral damage to non-Web-PKI use cases with different cost-benefit profiles on security vs. availability, and the browser vendors want that to stop happening.
Let's Encrypt could of course continue offering client certificates if they wanted to, but they'd need to set up a separate root for those certificates to chain up to, and they don't think there's enough demand for that to be worth it.
The real takeaway is that there's never been a lot of real thought put into supporting client authentication - e.g. there's no root CA program for client certificates. To use a term from that discussion, it's usually just "piggybacked" on server authentication.
Imho it's because putting both EKUs into a certificate just by convention (or whatever the reason was to still do it) is not best practice for a CA that has the web PKI in scope.
From experience, people often mistake client authentication for a substitute for user authentication, which it simply isn't, and then they are surprised that anyone with the certificate can log in...
Yeah, people with knowledge should know the difference, but I have seen this way too many times... The thing I really see as problematic with LE is the topic of revocation. Yes, revocation is broken, but the only working mechanism, OCSP stapling, was brought to the graveyard (aka made optional by the CA/Browser Forum) with the argument of data privacy issues under the normal OCSP umbrella... So we're back to CRLs/proprietary browser revocation mechanisms such as CRLSets (https://www.grc.com/revocation/crlsets.htm#:~:text=What%20is...) combined with CT logs as a reactive measure, which simply don't work in practice/are too slow (e.g. remember the Fina CA/Cloudflare incident and how long it went unnoticed).
I have the feeling the driver for LE was cost rather than the data privacy arguments brought up.
I can think of a few other ways that client certificates could work, but they have problems too:
1. Use DANE to verify the client certificate. But that requires DNSSEC, which isn't widely used. It would probably require new implementations of the handshake to check the client cert, and would add latency since the server has to do a DNS lookup to verify the client's cert.
2. When the server receives a request, it makes an HTTPS request to a well-known endpoint on the domain in the client cert's subject that serves a CA certificate, then checks that the client cert is signed by that CA. The client generates its client cert with that CA (or even uses the same self-signed cert for both). This way the authenticity of the client CA is verified using the web PKI cert. But the implementation is kind of complicated, and it has an even worse latency problem than 1.
3. The server has an endpoint where a client can request a client certificate from that server for a domain, probably with a fairly short expiration, using a CSR or equivalent. The server then responds by making an HTTPS POST to a well-known endpoint on the requested domain containing a certificate signed by the server's own CA. But for that to work, the registration request needs to be unauthenticated, so it could be vulnerable to DoS attacks. It also requires state on the client side to connect the secret key with the final cert (unless the server generated a new secret key for the client, which probably isn't ideal). And the client should probably cache the cert until it expires.
And AFAIK, all of these would require changes to how XMPP and other federated protocols work.
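To make option 2 concrete, here's a rough sketch with the Python `cryptography` package. The well-known HTTPS fetch is stubbed out (the endpoint path is made up); only the "is this client cert signed by the published CA" check is shown:

```python
# Sketch of option 2: treat whatever CA the peer publishes at a well-known
# HTTPS endpoint as the issuer for its client cert. Requires the third-party
# 'cryptography' package; all names and endpoints are illustrative.
import datetime
from cryptography import x509
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.x509.oid import NameOID

def _name(cn):
    return x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, cn)])

def _build(subject_cn, issuer_cn, key, signer_key):
    now = datetime.datetime.now(datetime.timezone.utc)
    return (x509.CertificateBuilder()
            .subject_name(_name(subject_cn)).issuer_name(_name(issuer_cn))
            .public_key(key.public_key())
            .serial_number(x509.random_serial_number())
            .not_valid_before(now)
            .not_valid_after(now + datetime.timedelta(days=1))
            .sign(signer_key, hashes.SHA256()))

def signed_by(leaf, ca):
    # Check the leaf's signature against the published CA's public key.
    try:
        ca.public_key().verify(leaf.signature, leaf.tbs_certificate_bytes,
                               ec.ECDSA(leaf.signature_hash_algorithm))
        return True
    except Exception:
        return False

ca_key = ec.generate_private_key(ec.SECP256R1())
ca = _build("ca.example2.com", "ca.example2.com", ca_key, ca_key)
leaf_key = ec.generate_private_key(ec.SECP256R1())
leaf = _build("xmpp.example2.com", "ca.example2.com", leaf_key, ca_key)
rogue = _build("xmpp.example2.com", "ca.example2.com", leaf_key,
               ec.generate_private_key(ec.SECP256R1()))

# In the real flow, 'ca' would be fetched over web-PKI-verified HTTPS from
# something like a /.well-known/ URL on the claimed domain.
print(signed_by(leaf, ca))    # True
print(signed_by(rogue, ca))   # False
```

Expiry, key usage, and caching of the fetched CA are deliberately omitted; the latency cost of the extra HTTPS round trip is exactly the drawback noted above.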
Of these, (1) and (2) are already implemented in XMPP.
(1) just isn't that widely deployed due to low DNSSEC adoption and setup complexity, but there is a push to get server operators to use it if they can.
(2) is defined in RFC 7711: https://www.rfc-editor.org/rfc/rfc7711 however it has more latency and complexity compared to just using a valid certificate directly in the XMPP connection's TLS handshake. Its main use is for XMPP hosting providers that don't have access to a domain's HTTPS.
The second one doesn't seem excessively complicated and the latency could be mitigated by caching the CA for a reasonable period of time.
But if you're going to modify the protocol anyway then why not just put it in the protocol that a "server" certificate is to be trusted even if the peer server is initiating rather than accepting the connection? That's effectively what you would be doing by trusting the "server" certificate to authenticate the chain of trust for a "client" certificate anyway.
"This change is prompted by changes to Google Chrome’s root program requirements, which impose a June 2026 deadline to split TLS Client and Server Authentication into separate PKIs. Many uses of client authentication are better served by a private certificate authority, and so Let’s Encrypt is discontinuing support for TLS Client Authentication ahead of this deadline."
The problem here is that when alice@chat.example.com and bob@xmpp.example2.com talk to each other, chat.example.com asks "Are you xmpp.example2.com?" and xmpp.example2.com asks "Are you chat.example.com?"
If you strictly require the side that opens the TCP connection to only use client certs and require the side that gets the TCP connection to only use server certs, then workflows where both sides validate each other become impossible with a single connection.
You could have each server open a TCP connection to the other, but then you have a single conversation spread across multiple connections. It gets messy fast, especially if you try to scale beyond a single server -- the side that initiates the first outgoing connection has to receive the second incoming connection, so you have to somehow get your load balancer to match the second connection with the first and route it to the same box.
Then at the protocol level, you'd essentially have each connection's server send a random-number challenge to the client saying "I can't authenticate clients because they don't have certs, so please echo this back on the other connection, where you're the server and I can authenticate you." The complexity and subtlety of this coordination dance seems like you're just asking for security issues.
If I was implementing XMPP I would be very tempted to say, "Don't be strict about client vs. server certs, let a client use a server cert to demonstrate ownership of a domain -- even if it's forbidden by RFC and even if we have to patch our TLS library to do it."
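A sketch of that relaxed rule: authenticate the initiating peer by matching the domain it claims against the SAN names in whatever cert it presents, exactly as you would for a server, and ignore the EKU entirely. The wildcard handling here is deliberately crude:

```python
# Minimal sketch of "let a client use a server cert": match the domain the
# initiating peer claims against the cert's SAN dNSNames, server-style.
# Hypothetical helper; real code would take the names from the parsed cert.
from fnmatch import fnmatch

def claimed_domain_matches(claimed: str, san_dns_names: list[str]) -> bool:
    for san in san_dns_names:
        if san.startswith("*."):
            # Crude wildcard handling: left-most label only, same label count
            if fnmatch(claimed, san) and claimed.count(".") == san.count("."):
                return True
        elif claimed.lower() == san.lower():
            return True
    return False

print(claimed_domain_matches("chat.example.com", ["chat.example.com"]))  # True
print(claimed_domain_matches("evil.example.net", ["chat.example.com"]))  # False
print(claimed_domain_matches("a.example.com", ["*.example.com"]))        # True
```

Whether the TLS stack even hands you the EKU decision is library-dependent, which is where the "patch our TLS library" part comes in.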
Code can just ignore the EKU. Especially if the ecosystem consists of things that are already using certificates in odd ways, as it shouldn't be making outgoing connections without it in the first place.
Client authentication with publicly-trusted certificates (i.e. chaining to roots in one of the major 4 or 5 trust-store programs) is bad. It doesn't actually authenticate anything at all, and never has.
No-one that uses it is authenticating anything more than the other party has an internet connection and the ability, perhaps, to read.
No part of the Subject DN or SAN is checked. It's just that it's 'easy' to rely on an existing trust-store rather than implement something secure using private PKI.
Some providers who 'require' public TLS certs for mTLS even specify specific products and CAs (OV, EV from specific CAs) not realising that both the CAs and the roots are going to rotate more frequently in future.
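Python's `ssl` module illustrates this nicely: on the server side, CERT_REQUIRED only validates the chain, and there is no built-in name check at all; any SAN or Subject inspection is on you after the handshake.

```python
# Server-side mTLS in Python's stdlib ssl module: CERT_REQUIRED demands *a*
# client cert that chains to a trusted root, and nothing more.
import ssl

ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.verify_mode = ssl.CERT_REQUIRED   # demand a client cert, verify its chain

# check_hostname is a client-side concept; a server context does no name
# check at all:
print(ctx.check_hostname)             # False

# Any identity check must be done manually after the handshake, e.g.:
#   cert = conn.getpeercert()
#   names = [v for (t, v) in cert.get("subjectAltName", ()) if t == "DNS"]
```

So with a public trust store, "mTLS succeeded" really does mean only "the peer holds some publicly-trusted cert", unless the application adds its own identity check.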
A client cert can be stored, so it provides at least a little bit of identification certainty. It's very hard to steal or impersonate a specific client cert, so the site has a high likelihood of knowing you're the same person you were when you connected before (even though the initial connection may very well not have ID'd the correct person!). That has value.
But it also doesn't involve any particular trust in the CA either. Lets Encrypt has nothing to offer here so there's no reason for them to try to make promises.
I feel like using web PKI for client authentication doesn't really make sense in the first place. How do you verify the common name/subject alt name actually matches when using a client cert?
Using web PKI for client certs seems like a recipe for disaster: servers would just verify the certs are signed, but since anyone can get one signed, anyone can spoof.
And this isn't just hypothetical. I remember xmlsec (a library for validating XML signatures, primarily SAML) used to use web PKI for signature validation in addition to the specified cert, which resulted in a lot of SAML bypasses where you could pass validation by signing the SAML response with any certificate from Let's Encrypt, including the attacker's.
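That failure mode, reduced to its essence (a toy HMAC stand-in, not real XML-DSig; all names invented): "valid signature under some trusted key" passes for the attacker too, unless you also check who signed.

```python
# Toy model of the SAML bypass: verifying "signed by anyone we trust"
# instead of "signed by the signer this assertion names". HMAC stands in
# for XML-DSig; keys stand in for publicly-trusted certs.
import hashlib
import hmac

TRUSTED_SIGNERS = {
    "idp.example.com": b"idp-key",
    "attacker.example.net": b"atk-key",  # also has a publicly-trusted cert!
}

def sign(payload: bytes, key: bytes) -> bytes:
    return hmac.new(key, payload, hashlib.sha256).digest()

def broken_verify(payload, sig):
    # BUG: accepts any trusted signer, not the one the assertion names
    return any(hmac.compare_digest(sign(payload, k), sig)
               for k in TRUSTED_SIGNERS.values())

def correct_verify(payload, sig, expected_signer):
    key = TRUSTED_SIGNERS[expected_signer]
    return hmac.compare_digest(sign(payload, key), sig)

msg = b'<Assertion subject="admin"/>'
forged = sign(msg, TRUSTED_SIGNERS["attacker.example.net"])
print(broken_verify(msg, forged))                      # True  (bypass!)
print(correct_verify(msg, forged, "idp.example.com"))  # False
```

The fix is the same in both the toy and the real bug: pin the expected signer, don't consult the trust store alone.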
You are correct, and the answer is - no-one using publicly-trusted TLS certs for client authentication is actually doing any authentication. At best, they're verifying the other party has an internet connection and perhaps the ability to read.
It was only ever used because other options are harder to implement.
Too late for an edit: I read a bit more about how XMPP works. I guess the cert is not really about network access control or authenticating the connection, but about authenticating that the data is coming from the right server.
Is there any reason why things gravitate towards being web-centric, especially Google-centric?
Seeing that Google's browser policies triggered the LE change, and that most CAs really just focus on what websites need rather than on non-web services, isn't helpful. Browsers these days are terribly inefficient (I mean, come on, 1GB of RAM for 3 tabs of Firefox while still buffering?!), yet XMPP is significantly more lightweight and more featureful than, say, Discord.
Google dominates the space because they have an active, robust trust-store program that they manage well. Apple does the same. Mozilla and Microsoft too (though to a lesser extent).
If any ecosystem - such as XMPP - wishes to, they could start their own root-program, but many simply copy what Chrome or Mozilla do and then are surprised when things change.
Yes, definitely. Prosody supports DANE, but DNSSEC deployment continues to be an issue when talking about the public XMPP network at large. Ironically the .im TLD our own site is on still doesn't support it at all.
I really fail to understand or sympathize with Let's Encrypt limiting their certs so. What is gained by slamming the door on other applications than servers being able to get certs?
In this case I do think it makes sense for servers to accept certs even as marked by servers, since it's for a s2s use case. But this just feels like such an unnecessary clamping down. To have made certs finally plentiful, & available for use... Then to take that away? Bother!
Trust chains. Some implementations would accept an LE certificate for foo.com as a valid login for foo.com or something like that, because they treated all trusted certs the same, whether issued by the service being authenticated to, or some other CA.
It might be possible to relay communications between two servers and have one of them act as a client without knowing. Handshake verification prevents that in TLS, but there could be similar attacks.
jammcq|20 days ago
MattJ100|20 days ago
agwa|20 days ago
[1] https://www.mozilla.org/en-US/about/governance/policies/secu...
MattJ100|20 days ago
account42|20 days ago
nilslindemann|20 days ago
direwolf20|19 days ago
tialaramex|19 days ago
morpheuskafka|19 days ago
mmsc|20 days ago
RobotToaster|20 days ago
ameliaquining|20 days ago
duskwuff|20 days ago
https://cabforum.org/2025/06/11/minutes-of-the-f2f-65-meetin...
mhurron|20 days ago
"Let's Encrypt is just used for, like, webservers, right? Why support this other stuff webservers never use?"
Which does appear to be the thinking, though they blame Google, which also seems to have taken the 'webservers in general don't do this, it's not important' line - https://letsencrypt.org/2025/05/14/ending-tls-client-authent...
pseudalopex|20 days ago
[1] https://letsencrypt.org/2025/05/14/ending-tls-client-authent...
gumarn_y|19 days ago
thayne|20 days ago
MattJ100|20 days ago
zrm|20 days ago
direwolf20|19 days ago
tkel|20 days ago
[1] https://snikket.org/service/quickstart/
[2] https://github.com/snikket-im/snikket-server/blob/master/ans...
syntheticnature|19 days ago
everfrustrated|20 days ago
TL;DR blame Google
bawolff|20 days ago
csense|19 days ago
benjojo12|20 days ago
Avamander|20 days ago
nickf|20 days ago
nightpool|20 days ago
ajross|20 days ago
bawolff|20 days ago
xg15|20 days ago
This seems exactly like a reason to use client certs with public CAs.
You (as in, the server) cannot verify this at all, but a public CA could.
nickf|20 days ago
account42|19 days ago
The CA verifies the subject just like any server certificate, which is what LE has already been doing.
The server verifies the subject by checking that the name in the certificate matches the name the client is claiming to be.
bawolff|20 days ago
So i guess that could make sense.
abnormalitydev|20 days ago
xg15|20 days ago
Yes, the reason is called "Chrome" and "90% market share"...
nickf|19 days ago
PunchyHamster|20 days ago
forty|20 days ago
denus|20 days ago
MattJ100|20 days ago
greatgib|19 days ago
jauntywundrkind|20 days ago
sam_lowry_|19 days ago
They do what Google says.
rnhmjoj|19 days ago
direwolf20|19 days ago