
Robust and efficient quantum-safe HTTPS

93 points| tptacek | 2 days ago |security.googleblog.com

20 comments


utopiah|10 hours ago

FWIW if you want to tinker on the topic I recommend OQS https://github.com/open-quantum-safe/ including Chromium, Apache, nginx, curl, etc. It's quite fun to play with.

boutell|9 hours ago

The pivot to MTC is a big change in the infrastructure of https. I wish other browsers were at least mentioned in this blog post. I'm curious about the future of letsencrypt as well.

utopiah|7 hours ago

Discussed a few weeks ago on https://community.letsencrypt.org/t/post-quantum-crypto-road... — specifically: "The path we're more interested in is Merkle Tree Certificates, currently in design at the PLANTS working group at IETF. Chrome has indicated that they anticipate this to be their preferred approach to PQC. We're following that very closely, and are likely to deploy MTCs if it looks like that design is going to be supported widely." according to Matthew McPherrin, Let's Encrypt staff

Veserv|3 hours ago

While I appreciate more efficient and compact representations, I fail to see why this is particularly necessary. This article [1] on the same topic indicates a naive PQ chain is only ~40x the size of a current 4 KB chain. That means it is just ~160 KB.

If you have the legal minimum to be considered broadband in the US, you need ~100 Mbps, so that would add ~12 ms.

If you can stream one 4K video, you need ~20-40 Mbps, so that would add ~30-60 ms.

If you can stream one 1080p video, you need ~3-6 Mbps, so that would add ~200-400 ms.

Even on just a 1 Mbps connection, just barely enough to stream a single 480p video, that would only add ~1 second.
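The back-of-envelope numbers above follow from transfer time = size in bits / bandwidth. A quick sketch (using the quoted bandwidth figures, ignoring round trips and protocol overhead):

```python
# Rough added transfer time for an extra ~160 KB certificate chain
# at the bandwidths discussed above (illustrative, not measured).
EXTRA_BYTES = 160 * 1000  # naive PQ chain: ~40x a 4 KB chain

def extra_ms(mbps: float, size_bytes: int = EXTRA_BYTES) -> float:
    """Milliseconds to push size_bytes through an mbps link."""
    return size_bytes * 8 / (mbps * 1_000_000) * 1000

for label, mbps in [("100 Mbps (US broadband floor)", 100),
                    ("20 Mbps (4K streaming)", 20),
                    ("3 Mbps (1080p streaming)", 3),
                    ("1 Mbps (480p streaming)", 1)]:
    print(f"{label}: +{extra_ms(mbps):.0f} ms")
```

Which lands on roughly +13 ms, +64 ms, +427 ms, and +1280 ms respectively, in line with the ranges above.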

And I doubt the weight of most pages is lower than 160 KB. Many of them are probably dramatically higher, so the total effect of an extra 160 KB is just a few percent.

If there is a problem, it seems like it would be with poorly designed protocols and infrastructure which should be fixed as well instead of papering them over.

[1] https://arstechnica.com/security/2026/02/google-is-using-cle...

bwesterb|1 hour ago

The key will be 40x larger, but it's not that bad for the certs: it'll be about 15 kB extra. Whether that's bad depends on your use case. For video it's fine, but not all browsing is video. At Cloudflare, half of the QUIC connections we see transfer less than 8 kB total from server to client, and on average 3-4 kB of that is already certificates today. So the increase would probably be quite noticeable. https://blog.cloudflare.com/pq-2025/#do-we-really-care-about...

agwa|3 hours ago

At the beginning of a TCP connection, which is when the certificate chain is sent, you can't send more data than the initial congestion window without waiting for it to be acknowledged. 160KB is far beyond the initial congestion window, so on a high-latency connection the additional time would be higher than the numbers you calculated. Of course, if the web page is very bloated the user might not notice, but not all pages are bloated.
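To see why the congestion window dominates here rather than raw bandwidth, a minimal slow-start sketch (assuming the RFC 6928 default of a 10-segment initial window with 1460-byte segments, and window doubling each RTT):

```python
# Sketch: RTTs needed to deliver a payload when TCP starts from a
# ~10-segment initial congestion window and doubles it each round trip.
def rtts_to_send(payload_bytes: int, init_segments: int = 10, mss: int = 1460) -> int:
    sent = 0
    cwnd = init_segments * mss  # ~14.6 KB initial window
    rtts = 0
    while sent < payload_bytes:
        sent += cwnd
        cwnd *= 2  # slow start doubles the window every RTT
        rtts += 1
    return rtts

print(rtts_to_send(4_000))    # today's ~4 KB chain: fits in the first flight
print(rtts_to_send(160_000))  # ~160 KB naive PQ chain: several extra RTTs
```

Under these assumptions a 4 KB chain goes out in the first flight, while 160 KB needs four round trips before the handshake can even finish, so on a high-latency link the cost is a multiple of the RTT, not of the link speed.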

The increased certificate size would also be painful for Certificate Transparency logs, which are required to store certificates and transmit them to anyone who asks. MTC doesn't require logs to store the subject public key.

bastawhiz|2 hours ago

Let's say you visit a site that doesn't use H2. That's now nearly a megabyte (up from ~24 kB) of certificate data across the six connections that HTTP/1.1 establishes.

You're on LTE? You have high packet loss over a wireless connection? The initial TCP window size is ~16 kB in a lot of cases, so now you need multiple round trips over a high-latency connection just to make the connection secure. You'll probably need 3-4 round trips on a stable connection just for the certificate. On a bad connection? Good luck.
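The numbers in this comment can be sketched the same way (a rough model assuming six HTTP/1.1 connections each carrying a ~160 KB chain, a ~16 kB initial window, and window doubling each round trip):

```python
# Sketch of the HTTP/1.1 scenario above: six parallel connections, each
# resending a ~160 KB chain, starting from a ~16 KB initial window.
CHAIN_BYTES = 160_000
CONNECTIONS = 6

total = CHAIN_BYTES * CONNECTIONS
print(f"total certificate bytes: ~{total / 1000:.0f} KB")  # nearly a megabyte

def chain_rtts(payload: int, init_window: int = 16_000) -> int:
    """Round trips to deliver the chain, doubling the window per RTT."""
    sent, window, rtts = 0, init_window, 0
    while sent < payload:
        sent += window
        window *= 2
        rtts += 1
    return rtts

print(chain_rtts(CHAIN_BYTES))  # round trips per connection for the chain
```

Under these assumptions each connection spends four round trips on the certificate alone, consistent with the "3-4 round trips" estimate, and that is before any packet loss resets the window.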