Veserv | 6 hours ago
If you have the legal minimum to be considered broadband in the US, you need ~100 Mbps, so that would add ~12 ms.
If you can stream one 4K video, you need ~20-40 Mbps, so that would add ~30-60 ms.
If you can stream one 1080p video, you need ~3-6 Mbps, so that would add ~200-400 ms.
Even on a 1 Mbps connection, just barely enough to stream a single 480p video, that would only add ~1 second.
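The figures above are just serialization time for the extra bytes. A minimal sketch of that arithmetic (ignoring latency, slow start, and protocol overhead; the link-rate labels are the ones used above):

```python
# Added transfer time for ~160 KB of extra certificate data at
# various link speeds. Pure serialization time: bytes / bandwidth.

EXTRA_BYTES = 160 * 1000  # ~160 KB of additional certificate chain

def added_ms(mbps: float) -> float:
    """Extra serialization time in milliseconds at a given link rate."""
    bits = EXTRA_BYTES * 8
    return bits / (mbps * 1_000_000) * 1000

for label, mbps in [("US broadband floor", 100), ("4K stream", 20),
                    ("1080p stream", 3), ("480p stream", 1)]:
    print(f"{label:>18}: {added_ms(mbps):7.1f} ms at {mbps} Mbps")
```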
And I doubt the weight of most pages is lower than 160 KB. Many are probably dramatically higher, so the total effect of an extra 160 KB is just a few percent.
If there is a problem, it seems like it would be with poorly designed protocols and infrastructure which should be fixed as well instead of papering them over.
[1] https://arstechnica.com/security/2026/02/google-is-using-cle...
bwesterb | 4 hours ago
Veserv | 3 hours ago
But even that is beside my point. The impact of making certificates larger should be, largely, just the cost of making them larger, which, on average, would not actually be that significant an impact. That is not the real problem. The problem is that there is so much broken crap everywhere in networks and network stacks that would either break or dramatically balloon what should otherwise be manageable costs.
Everybody just wants to paper over that by blaming the larger certificates when what is actually happening is that the larger certificates are revealing the rot. That is not to say that the proposal which reduces the size of the certificates is bad, I think it is good to do so, but fixing the proximal cause so you can continue to ignore the root cause is a recipe that got us into this ossified, brittle networking mess.
agwa | 5 hours ago
The increased certificate size would also be painful for Certificate Transparency logs, which are required to store certificates and transmit them to anyone who asks. MTC doesn't require logs to store the subject public key.
Veserv | 5 hours ago
You can already configure your initial congestion window, and if you are connecting to a system expecting the use of PQ encryption, you should set your initial congestion window to be large enough for the certificate; doing otherwise is the height of incompetence and should be fixed.
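A back-of-the-envelope sketch of what "large enough" means here: how many TCP segments the initial window would need to cover a ~160 KB chain in one flight. The 1460-byte MSS (typical Ethernet MTU minus TCP/IP headers) is an assumption:

```python
import math

# Segments needed so a ~160 KB certificate chain fits in the first
# congestion window, i.e. goes out in a single round trip.

CHAIN_BYTES = 160 * 1000
MSS = 1460  # assumed maximum segment size

segments = math.ceil(CHAIN_BYTES / MSS)
print(f"initcwnd of ~{segments} segments covers the chain in one flight")
```

On Linux, a value like this could in principle be applied per-route with `ip route change ... initcwnd <n>`, though whether that is advisable for a given deployment is a separate question.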
You could also use better protocols like QUIC, which has an independently flow-controlled crypto stream, and you can avoid triggering amplification prevention by having the client pre-send adequate amounts of data.
And I fail to see how going from 4 KB of certificate chain to 160 KB of certificate chain poses a serious storage or transmission problem. You can fit literal millions of such chains into RAM on reasonable servers. You can fit literal billions into storage on reasonable servers. Sure, if you exactly right-sized your CT servers you might need to upgrade them, but the absolute amount of resources you need for this is minuscule.
bastawhiz | 5 hours ago
You're on LTE? You have high packet loss over a wireless connection? The initial TCP window size is ~16 KB in a lot of cases, so now you need multiple round trips over a high-latency connection just to make the connection secure. You'll probably need 3-4 round trips on a stable connection just for the certificate. On a bad connection? Good luck.
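The 3-4 round-trip estimate can be sanity-checked with a minimal slow-start sketch: start from a 16 KB initial window, double it each round trip (classic slow start, no loss), and count round trips until ~160 KB has been delivered. This is a simplification; real stacks count segments rather than bytes, and ack clocking is messier:

```python
# Count round trips to deliver a payload under idealized slow start:
# the window doubles every RTT and there is no loss.

def rtts_to_deliver(total_kb: int, initial_window_kb: int = 16) -> int:
    delivered, window, rtts = 0, initial_window_kb, 0
    while delivered < total_kb:
        delivered += window  # one window's worth per round trip
        window *= 2          # slow-start doubling
        rtts += 1
    return rtts

print(rtts_to_deliver(160))  # 16 + 32 + 64 + 128 KB -> 4 round trips
```

With an initial window sized to the chain (as suggested elsewhere in the thread), the same payload fits in a single round trip.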
Veserv | 5 hours ago
Exactly, using a blanket default initial congestion window of 16 KB is stupid. Even ignoring that it was chosen when average bandwidth was many times lower, and thus should be increased anyway to something on the order of the average bandwidth-delay product (or replaced with a better congestion control algorithm), it is especially stupid when you are beginning a connection that has a known minimum data requirement before useful payload can be sent.
These things should be fixed as well instead of papering them over. Your system should work well regardless of the size of the certificate chain except for the fundamental overhead of having a larger chain.