raggi | 8 hours ago
The aggressive tone is no defense against practical problems such as the poor scalability of such a solution.
> You could also use better protocols like QUIC, which has an independently flow-controlled crypto stream, and you can avoid amplification attacks by pre-sending adequate amounts of data to stop amplification prevention from activating.
Not before key exchange it doesn't. There's no magic bullet here.
A refresher on the state of TFO and QUIC PMTU might be worthwhile here before jumping this far ahead.
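To make the "not before key exchange" point concrete, here is a minimal sketch (not from the thread) of QUIC's anti-amplification limit as specified in RFC 9000, Section 8: until the client's address is validated, a server may send at most three times the bytes it has received, regardless of what data anyone "pre-sends".

```python
# Sketch of QUIC's anti-amplification limit (RFC 9000, Section 8).
# The function name and structure are illustrative, not from any implementation.
AMPLIFICATION_FACTOR = 3

def server_send_budget(bytes_received: int, bytes_sent: int,
                       address_validated: bool) -> int:
    """Bytes the server may still send to this address right now."""
    if address_validated:
        return 2**62  # effectively unlimited once the address is validated
    return max(0, AMPLIFICATION_FACTOR * bytes_received - bytes_sent)

# A client Initial is padded to at least 1200 bytes, so the server's first
# flight to an unvalidated address is capped at roughly 3600 bytes --
# far too small for a large post-quantum certificate chain.
print(server_send_budget(1200, 0, False))  # -> 3600
```

Pre-sending more client data raises the budget proportionally, but the cap still binds before the handshake (and hence key exchange) completes.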
Veserv | 7 hours ago
> Not before key exchange it doesn't. There's no magic bullet here.
I was incorrect. Rereading the QUIC standard, I see that they do not flow control the CRYPTO packet number space/stream. I thought they did because it is so easy to do that I did it as an afterthought. Truly another example of fundamental design errors introducing accidental complexity that should be fixed instead of papered over.
ekr____ | 7 hours ago
A basic source of concern here is whether it's safe for the server to use an initial congestion window large enough to handle the entire PQ certificate chain without an unacceptable risk of congestion collapse or other negative consequences. This is a fairly complicated question of network dynamics and the interaction of a bunch of different machines potentially sharing the same network resources, and it is largely independent of the network protocol in use (QUIC versus TCP). It's possible that IW20 (or whatever) is fine, but it may well not be.
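Some back-of-the-envelope arithmetic for why the initial window matters here (the chain size and packet payload are illustrative assumptions, not figures from the thread): under slow start, the number of round trips needed to deliver a certificate chain depends directly on the initial window.

```python
# Rough slow-start model: how many RTTs to deliver `chain_bytes`,
# starting from an initial window of `iw_packets` full-size packets?
# All numbers are illustrative; real congestion control is more subtle.
def round_trips(chain_bytes: int, iw_packets: int, mss: int = 1200) -> int:
    cwnd = iw_packets * mss
    sent, rtts = 0, 0
    while sent < chain_bytes:
        sent += cwnd
        cwnd *= 2  # slow start roughly doubles the window each RTT
        rtts += 1
    return rtts

# An assumed ~15 KB post-quantum chain: IW10 needs an extra round trip
# that IW20 avoids -- which is exactly why the IW choice is contentious.
print(round_trips(15_000, 10))  # -> 2
print(round_trips(15_000, 20))  # -> 1
```

The safety question is the flip side: every flow starting at IW20 injects twice as many unacknowledged bytes into a possibly congested path on its first flight.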
There are two secondary issues: 1. Whether the certificate chain is consuming an unacceptable fraction of total bandwidth. I agree that this is less likely for many network flows, but as noted above, there are some flows where it is a large fraction of the total.
2. Potential additional latency introduced by packet loss and the resulting retransmission round trip. Every additional packet increases the chance of at least one of them being lost, and the handshake cannot complete until the entire certificate chain has arrived.
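The loss argument in point 2 is simple probability: if each packet is lost independently with probability p, the chance that at least one packet of an N-packet flight is lost is 1 - (1 - p)^N, which grows quickly with N. A minimal sketch (loss rate and chain size are assumed for illustration):

```python
# Probability that at least one of n independently-delivered packets is
# lost, given per-packet loss rate p. Independence is a simplification.
def p_any_loss(n_packets: int, p_loss: float) -> float:
    return 1 - (1 - p_loss) ** n_packets

# Assumed numbers: 1% loss; a small classical chain (~3 packets) versus
# a larger post-quantum chain (~13 packets of 1200 bytes for ~15 KB).
for n in (3, 13):
    print(n, round(p_any_loss(n, 0.01), 4))
```

Since any single loss stalls the handshake until retransmission, a chain that is four times as many packets is roughly four times as likely (at small p) to eat an extra round trip.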
It seems you disagree about the importance of these issues, which is an understandable position, but where you're losing me is that you seem to be attributing them to the design of the protocols we're using. Can you explain further how (for instance) QUIC could be designed differently in a way that would ameliorate these issues?