> In a typical HTTP/2 server implementation, the server will still have to do significant amounts of work for canceled requests, such as allocating new stream data structures, parsing the query and doing header decompression, and mapping the URL to a resource. For reverse proxy implementations, the request may be proxied to the backend server before the RST_STREAM frame is processed. The client on the other hand paid almost no costs for sending the requests. This creates an exploitable cost asymmetry between the server and the client.
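The cost asymmetry is easy to see at the wire level. Below is a minimal, illustrative sketch (not a working client) of the frame pattern Rapid Reset relies on: a HEADERS frame immediately followed by RST_STREAM on each successive odd (client-initiated) stream ID. The `frame` helper and the single-byte HPACK block are my own illustrative assumptions; the frame layout and type codes follow RFC 9113.

```python
import struct

# HTTP/2 frame type codes and constants (RFC 9113)
HEADERS, RST_STREAM = 0x1, 0x3
END_HEADERS = 0x4          # HEADERS flag: header block is complete
CANCEL = 0x8               # RST_STREAM error code

def frame(ftype, flags, stream_id, payload):
    """9-byte HTTP/2 frame header followed by the payload."""
    header = struct.pack(">I", len(payload))[1:]                 # 24-bit length
    header += struct.pack(">BBI", ftype, flags, stream_id & 0x7FFFFFFF)
    return header + payload

def rapid_reset_burst(n, header_block=b"\x82"):   # \x82 = HPACK ':method: GET'
    """The attack pattern: HEADERS immediately followed by RST_STREAM,
    repeated on consecutive client-initiated (odd) stream IDs."""
    out = b""
    for i in range(n):
        stream_id = 1 + 2 * i
        out += frame(HEADERS, END_HEADERS, stream_id, header_block)
        out += frame(RST_STREAM, 0, stream_id, struct.pack(">I", CANCEL))
    return out

burst = rapid_reset_burst(3)
# each HEADERS frame is 9+1 bytes here, each RST_STREAM is 9+4 bytes
assert len(burst) == 3 * (10 + 13)
```

The point is how cheap each request/cancel pair is for the client: 23 bytes in this toy encoding, versus the allocation, HPACK decoding, and routing work the server has already committed to by the time it sees the reset.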
I'm surprised this wasn't foreseen when HTTP/2 was designed. Amplification attacks were already well known from other protocols.
I'm similarly surprised it took this long for this attack to surface, but maybe HTTP/2 wasn't widely enough deployed to be a worthwhile target until recently?
I was surprised too, but if you look at the timelines then RST_STREAM seems to have been present in early versions of SPDY, and SPDY seems mostly to have been designed around 2009. Attacks like Slowloris were coming out at about the same time, but they weren't well-known.
On the other hand, SYN cookies were introduced in 1996, so there's definitely some historic precedent for attacks in the (victim pays Y, attacker pays X, X<<Y) class.
Another reason to keep foundational protocols small. HTTP/2 has been around for more than a decade (including SPDY), and this is the first time this attack type has surfaced. I wonder what surprises HTTP/3 and QUIC hide...
“Cancelation” should really be added to the “hard CS problems” list.
Like the others on that list (off by one, cache invalidation etc) it isn’t actually hard-hard, but rather underestimated and overlooked.
I think if we took half the time we spend on creation (constructors, initialization) and spent that design time thinking about destruction, cleanup, teardown, cancelation, etc., we'd have a lot fewer bugs, in particular resource-exhaustion bugs.
I really like Rust's async for its ability to immediately cancel Futures, the entire call stack together, at any await point, without needing cooperation from individual calls.
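Python's asyncio offers a (cooperative, and therefore weaker) version of the same idea for comparison: cancellation is delivered as an exception at an await point and unwinds the call chain from there, without the inner function opting in. A small sketch:

```python
import asyncio

async def inner():
    # No explicit cancellation handling here; cancellation arrives
    # at this await point as a CancelledError.
    await asyncio.sleep(10)

async def outer(log):
    try:
        await inner()          # cancellation propagates up the call chain
    except asyncio.CancelledError:
        log.append("cancelled at await point")
        raise                  # re-raise so the task is marked cancelled

async def main():
    log = []
    task = asyncio.create_task(outer(log))
    await asyncio.sleep(0)     # let the task run until it suspends
    task.cancel()
    try:
        await task
    except asyncio.CancelledError:
        pass
    return log

print(asyncio.run(main()))     # ['cancelled at await point']
```

The key difference from Rust is that Rust can simply drop the future, so cancellation cannot be swallowed; in Python a `try`/`except` along the chain can (accidentally or deliberately) absorb the `CancelledError`.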
I would like to remind everyone that Google invented HTTP/2.
Now they are telling us a yarn about how they are heroically saving us from the problem they created, but without mentioning the part that they created it.
The nerve of these tech companies! Microsoft has been doing this for decades, too.
It depends on what you think a "request flood" attack is.
With HTTP/1.1 you could send one request per RTT [0]. With HTTP/2 multiplexing you could send 100 requests per RTT. With this attack you can send an indefinite number of requests per RTT.
I'd hope the diagram in this article (disclaimer: I'm a co-author) shows the difference, but maybe you mean yet another form of attack than the above?
[0] Modulo HTTP/1.1 pipelining which can cut out one RTT component, but basically no real clients use HTTP/1.1 pipelining, so its use would be a very crisp signal that it's abusive traffic.
The new technique avoids the per-client cap on the number of requests per second the attacker can get the server to process. By sending both requests and stream resets within a single connection, the attacker can issue more requests per connection/client than was previously possible, making the attack cheaper to mount and/or harder to stop.
It doesn't apply to HTTP/3 because the receiver has to extend the stream concurrency maximum before the sender can open a new stream. This attack works because the sender doesn't have to wait for that after sending a reset in HTTP/2.
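A toy model of QUIC's cumulative stream limit (RFC 9000 §4.6) makes that difference concrete: resetting a stream returns no credit to the sender; only the receiver's MAX_STREAMS frame raises the cap. The class and method names below are mine, purely for illustration:

```python
class StreamCredit:
    """Toy model of QUIC's cumulative MAX_STREAMS limit (RFC 9000).
    The sender may open streams only while opened < max_streams; resetting
    a stream does NOT restore credit -- only the receiver can raise the cap."""
    def __init__(self, initial_max):
        self.max_streams = initial_max
        self.opened = 0

    def try_open(self):
        if self.opened >= self.max_streams:
            return False           # must wait for a MAX_STREAMS frame
        self.opened += 1
        return True

    def receiver_extends(self, new_max):
        # MAX_STREAMS is cumulative and only ever increases
        self.max_streams = max(self.max_streams, new_max)

credit = StreamCredit(initial_max=2)
assert credit.try_open() and credit.try_open()
assert not credit.try_open()       # resetting streams would change nothing here
credit.receiver_extends(3)
assert credit.try_open()           # only now can a third stream be opened
```

In HTTP/2, by contrast, a reset stream immediately stops counting against SETTINGS_MAX_CONCURRENT_STREAMS, which is exactly the loophole Rapid Reset exploits.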
HTTP2 is not TCP on TCP (that's a very basic recipe for a complete disaster, the moment any congestion kicks in); it's mostly just multiplexing concurrent HTTP requests over a single TCP connection.
HTTP3 is using UDP for different reasons, although it effectively re-implements TCP from the application POV (it's still HTTP under the hood after all). Basically with plain old TCP your bandwidth is limited by latency, because every transmitted frame has to be acknowledged - sequentially. Some industries/applications (like transferring raw video files over the pond) have been using specialized, UDP-based transfer protocols for a while for this reason. You only need to re-transmit those frames you know didn't make it, in any order it suits you.
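The "bandwidth limited by latency" point is the classic window/RTT bound: with a sequentially-acknowledged window of W bytes in flight, throughput cannot exceed W/RTT no matter how fat the pipe is. A quick back-of-the-envelope check (the function is mine, for illustration):

```python
def max_throughput_bps(window_bytes, rtt_seconds):
    """A sequentially-acknowledged window of W bytes per RTT caps
    throughput at W / RTT, regardless of link capacity."""
    return window_bytes * 8 / rtt_seconds

# A 64 KiB TCP window (no window scaling) across a 100 ms transatlantic RTT:
print(max_throughput_bps(64 * 1024, 0.100) / 1e6)  # ~5.2 Mbit/s
```

This is why long-fat-network transfers need either large scaled windows or protocols that acknowledge out of order, as the comment describes.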
And the throttling seems simple enough: give each IP address an initial allowance of A requests, then increase the allowance every T units of time, up to a maximum of B. Perhaps A=B=10, T=150ms.
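The scheme described above is essentially a token bucket. A sketch with an explicit clock parameter to keep it testable; the parameter names A, B, T come from the comment, everything else is illustrative:

```python
class Allowance:
    """Per-IP request allowance: start with A tokens, replenish one
    every T seconds, cap at B. Time is passed in explicitly so the
    behaviour can be tested without sleeping."""
    def __init__(self, a=10, b=10, t=0.150, now=0.0):
        self.tokens, self.b, self.t = float(a), b, t
        self.last = now

    def allow(self, now):
        # replenish based on elapsed time, then spend one token if available
        self.tokens = min(self.b, self.tokens + (now - self.last) / self.t)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

bucket = Allowance(a=10, b=10, t=0.150, now=0.0)
burst = sum(bucket.allow(0.0) for _ in range(20))
assert burst == 10            # the initial allowance absorbs 10 requests
assert not bucket.allow(0.1)  # 100 ms later: no full token replenished yet
assert bucket.allow(0.3)      # ~two intervals later, credit is back
```

One caveat the thread touches on elsewhere: keying this on IP address alone penalizes users behind shared NATs, so real deployments usually combine it with other signals.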
dang | 2 years ago:
The largest DDoS attack to date, peaking above 398M rps - https://news.ycombinator.com/item?id=37831062
HTTP/2 Zero-Day Vulnerability Results in Record-Breaking DDoS Attacks - https://news.ycombinator.com/item?id=37830998
vdfs | 2 years ago:
https://www.nginx.com/blog/http-2-rapid-reset-attack-impacti...
kristopolous | 2 years ago:
As with most things like this, probably many hundreds of unimportant people saw it and tried it out.
Trying to do it on Google, with a serious effort, that's the wacky part.
jeroenhd | 2 years ago:
Luckily, HTTP/1.1 still works. You can always enable it in your browser configuration and in your web servers if you don't like the protocol.
ta1243 | 2 years ago:
https://news.ycombinator.com/item?id=37830998
https://news.ycombinator.com/item?id=37830987
qingcharles | 2 years ago:
https://github.com/dotnet/runtime/issues/93303
unethical_ban | 2 years ago:
It seems HTTP2 is TCP on TCP for HTTP messages specifically. That must be why HTTP3 is built on a UDP-based protocol.
o11c | 2 years ago:
You can't simply blacklist weird connections entirely, since legitimate clients can use those features.
1vuio0pswjnm7 | 2 years ago:
Don't forget about nghttp2.