top | item 37830987

The novel HTTP/2 'Rapid Reset' DDoS attack

365 points | jsnell | 2 years ago | cloud.google.com

106 comments

[+] comice|2 years ago|reply
Nice to see that the haproxy people had spotted this kind of issue with http/2 and apparently mitigated it back in 2018: https://www.mail-archive.com/[email protected]/msg44134.h...
[+] jabart|2 years ago|reply
Nice, I was looking for this type of information for haproxy. Gives me a lot of confidence in their new QUIC feature.
[+] js2|2 years ago|reply
> In a typical HTTP/2 server implementation, the server will still have to do significant amounts of work for canceled requests, such as allocating new stream data structures, parsing the query and doing header decompression, and mapping the URL to a resource. For reverse proxy implementations, the request may be proxied to the backend server before the RST_STREAM frame is processed. The client on the other hand paid almost no costs for sending the requests. This creates an exploitable cost asymmetry between the server and the client.

I'm surprised this wasn't foreseen when HTTP/2 was designed. Amplification attacks were already well known from other protocols.

I'm similarly surprised it took this long for this attack to surface, but maybe HTTP/2 wasn't widely enough deployed to be a worthwhile target until recently?
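
For a sense of the asymmetry the quote describes: per RFC 7540, cancelling a stream costs the client a single 13-byte RST_STREAM frame (a sketch; frame layout is from the spec, everything else is illustrative):

```python
import struct

def rst_stream_frame(stream_id: int, error_code: int = 0x8) -> bytes:
    """Build an HTTP/2 RST_STREAM frame (RFC 7540, section 6.4).

    Frame header: 24-bit payload length, 8-bit type (0x3 is
    RST_STREAM), 8-bit flags, 31-bit stream identifier.
    Payload: a single 32-bit error code (0x8 = CANCEL).
    """
    length = 4                                       # payload is just the error code
    header = struct.pack(">I", length)[1:]           # 3-byte length field
    header += struct.pack(">BBI", 0x3, 0x0, stream_id & 0x7FFFFFFF)
    return header + struct.pack(">I", error_code)

frame = rst_stream_frame(stream_id=1)
print(len(frame))  # 13 bytes: cancelling costs the client almost nothing
```

The server, meanwhile, has already allocated the stream, decompressed headers, and possibly proxied the request by the time those 13 bytes arrive.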

[+] tptacek|2 years ago|reply
It's not really an amplification attack. It's just using TCP connections drastically more efficiently.
[+] gnfargbl|2 years ago|reply
I was surprised too, but if you look at the timelines then RST_STREAM seems to have been present in early versions of SPDY, and SPDY seems mostly to have been designed around 2009. Attacks like Slowloris were coming out at about the same time, but they weren't well-known.

On the other hand, SYN cookies were introduced in 1996, so there's definitely some historic precedent for attacks in the (victim pays Y, attacker pays X, X<<Y) class.

[+] kristopolous|2 years ago|reply
> I'm similarly surprised it took this long for this attack to surface

As with most things like this, probably many hundreds of unimportant people saw it and tried it out.

Trying to do it on Google, with a serious effort, that's the wacky part.

[+] the8472|2 years ago|reply
So we needed HTTP2 to deliver ads, trackers and bloated frontend frameworks faster. And now it delivers attacks faster too.
[+] jeroenhd|2 years ago|reply
HTTP/2 makes the browsing experience of high-latency connections a lot more tolerable. It also makes loading web pages in general faster.

Luckily, HTTP/1.1 still works. You can always enable it in your browser configuration and in your web servers if you don't like the protocol.

[+] shepherdjerred|2 years ago|reply
Are you suggesting that we didn't need HTTP2? What's the real alternative here?
[+] scrpl|2 years ago|reply
Another reason to keep foundational protocols small. HTTP/2 has been around for more than a decade (including SPDY), and this is the first time this attack type has surfaced. I wonder what surprises HTTP/3 and QUIC hide...
[+] cmeacham98|2 years ago|reply
DNS is a small protocol and is abused by DDoS actors worldwide for reflection attacks.
[+] liveoneggs|2 years ago|reply
QUIC didn't account for amplification attacks in its design and the people complaining about it were initially dismissed.
[+] klabb3|2 years ago|reply
“Cancelation” should really be added to the “hard CS problems” list.

Like the others on that list (off by one, cache invalidation etc) it isn’t actually hard-hard, but rather underestimated and overlooked.

I think if we took half the time we spend on creation, constructors, initialization, and spent that design time thinking about destruction, cleanup, teardown, cancelation etc, we’d have a lot fewer bugs, in particular resource exhaustion bugs.
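
One design pattern in that spirit is registering teardown at the moment of acquisition, so cancellation at any point unwinds everything acquired so far, in reverse order. A small sketch (the resource names are illustrative) using Python's ExitStack:

```python
from contextlib import ExitStack

# Hypothetical resources: each registers its own teardown as soon as
# it is created, so a cancellation mid-request still cleans up
# everything acquired so far, in reverse order.
def open_resource(name, log):
    log.append(f"open {name}")
    return lambda: log.append(f"close {name}")

log = []
try:
    with ExitStack() as stack:
        for name in ("socket", "stream", "parser"):
            close = open_resource(name, log)
            stack.callback(close)  # teardown registered immediately
        raise RuntimeError("cancelled mid-request")  # simulate cancellation
except RuntimeError:
    pass

print(log)
```

The point is that no code path, including the cancelled one, can skip cleanup, which is exactly the class of bug that resource-exhaustion attacks exploit.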

[+] pornel|2 years ago|reply
I really like Rust's async for its ability to immediately cancel Futures, the entire call stack together, at any await point, without needing cooperation from individual calls.
[+] jart|2 years ago|reply
I know that's true of C libraries. POSIX thread cancelation is one of those things where its mere existence pervades everything in its implications.
[+] fefe23|2 years ago|reply
I would like to remind everyone that Google invented HTTP/2.

Now they are spinning us a yarn about how they are heroically saving us from the problem, without mentioning that they created it in the first place.

The nerve of these tech companies! Microsoft has been doing this for decades, too.

[+] gsich|2 years ago|reply
They tried to solve problems that didn't exist.
[+] arisudesu|2 years ago|reply
Can anyone explain what's novel about this attack that isn't a plain old request flood?
[+] jsnell|2 years ago|reply
It depends on what you think a "request flood" attack is.

With HTTP/1.1 you could send one request per RTT [0]. With HTTP/2 multiplexing you could send 100 requests per RTT. With this attack you can send an indefinite number of requests per RTT.

I'd hope the diagram in this article (disclaimer: I'm a co-author) shows the difference, but maybe you mean yet another form of attack than the above?

[0] Modulo HTTP/1.1 pipelining which can cut out one RTT component, but basically no real clients use HTTP/1.1 pipelining, so its use would be a very crisp signal that it's abusive traffic.
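
Back-of-envelope numbers for the three cases above (the RTT, link speed, and per-request byte counts are assumptions for illustration, not measurements):

```python
rtt_s = 0.05                 # assumed round-trip time: 50 ms
link_bps = 1e9               # assumed attacker uplink: 1 Gbit/s
bytes_per_req = 63           # rough guess: ~50 B HEADERS + 13 B RST_STREAM

http1 = 1 / rtt_s                            # one request per RTT
http2 = 100 / rtt_s                          # default 100 concurrent streams per RTT
rapid_reset = link_bps / 8 / bytes_per_req   # bounded by bandwidth only

print(f"HTTP/1.1:    {http1:10.0f} req/s per connection")
print(f"HTTP/2:      {http2:10.0f} req/s per connection")
print(f"Rapid Reset: {rapid_reset:10.0f} req/s per connection")
```

With these assumed numbers that's roughly 20 vs 2,000 vs nearly 2,000,000 requests per second from a single connection, which is what "indefinite number of requests per RTT" means in practice.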

[+] bribroder|2 years ago|reply
The technique described sidesteps the server's limit on concurrent requests per connection. By pairing each request with an immediate stream reset on the same connection, the attacker can push far more requests through a single connection/client than was previously possible, making the attack cheaper to mount and harder to stop.
[+] vlovich123|2 years ago|reply
Wouldn’t this same attack apply to QUIC (and HTTP/3)?
[+] stonogo|2 years ago|reply
It doesn't apply to HTTP/3 because the receiver has to extend the stream concurrency maximum before the sender can open a new stream. This attack works because the sender doesn't have to wait for that after sending a reset in HTTP/2.
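
A toy model of that difference (loosely based on QUIC's cumulative MAX_STREAMS accounting from RFC 9000; this is a simplification, not real QUIC code):

```python
class QuicStreamLimiter:
    """Toy model of QUIC's cumulative stream limit.

    The client may only open streams up to the limit the server has
    advertised via MAX_STREAMS; resetting a stream does not, by
    itself, let the client open another one.
    """
    def __init__(self, initial_max_streams: int):
        self.max_streams = initial_max_streams
        self.opened = 0

    def try_open_stream(self) -> bool:
        if self.opened >= self.max_streams:
            return False          # must wait for a new MAX_STREAMS frame
        self.opened += 1
        return True

    def server_extends_limit(self, new_max: int) -> None:
        self.max_streams = max(self.max_streams, new_max)

lim = QuicStreamLimiter(initial_max_streams=100)
opened = sum(lim.try_open_stream() for _ in range(1000))
print(opened)  # 100 -- resets don't help until the server raises the limit
```

So in HTTP/3 the server, not the client, paces how fast new streams can be created.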
[+] unethical_ban|2 years ago|reply
I got out of web proxy management a while back and haven't had to delve into HTTP2 or HTTP3.

It seems HTTP2 is TCP on TCP for HTTP messages specifically. This must be why HTTP3 is over a UDP based protocol.

[+] rollcat|2 years ago|reply
HTTP2 is not TCP on TCP (that's a very basic recipe for a complete disaster, the moment any congestion kicks in); it's mostly just multiplexing concurrent HTTP requests over a single TCP connection.

HTTP3 is using UDP for different reasons, although it effectively re-implements TCP from the application POV (it's still HTTP under the hood after all). Basically with plain old TCP your bandwidth is limited by latency, because every transmitted frame has to be acknowledged - sequentially. Some industries/applications (like transferring raw video files over the pond) have been using specialized, UDP-based transfer protocols for a while for this reason. You only need to re-transmit those frames you know didn't make it, in whatever order suits you.
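
The "bandwidth limited by latency" point is the bandwidth-delay product: sequential-acknowledgement throughput can't exceed window / RTT, no matter how fast the link is. With assumed numbers:

```python
window_bytes = 64 * 1024     # classic TCP window without window scaling
rtt_s = 0.1                  # assumed transatlantic round trip: 100 ms

# Maximum throughput when each window must be acknowledged before the
# next one is sent: window / RTT.
throughput_bps = window_bytes * 8 / rtt_s
print(f"{throughput_bps / 1e6:.2f} Mbit/s")
```

That works out to about 5.24 Mbit/s on a 100 ms path, regardless of link speed, which is why high-latency bulk transfer moved to window scaling and UDP-based protocols.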

[+] devit|2 years ago|reply
Isn't this trivially mitigated by throttling?

And the throttling scheme seems simple: give each IP address an initial allowance of A requests, then increase the allowance every T time up to a maximum of B. Perhaps A=B=10, T=150ms.
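
A sketch of that scheme as a per-IP token bucket (parameters from the comment; the refill logic is one possible interpretation):

```python
class Allowance:
    """Per-IP throttle as proposed: start with A tokens, regain one
    every T seconds, capped at B. A=B=10, T=0.15 s."""
    def __init__(self, a: int = 10, b: int = 10, t: float = 0.15):
        self.tokens, self.cap, self.period = a, b, t
        self.last = 0.0

    def allow(self, now: float) -> bool:
        refill = int((now - self.last) / self.period)
        if refill:
            self.tokens = min(self.cap, self.tokens + refill)
            self.last += refill * self.period
        if self.tokens > 0:
            self.tokens -= 1
            return True
        return False

bucket = Allowance()
burst = sum(bucket.allow(0.0) for _ in range(50))
print(burst)               # 10: the initial allowance A
print(bucket.allow(0.15))  # True: one token regained after T
```

As the reply below notes, though, per-IP limits don't help much against a botnet where each of thousands of IPs stays under the threshold.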

[+] o11c|2 years ago|reply
The whole point of a 'D'DoS is that there are numerous compromised IP addresses, which only need to make maybe one connection each.

You can't simply blacklist weird connections entirely, since legitimate clients can use those features.