
RFC 9114 – HTTP/3

156 points | mfiguiere | 3 years ago | rfc-editor.org

76 comments

[+] nateberkopec|3 years ago|reply
Why does HTTP/3 still support server push? When Chrome dropped it from their HTTP/2 implementation, I thought it was dead.

Cloudflare has also said they’re not implementing server push in their HTTP/3 support and will instead encourage Early Hints.
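For context, Early Hints (RFC 8297) is just an interim 103 response carrying Link headers ahead of the final response; nothing about it is version-specific. A rough sketch of the HTTP/1.1 wire form, built in Python purely to illustrate (the resource names are made up):

```python
# Illustrative only: the 103 interim response precedes the real response
# on the same connection, letting the client start fetching subresources
# (here a hypothetical stylesheet and script) before the final status arrives.
early_hints = (
    b"HTTP/1.1 103 Early Hints\r\n"
    b"Link: </style.css>; rel=preload; as=style\r\n"
    b"Link: </app.js>; rel=preload; as=script\r\n"
    b"\r\n"
)
final_response_start = b"HTTP/1.1 200 OK\r\n"

# On the wire the client sees the hints first, then the final status line:
wire = early_hints + final_response_start
```

Unlike server push, the client stays in control: it decides whether to act on the hinted links at all.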

[+] simmervigor|3 years ago|reply
Co-chair of the QUIC WG here. Early in the process, we decided it was important that HTTP/3 provide parity with HTTP/2. That helps application developers migrate back and forth between versions with less friction. So we included features such as bidirectional stream multiplexing, request/response stream cancellation, server push, header compression, and graceful connection closure.

Some of those are more optional than others, e.g. server push. But whether or not implementations choose to support a feature, the IETF at least maintains close parity at the protocol level.

Stream prioritization is a different matter. That was very tricky in HTTP/3 [1], and in the process we designed Extensible Priorities, which is simpler and works over both H2 and H3. It was also published today, as RFC 9218 [2].

[1] https://blog.cloudflare.com/adopting-a-new-approach-to-http-...

[2] https://www.rfc-editor.org/info/rfc9218

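The RFC 9218 scheme mentioned above is small enough to sketch: a request carries a `Priority` header field with an urgency parameter `u` (0 = highest, 7 = lowest, default 3) and an incremental flag `i`. A rough Python parser for the common case, illustrative only (real implementations should use a full Structured Fields / RFC 8941 parser):

```python
def parse_priority(value: str) -> dict:
    """Parse an RFC 9218 Priority header value, e.g. 'u=2, i'.

    Only the two parameters the scheme defines are handled here;
    unknown parameters would simply be ignored.
    """
    urgency, incremental = 3, False  # defaults per RFC 9218
    for item in value.split(","):
        item = item.strip()
        if item.startswith("u="):
            u = int(item[2:])
            if 0 <= u <= 7:  # out-of-range urgencies are ignored
                urgency = u
        elif item == "i":
            incremental = True
    return {"urgency": urgency, "incremental": incremental}
```

An absent or empty header simply yields the defaults, which is part of why the scheme ports cleanly across H2 and H3.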

[+] MuffinFlavored|3 years ago|reply
Dumb question/super out of the loop.

QUIC/HTTP2/HTTP3/whatever: are my options still only base64-encoded (in the case of binary messages) server-sent events and/or WebSockets, or is there some kind of bidirectional HTTP/2/HTTP/3 thing yet that's adoptable + usable?
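The base64-over-SSE workaround mentioned above looks roughly like this: `text/event-stream` is a line-oriented text format, so binary payloads must be encoded before framing. A sketch in Python (the event name and payload are made up):

```python
import base64

def sse_event(binary_payload: bytes, event: str = "message") -> str:
    """Frame a binary payload as a server-sent event.

    SSE is text-only, so binary data needs an encoding first;
    base64 is the usual (and size-inflating, ~33%) choice.
    """
    data = base64.b64encode(binary_payload).decode("ascii")
    return f"event: {event}\ndata: {data}\n\n"

def decode_sse_data(frame: str) -> bytes:
    """Recover the binary payload from a frame produced above."""
    for line in frame.splitlines():
        if line.startswith("data: "):
            return base64.b64decode(line[len("data: "):])
    raise ValueError("no data field in frame")
```

SSE only covers server-to-client; for true bidirectional binary without the base64 tax, WebSockets (or the newer WebTransport work over HTTP/3) is still the answer.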

[+] PikachuEXE|3 years ago|reply
I don't understand why server push is in the standard but not Early Hints.

I saw some articles, but it seems both server-side & client-side support are lacking.

(I can be wrong)

At least I don't see Early Hints in Nginx...

[+] akshayshah|3 years ago|reply
This is great! Despite how little the draft has changed over the last year, many communities seemed reluctant to work on an implementation until the RFC was published. Go definitely fell into this camp. Kudos to .NET for shipping HTTP/3 support in Kestrel already!

I have to say that I'm most excited for RFC 9110 (HTTP semantics), though :)

[+] api|3 years ago|reply
Happy to see UDP being used here, because it means that networks that don't properly support UDP are now "broken."

There was for a time a risk that the Internet would become a HTTP-only network (at least at the endpoint) supporting TCP to port 80 and 443 and not a lot else.

Really. I've seen networks like this.

[+] thayne|3 years ago|reply
> the "http" scheme associates authority with the ability to receive TCP connections on the indicated port of whatever host is identified within the authority component. Because HTTP/3 does not use TCP, HTTP/3 cannot be used for direct access to the authoritative server for a resource identified by an "http" URI

Was the possibility of using a different scheme (maybe http3 or h3) considered in addition to the Alt-Svc mechanisms?

[+] simmervigor|3 years ago|reply
Yes, it was; there was a pretty lengthy discussion dating back to 2017 on the issue tracker: https://github.com/quicwg/base-drafts/issues/253

TL;DR: just like HTTP/2, we wanted to avoid friction in deploying these protocols. Having to rewrite URLs because of new schemes is pretty unpalatable and has major impact. Instead, HTTP/3 can rely on other IETF-defined mechanisms like Alt-Svc (RFC 7838) and the more recent SVCB / HTTPS RR [1] DNS-based methods. The latter has been deployed on Cloudflare for a while [2] and is supported in Firefox. Other user agents have also expressed interest or intent to support it.

The net outcome is that developers can by and large focus on HTTP semantics and let something a little further down the stack worry about versions. Sometimes devs will need to peek into that area, but not in the majority of cases.

[1] - https://datatracker.ietf.org/doc/html/draft-ietf-dnsop-svcb-... [2] - https://blog.cloudflare.com/speeding-up-https-and-http-3-neg...
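The Alt-Svc mechanism referenced above is just a response header: a server reached over TCP advertises that the same origin is also reachable over HTTP/3. A rough parser for the common form (a full RFC 7838 parser also handles quoting rules and additional parameters):

```python
def parse_alt_svc(value: str) -> list:
    """Parse a simple Alt-Svc header value, e.g. 'h3=":443"; ma=86400'.

    Returns one entry per advertised alternative, with the ALPN protocol
    id, the alternative authority, and the max-age lifetime if present.
    """
    entries = []
    for alt in value.split(","):
        parts = [p.strip() for p in alt.split(";")]
        proto, authority = parts[0].split("=", 1)
        entry = {"protocol": proto, "authority": authority.strip('"')}
        for param in parts[1:]:
            if param.startswith("ma="):
                entry["max_age"] = int(param[3:])
        entries.append(entry)
    return entries
```

On seeing `alt-svc: h3=":443"; ma=86400` over HTTP/1.1 or HTTP/2, a client may attempt a QUIC connection to the same host on port 443 for the advertised lifetime; the SVCB/HTTPS DNS records remove even that first-connection round trip.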

[+] dfawcus|3 years ago|reply
Am I the only one who finds the HTML rendered version[1] from the IETF site easier to read than the one from the rfc-editor site?

[1] https://datatracker.ietf.org/doc/html/rfc9114

[+] capableweb|3 years ago|reply
Am I the only one who think there should be a "Am I the only one?" law named after some famous internet citizen who said something clever about "Am I the only one..." messages?

As one data point, I find https://www.rfc-editor.org/rfc/rfc9114.html easier on the eyes, mainly because of the font choice. I also like that I can link directly to titles (https://www.rfc-editor.org/rfc/rfc9114.html#name-other-schem...) where the title itself is visible in the URL, rather than just the section number (https://www.rfc-editor.org/rfc/rfc9114.html#section-3.1.2). That way, if I'm participating in a discussion about the RFC and links are being thrown around, I can tell which section is being referred to without opening the link.

But, I can also see how some people prefer the datatracker viewer. The only thing it's really missing is a Table of Contents as a sidebar so it's easier to navigate. Otherwise they are mostly the same.

[+] eadmund|3 years ago|reply
I'm with you. I don't know who decided that RFCs should be displayed in a proportional font, but he was IMHO mistaken.

At least they are still available as plain text.

[+] autoexec|3 years ago|reply

   Several characteristics of HTTP/3 provide an observer an opportunity
   to correlate actions of a single client or server over time.  These
   include the value of settings, the timing of reactions to stimulus,
   and the handling of any features that are controlled by settings.

   As far as these create observable differences in behavior, they could
   be used as a basis for fingerprinting a specific client.

   HTTP/3's preference for using a single QUIC connection allows
   correlation of a user's activity on a site.  Reusing connections for
   different origins allows for correlation of activity across those
   origins.

   Several features of QUIC solicit immediate responses and can be used
   by an endpoint to measure latency to their peer; this might have
   privacy implications in certain scenarios.
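The quoted concern can be made concrete: an observer who can see a connection's HTTP/3 SETTINGS values (and other behavioral signals) can reduce them to a stable identifier. A sketch, with arbitrary example values for two real HTTP/3 settings (the QPACK parameters from RFC 9204):

```python
import hashlib

def settings_fingerprint(settings: dict) -> str:
    """Hash a set of observed settings into a short, stable identifier.

    Any client that always sends the same values produces the same
    fingerprint, letting an observer correlate its connections over time.
    """
    canonical = ",".join(f"{k}={v}" for k, v in sorted(settings.items()))
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]

# Hypothetical values for two different client implementations:
client_a = {"QPACK_MAX_TABLE_CAPACITY": 4096, "QPACK_BLOCKED_STREAMS": 16}
client_b = {"QPACK_MAX_TABLE_CAPACITY": 65536, "QPACK_BLOCKED_STREAMS": 100}
```

The fingerprint identifies an implementation and configuration rather than a user, but combined with timing and feature-handling signals it narrows the anonymity set considerably.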

It feels like we've been moving backwards in terms of privacy since HTTP/2 even as those same privacy issues have been increasingly exploited by private companies and governments. It's sad to see the situation continuously worsened instead of being improved over time. It feels like a push for performance whatever the cost, or perhaps (more pessimistically) a push to degrade privacy and security while justifying it or distracting us from it with increased performance. It's strange to see security and privacy issues inherent to these specifications acknowledged, but not addressed.
[+] 1vuio0pswjnm7|3 years ago|reply
As an end user, I already get satisfactory performance with the existing protocols. What slows things down is the neverending gratuitous Javascript and the automatic connections not initiated by the user for the purposes of advertising, tracking and telemetry. Even viewing these RFCs in the best light, as "improvements", it stands to reason that end users will not be the ones who benefit most (if at all) from them.

It is reminiscent of personal computers in decades past that kept increasing in speed and power, only to have those increases usurped by gas-like software developed on the latest, most expensive workstations by software companies aiming to license newer versions, in some cases under pre-installation agreements with OEMs aiming to sell newer hardware. "Gas-like" because over time it seemed to expand to fill the available space. The newer PCs could do more, faster, behind the scenes, but the user experience did not change; for the user, it generally still took the same amount of time to do the same routine tasks.

To put it another way, these "improvements" in well-established transport protocols may mean "tech" companies will be better able to exploit end users, e.g. more data collected and more advertising served without performance degradation, perhaps leading to more commerce. They do not mean end users will be better able to avoid exploitation, e.g. through improved privacy and security from "tech" companies and their advertiser customers.
[+] simmervigor|3 years ago|reply
For what it's worth, the HTTP/2 specification contains almost identical phrasing to HTTP/3's [1]. That's not unexpected; the considerations are similar: using a single connection with connection-specific values has potential implications. However, the possibility of fingerprinting is inherent in every protocol, and the HTTP semantics spec itself points out the ways common features might get used.

Considerations are just that: considerations. Client implementations can weigh the trade-offs and make their own choices about how to address these matters.

The work in the IETF to define the Oblivious DNS and Oblivious HTTP [2] protocols is a step towards reducing such surfaces, at the cost of some functionality.

[1] https://www.rfc-editor.org/rfc/rfc9113.html#section-10.8 [2] https://datatracker.ietf.org/wg/ohai/about/

[+] d110af5ccf|3 years ago|reply
Doesn't it make more sense to separate concerns? Build the highest performance protocol possible that can operate efficiently under a wide array of conditions and then route via an overlay such as i2p whose sole job is to provide privacy if you want to do that.

Also isn't an implementation free to avoid things like connection reuse if it so chooses? For that matter some browsers support fully containerizing things per site visited.

[+] mndrix|3 years ago|reply
> This document describes a mapping of HTTP semantics over QUIC.

Hooray!

[+] rektide|3 years ago|reply
It's so cool to have this split. QUIC is so capable, such a neat set of ideas for a transport. That HTTP can express itself in terms of another spec is one of the most compelling, longest-hardest won & best victories for "abstraction" that computing has seen. That we'll be able to iterate on HTTP in new ways, by having a common semantic base, keeps the future open & iterable. Huge wins.

Obligatory link to Mark Nottingham's "A New Definition of HTTP3"[1], which talks about the split of HTTP into a semantic definition in RFC9110[2] & the creation of HTTP2/RFC9113 and HTTP3 (over QUIC, this document).

[1] https://www.mnot.net/blog/2022/06/06/http-core (discussion: https://news.ycombinator.com/item?id=31647149)

[2] "HTTP Semantics" https://www.rfc-editor.org/rfc/rfc9110

[3] "HTTP2" https://www.rfc-editor.org/rfc/rfc9113

[+] Matthias247|3 years ago|reply
Note that even the previous draft specification (https://datatracker.ietf.org/doc/draft-ietf-quic-http/34/) was a mapping of HTTP semantics on top of QUIC. It didn't carry too many transport-specific concerns - which HTTP/2 did, since it cared about flow control and other things.

I have yet to read RFC 9114, but I guess it's been refined to fit better on top of the new HTTP semantics spec (RFC 9110).

[+] exabrial|3 years ago|reply
You know what HTTP needs? Built-in port location as part of the protocol... preferably via DNS.
[+] Gigachad|3 years ago|reply
What use cases are you thinking of? As far as I can see, the main and perhaps only use case is hosting websites behind ISPs which block the default ports. Although, since HTTP is leaving TCP, I wonder if these blocks will continue to work.
[+] stevekemp|3 years ago|reply
Do you mean something like the existing SRV records, or something else?
[+] charcircuit|3 years ago|reply
SRV records already exist.
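For context, an SRV record (RFC 2782) already carries exactly the port indirection being asked for; HTTP simply never adopted it. A sketch parsing the zone-file form (the hostnames here are examples):

```python
from typing import NamedTuple

class SRV(NamedTuple):
    priority: int  # lower values are preferred
    weight: int    # load-balancing weight among equal priorities
    port: int      # the port clients should connect to
    target: str    # the host actually providing the service

def parse_srv_rdata(rdata: str) -> SRV:
    """Parse the RDATA of an SRV record, e.g. '10 5 8080 web.example.com.'"""
    priority, weight, port, target = rdata.split()
    return SRV(int(priority), int(weight), int(port), target)
```

A hypothetical `_http._tcp.example.com. 300 IN SRV 10 5 8080 web.example.com.` would direct clients to web.example.com:8080, but no mainstream browser consults SRV for HTTP; the newer SVCB/HTTPS record types are the approach that actually got traction.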
[+] bawolff|3 years ago|reply
It really doesn't.
[+] tannhaeuser|3 years ago|reply
So was it really worth it to make it so damn complicated, putting implementation out of reach for most developers and existing programs and even sidestepping IP networking, for actual hypertext transport, when the web is contracting as we speak? (Ignoring incidental use of HTTP for "REST APIs", an oxymoron in itself.)
[+] inopinatus|3 years ago|reply
It's nice to have progress in some areas.

However, I'm disappointed to observe that we're on to version 3 and the HTTP editors are still writing tortured thickets of handwaving cross-references to avoid specifying which DNS records it uses.

Instead, HTTP continues to refer normatively to the http(s) URIs, which eventually confess:

"This specification does not mandate a particular registered name lookup technology".

and then everyone squats on the A/AAAA address records by default, effectively denying them for any other purpose. There are some obvious consequential misbehaviours (such as the perennial "apex record" problem), but more broadly, HTTP subverts the DNS by de facto appropriating potentially all labels, and continuing to do so represents a middle finger in the face of every other protocol designer. This situation gets more rusted-on and harder to remediate with each revision, and there's a lesson or two in here for anyone who develops protocols for a living.

[+] ghoulish45|3 years ago|reply
There’s a lesson or two in understanding and respecting the difference between the two technologies you’re discussing.

www.example.com. A 192.0.2.1, for example, will work for both HTTP and SSH and whatever else you want to put there. If you have SSH on 22/tcp and HTTP on 80/tcp and HTTP over TLS on 443/tcp, all of those are reachable via the same exact A record in DNS without even clarifying what port you intend. Plain “ssh www.example.com” would work just fine.

That’s why your statements about subversion of the DNS and squatting on DNS RR types show that you don't understand what you’re dealing with, which makes your emotional state about it even more discouraging. You seem to be upset that HTTP has a well-known port. That’s the only sense I can make from what you’re saying, anyway, because you’re ranting at something you seem to have an extremely loose grip on. Hundreds, if not thousands, of protocols start their life with gethostbyname, which looks at A/AAAA. That’s not unique to HTTP and, more importantly, doesn’t deny the A and AAAA RRs for any other purpose. Your gripe makes no sense.

The apex record problem has to do with the DNS specification and the behavior of CNAMEs. It was an issue before the Web existed (crazy, right?). I sense that your career has primarily been involved with DNS as an enabling mechanism for HTTP systems, probably mostly working with virtual DNS hosts like VIPs (given that you incorrectly distinguish “hosts” from the apparent squatting you’re observing in another comment) and you have next to zero context on how very different they are and how little they have to do with each other.

Before you start accusing people of middle fingers and such, you might want to put aside your rage and question yourself: do I fully understand what I’m mad about?
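The gethostbyname point is easy to demonstrate: name resolution carries no protocol information at all. The same A/AAAA lookup backs SSH, HTTP, or anything else; only the port differs, and the port comes from the client, not from DNS. A sketch using Python's stdlib resolver (`localhost` is used so this runs without network access):

```python
import socket

# The same name resolves to the same addresses regardless of which
# service the caller intends to reach; the port is supplied by the
# client and merely annotates the result.
addrs_for_ssh = {
    ai[4][0]
    for ai in socket.getaddrinfo("localhost", 22, proto=socket.IPPROTO_TCP)
}
addrs_for_http = {
    ai[4][0]
    for ai in socket.getaddrinfo("localhost", 80, proto=socket.IPPROTO_TCP)
}
```

Both sets are identical, which is the crux of the disagreement above: A/AAAA records aren't "taken" by HTTP any more than they are by SSH.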

[+] zarzavat|3 years ago|reply
I don’t understand. What has DNS got to do with HTTP? If I connect to http://foobar.onion/ it’s not using DNS, isn’t that what they mean by “not requiring a particular lookup technology”?

And what is the issue with A/AAAA records exactly?

[+] tsimionescu|3 years ago|reply
Doesn't SSH "subvert DNS" just as much, as it also relies on A/AAAA records when you type in `ssh user@fqdn.`? Or `ping fqdn.`, for that matter?
[+] aidenn0|3 years ago|reply
Historically, isn't this what the "www" prefix to a URL is for? www.example.com for HTTP, ftp.example.com for ftp, &c.
[+] Gordonjcp|3 years ago|reply
> and then everyone squats the A/AAAA address records by default, effectively denying them for any other purpose.

Can you clearly and lucidly explain what you mean here? How is HTTP "denying" A records from pointing people to servers being used for other protocols?