> Moving stuff around (from User-Agent to Sec-CH-UA-*) doesn't really solve much. That is, having to request this information before getting it doesn't help if sites routinely request all of it.
I think this is sort of ignoring the whole point of the proposal. By making sites request this information rather than simply always sending it like the User-Agent header currently does, browsers gain the ability to deny excessively intrusive requests when they occur.
That is to say, "sites routinely request all of it" is precisely the problem this proposal is intended to solve.
There are some good points in this post about things which can be improved with specific Sec-CH-UA headers, but the overall position seems to be based on a flawed understanding of the purpose of client hints.
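The opt-in flow being described can be sketched concretely: the server advertises the hints it wants via `Accept-CH` (the opt-in header from RFC 8942), and the browser decides which of them to actually send on subsequent requests. A minimal simulation — the hint names are real, but the deny-high-entropy policy is a hypothetical example of what a browser *could* do:

```python
# Sketch of the Client Hints opt-in flow (RFC 8942 / UA-CH draft).
# The browser only sends hints the server asked for via Accept-CH, and
# may refuse any subset; the policy below is invented for illustration.

LOW_ENTROPY = {"sec-ch-ua", "sec-ch-ua-mobile", "sec-ch-ua-platform"}

def browser_hints_for(accept_ch: str, available: dict) -> dict:
    """Return the hints a (privacy-conscious) browser chooses to send."""
    requested = {h.strip().lower() for h in accept_ch.split(",") if h.strip()}
    sent = {}
    for name in requested:
        if name not in available:
            continue
        # Hypothetical policy: honor low-entropy hints, deny the rest.
        if name in LOW_ENTROPY:
            sent[name] = available[name]
    return sent

available = {
    "sec-ch-ua-platform": '"Linux"',
    "sec-ch-ua-mobile": "?0",
    "sec-ch-ua-model": '"Pixel 4"',   # high entropy: deniable
}

sent = browser_hints_for("Sec-CH-UA-Platform, Sec-CH-UA-Model", available)
# Platform is sent; the model request is denied by this browser's policy.
```

The point is structural: because the ask is explicit, a deny decision is possible at all, which it never was with a monolithic User-Agent string.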
> browsers gain the ability to deny excessively intrusive requests when they occur
But Set-Cookie kind of proves what happens to this kind of feature. If sites first get used to being able to request it and get it, then the browsers that deny anything will simply be ignored. And then those browsers will start providing everything, because they don't want to be left out in the cold.
That's what happened to User-Agent, that's what happened to Set-Cookie, and I can't see why it won't happen to Sec-CH-UA-*, which the post hints at several times. Set-Cookie was supposed to have the browser ask the user to confirm whether they wanted to set a cookie. Not many clients do that today.
To be honest, I feel the proposal is a bit naïve if it thinks that websites and all browsers will suddenly be on their best behaviour.
The overall plan, as I understand it:
1. Move entropy from "you get it by default" to "you have to ask for it".
2. Add new APIs that allow you to do things that previously exposed a lot of entropy in a more private way.
3. Add a budget for the total amount of entropy a site is allowed to get for a user, preventing identifying users across sites through fingerprinting.
Client hints are part of step #1. Not especially useful on its own, but once it's combined with #3, sites have a strong incentive to reduce what they ask for to just what they need.
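Step #3 could look something like the sketch below. The 10-bit budget and the per-hint entropy estimates are made-up numbers purely for illustration; the mechanism (grant requests until a per-site cap is spent) is the part being described:

```python
import math

# Hypothetical per-site entropy budget (step #3 above). The limit and
# the per-hint costs are invented numbers, not from any spec.
BUDGET_BITS = 10.0
HINT_ENTROPY_BITS = {
    "sec-ch-ua-mobile": 1.0,      # one of two values
    "sec-ch-ua-platform": 3.0,    # a handful of platforms
    "sec-ch-ua-model": 9.0,       # many distinct device models
}

def grant(requested: list, budget: float = BUDGET_BITS) -> list:
    """Grant hints in request order until the site's budget is spent."""
    granted, spent = [], 0.0
    for hint in requested:
        cost = HINT_ENTROPY_BITS.get(hint, math.inf)  # unknown: deny
        if spent + cost <= budget:
            granted.append(hint)
            spent += cost
    return granted

# A site asking for everything doesn't get everything: the platform
# request is refused once the model has consumed most of the budget.
grant(["sec-ch-ua-model", "sec-ch-ua-platform", "sec-ch-ua-mobile"])
```

Under a scheme like this, asking for less genuinely gets a site more of what it actually needs, which is the incentive being claimed.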
(Disclosure: I work on ads at Google, speaking only for myself)
Well, if the browsers can just deny those requests, then they can just drop the information entirely. (And they are dropping it from the UA.)
Of the two non-harmful pieces, one is of interest to all sites, and the other has a broken implementation in Chrome, so sites will have to use an alternative mechanism anyway. If there's any value in the idea, Google can propose it with a set of information that brings value, instead of just fingerprinting people.
> By making sites request this information rather than simply always sending it like the User-Agent header currently does, browsers gain the ability to deny excessively intrusive requests when they occur.
Having to request it is a terrible idea to begin with. If I want to use different templates for mobile vs desktop, I need to know, on the backend, whether the device is a mobile device, and I need it on the very first request. Having to request these headers explicitly is an unnecessary complication that would slow down the first load.
However it is nice that there's now a separate header that gives a yes or no answer on whether it's a mobile device.
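That header (`Sec-CH-UA-Mobile`) is a structured-header boolean, `?1` or `?0`, so the backend check is trivial. A sketch — the default-to-desktop fallback for the first request is a design choice, not something the spec mandates:

```python
def is_mobile(headers: dict) -> bool:
    """Interpret the Sec-CH-UA-Mobile structured-header boolean (?1 / ?0)."""
    value = headers.get("sec-ch-ua-mobile", "").strip()
    return value == "?1"

# The very first request may not carry the hint yet: one option is to
# serve the desktop template by default and ask for the hint for
# subsequent requests via Accept-CH.
response_headers = {"Accept-CH": "Sec-CH-UA-Mobile"}
```

(Sec-CH-UA-Mobile is one of the low-entropy hints the draft says browsers send without an explicit request, which softens the first-load concern for this particular use case.)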
"By making sites request this information rather than simply sending it like the User-Agent header currently does..."
This is also true with respect to SNI which leaks the domain name in clear text on the wire. The popular browsers send it even when it is not required.
The forward proxy configuration I wrote distinguishes the sites (CDNs) that actually need SNI, and the proxy only sends it when required. The majority of websites submitted to HN do not need it. I also require TLSv1.3 and strip out unnecessary headers. It all works flawlessly with very few exceptions.
We could argue that sending so much unnecessary information, as popular browsers do when it is not technically necessary for the user, is user-hostile. It is one-sided. "Tech" companies and others interested in online advertising have been using this data to their advantage for decades.
> "User Agents MUST return the empty string for model if mobileness is false. User Agents MUST return the empty string for model even if mobileness is true, except on platforms where the model is typically exposed." (quoted from https://wicg.github.io/ua-client-hints/#user-agent-model)
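For what it's worth, the quoted rule is at least simple to state in code. A direct transcription — the `exposes_model` set is my guess at what "platforms where the model is typically exposed" means (e.g. Android, where the model has long appeared in the UA string), not something the spec enumerates:

```python
def model_hint(mobile: bool, platform: str, model: str) -> str:
    """Sec-CH-UA-Model per the quoted rule: empty unless the device is
    mobile AND the platform conventionally exposes a model name."""
    exposes_model = platform in {"Android"}  # assumption for illustration
    if mobile and exposes_model:
        return model
    return ""  # MUST be the empty string otherwise

model_hint(True, "Android", "Pixel 4")   # model is exposed
model_hint(True, "iOS", "iPhone12,1")    # empty: platform doesn't expose it
model_hint(False, "Android", "Pixel 4")  # empty: not mobile
```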
Honestly now - who drafts and approves these specs? Not only does it make no sense whatsoever to encode such information this way - it also results in unimaginable amounts of bandwidth going to complete waste, on a planetary scale.
This is just plain incompetence. How did we let the technology powering the web devolve into this burning pile of nonsense?
I would rather have all this information (along with whatever is being inferred from them) be exposed through a Javascript API instead of having browsers indiscriminately flood global networks with potential PII.
Chrome came up with this? Figures. Stay evil, Google.
A JavaScript API has been considered as a replacement for the user agent string, but it has two big downsides:
1) JavaScript must be enabled. If it's not, then the server can't get any of the user agent data - at all.
2) The server won't get the user agent data until after it has already responded to the first request it receives from a client. That makes it a lot less useful overall. Having to load a page, then perhaps redirect the user using JS based on what the JS API says is a bit untidy.
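The client hints work also includes a proposed `Critical-CH` response header aimed at exactly this timing problem: the server marks a hint as critical, and a browser that is willing to send it retries the first request with the hint attached before processing the response. A rough simulation of that handshake — the toy `server` function and the `send_policy` callback are invented for illustration:

```python
def server(url: str, hints: dict) -> dict:
    """Toy server: wants Sec-CH-UA-Mobile before choosing a template."""
    if "sec-ch-ua-mobile" not in hints:
        return {"critical-ch": ["sec-ch-ua-mobile"],
                "accept-ch": ["sec-ch-ua-mobile"],
                "body": "generic"}
    template = "mobile" if hints["sec-ch-ua-mobile"] == "?1" else "desktop"
    return {"body": template}

def fetch_with_critical_ch(url: str, hints_available: dict, send_policy) -> dict:
    """Simulate the Critical-CH retry: if the response marks a hint
    critical and the browser is willing to send it, the request is
    retried once with the hint attached, before the page is rendered."""
    request_hints = {}
    response = server(url, request_hints)       # first request: no hints yet
    critical = response.get("critical-ch", [])
    missing = [h for h in critical
               if h not in request_hints and send_policy(h)]
    if missing:
        request_hints = {h: hints_available[h] for h in missing}
        response = server(url, request_hints)   # retried before rendering
    return response
```

The retry costs a round trip on the very first visit, but unlike the JS-redirect pattern it happens before anything is rendered, and a browser that declines the hint simply gets the generic response.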
> UA Client Hints proposes that information derived from the User Agent header field could only be sent to servers that specifically request that information, specifically to reduce the number of parties that can passively fingerprint users using that information. We find that the addition of new information about the UA, OS, and device to be harmful as it increases the information provided to sites for fingerprinting, without commensurate improvements in functionality or accountability to justify that. In addition to not including this information, we would prefer freezing the User Agent string and only providing limited information via the proposed NavigatorUAData interface JS APIs. This would also allow us to audit the callers. At this time, freezing the User Agent string without any client hints (which is not this proposal) seems worth prototyping. We look forward to learning from other vendors who implement the "GREASE-like UA Strings" proposal and its effects on site compatibility.
> Authors of new Client Hints are advised to carefully consider whether they need to be able to be added by client-side content (e.g., scripts) or whether the Client Hints need to be exclusively set by the user agent. In the latter case, the Sec- prefix on the header field name has the effect of preventing scripts and other application content from setting them in user agents. Using the "Sec-" prefix signals to servers that the user agent -- and not application content -- generated the values. See [FETCH] for more information.
As near as I can tell, the bit they're talking about in the Fetch standard is just this:
> These are forbidden so the user agent remains in full control over them. Names starting with `Sec-` are reserved to allow new headers to be minted that are safe from APIs using fetch that allow control over headers by developers, such as XMLHttpRequest.
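The Fetch-side enforcement amounts to a name check on the forbidden-request-header list. A minimal version of just the prefix part — the real spec list also contains specific names such as `Host` and `Cookie`, omitted here:

```python
def is_forbidden_request_header(name: str) -> bool:
    """Per the Fetch spec's forbidden header names, anything starting
    with `Sec-` or `Proxy-` cannot be set by scripts (fetch(),
    XMLHttpRequest), so servers know such values are UA-generated.
    (The spec also lists individual names like Host and Cookie,
    which this sketch omits.)"""
    lowered = name.lower()
    return lowered.startswith("sec-") or lowered.startswith("proxy-")

is_forbidden_request_header("Sec-CH-UA-Platform")  # script can't spoof it
is_forbidden_request_header("X-Requested-With")    # scripts may set this
```

That's the whole trick: a `Sec-CH-UA-*` header that arrives at the server is guaranteed to have been generated by the browser, not by page content.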
I hope they avoid situations like the SameSite=None debacle[0] if they are going to freeze the User Agent header and not provide an alternative.
The assertion of Mozilla seems to be:
>At the time sites deploy a workaround, they can’t necessarily know what future browser version won’t have the need for the workaround. Can we guarantee only retrospective use? Do Web developers care enough about retrospective workarounds for evergreen browsers?
When there are significant numbers of users on devices like iPads that don't get updated any more, you can't rely on "evergreen browsers".
Because when you’re implementing a new spec that is still in “draft” status and constantly being updated, things could have changed drastically in 7 months and 4 major versions?
jsbdk|4 years ago
Browsers can just not send a UA header
dmitriid|4 years ago
Approves: no one.
Chrome just releases them in stable versions with little to no discussion, and the actual specs remain in draft stages.
joshuamorton|4 years ago
I mean, sure, HTTP being plaintext is silly, but that's not down to the authors of this particular RFC.
jedimastert|4 years ago
https://wicg.github.io/ua-client-hints/#interface
justshowpost|4 years ago
https://mozilla.github.io/standards-positions/#ua-client-hin...
[0] - https://www.chromium.org/updates/same-site/incompatible-clie...
fnord77|4 years ago
intentional?
mort96|4 years ago
Knowing the exact make and model of an Android device is a lot higher entropy than knowing the exact make and model of an iPhone.
dmitriid|4 years ago
That quote from the first comment on the issue is just a cherry on top.
Chrome 88 was released in December 2020. 7 months ago.