dochne|2 years ago
It's particularly vexing to me because the main reason people end up with misconfigured servers at all is that, after they've configured their new cert (incorrectly), their web browser gives them a tick and they think they've done it right - after all, why wouldn't they?
kevincox|2 years ago
IDK what happened in this case but it is pretty easy to imagine Chrome accidentally allowing validation against certificates in its local cache. Maybe it added some sort of validation cache to avoid rechecking revocation lists and OCSP or similar, and it would use intermediates from other sites. Then people tested their site in Chrome and it seemed to work, and now Firefox seems broken if it doesn't support this. So they decided to implement it, and to do something more robust by preloading a fixed list rather than relying on whatever happens to be in the cache.
Basically no browser wants to be the first to stop supporting this hack.
jakub_g|2 years ago
https://bugzilla.mozilla.org/show_bug.cgi?id=399324#c16
hannob|2 years ago
Plenty of people believe these days that this was never a wise guideline to begin with (see https://www.ietf.org/archive/id/draft-iab-protocol-maintenan... which unfortunately never made it to an RFC). However, one of the problems is that once you started accepting misconfigurations, it's hard to change your defaults.
ekr____|2 years ago
https://datatracker.ietf.org/doc/rfc9413/
chowells|2 years ago
drdaeman|2 years ago
For me the revelatory moment was in the mid-00s, when everyone screamed anathema at XHTML, saying it was bad because it required people to write well-formed documents, when everyone just wanted to slap random tags around and somehow have that steaming mess still work.
There must be some sort of law that says in tech the crudest pile of hacks wins over any formally elegant solution, every single time those hacks let one do something that would otherwise require extra effort, even if they work only by the wildest chance.
TedDoesntTalk|2 years ago
This is one perspective. Another is to be robust and resilient. Resiliency is a hallmark of good engineering. I get the sense you have not worked on server-side software that has thousands or millions of different clients.
sleevi|2 years ago
The way X.509 was designed - to the very first version - was the notion that you have your set of CAs you trust, I have my set, and they’re different. Instead of using The Directory to resolve the path from your cert to someone I trust, PKIX (RFC 2459-et-al) defined AIA.
So the intent here was that there’s no “one right chain to rule them all”: there’s _your_ chain to your root, _my_ chain to my root, all for the same cert, using cross-certificates.
Browsers adopted X.509 before PKIX existed, and they assumed just enough of the model to get things to work. The standards were developed after, and the major vendors didn’t all update their code to match the standards. Microsoft, Sun, and many government focused customers did (and used the NIST PKITS test to prove it), Netscape/later Mozilla and OpenSSL did not: they kept their existing “works for me” implementations.
https://medium.com/@sleevi_/path-building-vs-path-verifying-... discusses this a bit more. In modern times, the TLS RFCs better reflect that there's no "one right chain to rule them all". Even if you or I aren't running our own roots that we use to cross-sign CAs we trust, we still have different browsers/trust stores taking different paths, and even in the same browser, different versions of the trust store necessitating different intermediates.
TLS has no way of negotiating what the _client’s_ trust store is in a performant, privacy-preserving way. https://datatracker.ietf.org/doc/draft-kampanakis-tls-scas-l... or https://datatracker.ietf.org/doc/draft-davidben-tls-merkle-t... are explorations of the problem space above, though: how to have the server understand what the client will trust, so it can send the right certificate (… and omit the chain)
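To make the "no one right chain" point concrete, here is a toy sketch of path *building* as a graph search (not real crypto: certificates are modeled as (subject, issuer) pairs, signature and validity checks are omitted, and all names are hypothetical). The same leaf reaches different roots via different intermediates, depending on which roots a given client trusts:

```python
def build_paths(leaf, pool, roots):
    """Yield every subject/issuer path from `leaf` to a trusted root.

    leaf:  (subject, issuer) pair for the end-entity certificate
    pool:  intermediates available (sent by the server, or fetched via AIA)
    roots: set of root subjects this client trusts -- differs per client!
    """
    def walk(cert, chain):
        _, issuer = cert
        if issuer in roots:
            yield chain + [cert]
            return
        for cand in pool:
            # Follow any candidate whose subject matches our issuer,
            # avoiding cycles (the Web PKI trust graph is not a DAG).
            if cand[0] == issuer and cand not in chain:
                yield from walk(cand, chain + [cert])
    yield from walk(leaf, [])

leaf = ("example.com", "Issuing CA")
pool = [
    ("Issuing CA", "Legacy Root"),   # cross-certificate for old clients
    ("Issuing CA", "Modern Root"),   # path for up-to-date trust stores
]

old_client = list(build_paths(leaf, pool, {"Legacy Root"}))
new_client = list(build_paths(leaf, pool, {"Modern Root"}))
print(old_client)  # one chain, ending at the Legacy Root cross-cert
print(new_client)  # one chain, ending at the Modern Root intermediate
```

Same server certificate, two different valid chains - which is exactly why a server can't know a single "correct" chain to send.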
fsckboy|2 years ago
CableNinja|2 years ago
olliej|2 years ago
But one other problem that happens isn't necessarily "browser 1 fixed this configuration" and browser 2 copied them. It can be (and often was) "browser 1 has a bug that means this broken configuration works", and now for compatibility browser 2 implements the same behaviour. Then browser 1 finds the bug, goes to fix it, and discovers that there are sites that depend on it - and since it also works in other browsers, even if it started off as a bug they can no longer fix it.
That's why there's an increasing amount of work involved in trying to ensure new specifications are free of ambiguities before they're actually turned on by default. Even now, though, you still have places where the specification has gaps or ambiguities that allow different behaviour (which these days is considered a specification bug), and people will go "whatever IE/Chrome does is correct". Even if other browsers agree, it's super easy for a developer to say "the most common browser is definitionally the correct implementation".
Back when I worked on engines and in committees I probably spent, cumulatively, more than a year doing nothing but going through specification gaps, working out what behaviour was _required_ to ensure sufficiently compatible behaviour between different browsers. I spent months on key events and key codes alone, trying to work out which events need to be sent, which key codes, how IM/IME (input method [editor], the mechanism used for non-Latin text) systems interact with it, etc. As part of this I added the ability to create IMEs in JavaScript to the WebKit test infrastructure, because otherwise it was super easy to break random IMEs - they all behave completely differently in response to single key presses.
samus|2 years ago
It's the same reason why browsers must be able to robustly digest HTML5 tagsoup instead of just blanking out, which is how a conforming XML processor would have to react.
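A quick illustration using Python's standard library (whose `html.parser` is not an HTML5-conformant parser, but is similarly tolerant): the XML processor must reject the malformed input outright, while the HTML parser happily extracts what it can.

```python
from html.parser import HTMLParser
import xml.etree.ElementTree as ET

soup = "<p>unclosed <b>tags <i>everywhere</p>"

# A conforming XML processor must reject the document outright...
try:
    ET.fromstring(soup)
    xml_ok = True
except ET.ParseError:
    xml_ok = False

# ...while an HTML parser is expected to make sense of it anyway.
class TagCollector(HTMLParser):
    def __init__(self):
        super().__init__()
        self.tags = []

    def handle_starttag(self, tag, attrs):
        self.tags.append(tag)

collector = TagCollector()
collector.feed(soup)
print(xml_ok, collector.tags)  # False ['p', 'b', 'i']
```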
stefan_|2 years ago
I remember that OpenSSL also validates certificate chains containing duplicates, despite that obviously breaking the chain property. That's wasteful, but also very annoying, because TLS libraries like BearSSL don't (I guess you could hack around it by remembering the previous hash and staying fixed-space).
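The fixed-space hack alluded to might look something like this (a sketch with stand-in byte strings, not BearSSL's API): remember only the hash of the previous certificate and skip adjacent repeats, instead of keeping a set of every certificate seen.

```python
import hashlib

def skip_adjacent_duplicates(chain_der):
    """Yield certs in wire order, dropping adjacent duplicates.

    Storing only the previous cert's hash keeps memory use fixed,
    unlike keeping a set of every certificate seen so far.
    """
    prev = None
    for der in chain_der:
        digest = hashlib.sha256(der).digest()
        if digest == prev:
            continue  # same cert sent twice in a row: ignore the repeat
        prev = digest
        yield der

# Stand-in byte strings instead of real DER-encoded certificates:
chain = [b"leaf", b"intermediate", b"intermediate", b"root"]
cleaned = [c.decode() for c in skip_adjacent_duplicates(chain)]
print(cleaned)  # ['leaf', 'intermediate', 'root']
```

This only catches duplicates that appear back-to-back, which is the common misconfiguration; detecting arbitrary repeats would require non-constant space again.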
tialaramex|2 years ago
In practice, other than the position of the end entity's certificate, the "chain" is just a set of documents which might aid your client in verifying that this end entity certificate is OK. If you receive, in addition to the end entity certificate, certs A, B, C and D, it's completely fine if certificate D has expired, certificate B is malformed, and certificate A doesn't relate to this end-entity certificate at all - as far as you're concerned, all that matters is whether you're able (perhaps with the aid of C) to conclude that yes, this is the right end entity and it's a trustworthy certificate.
Insisting on a chain imagines that the Web PKI's trust graph is a DAG and it is not. So since the trust graph we're excerpting has cycles and is generally a complete mess we need to accept that we can't necessarily turn a section of that graph (if it even was one graph which it isn't, each client possibly has a slightly different trust set) into a chain.
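Under that view, the client's job is closer to "find one usable helper in the pool" than "verify the presented sequence". A toy single-hop sketch (hypothetical names; malformed entries modeled as `None`; this mirrors the tolerant behaviour described above, not any particular library):

```python
import time

def usable(cert, now):
    # A malformed (None) or expired entry is simply not useful to us.
    return cert is not None and cert["not_after"] > now

def find_issuer(leaf, pool, trusted, now):
    """Return a usable cert from `pool` linking `leaf` to a trusted root."""
    for cert in pool:
        if (usable(cert, now)
                and cert["subject"] == leaf["issuer"]
                and cert["issuer"] in trusted):
            return cert
    return None

now = time.time()
pool = [
    None,                                                                 # B: malformed
    {"subject": "Other CA",   "issuer": "Root", "not_after": now + 1e6},  # A: unrelated
    {"subject": "Issuing CA", "issuer": "Root", "not_after": now + 1e6},  # C: the useful one
    {"subject": "Old CA",     "issuer": "Root", "not_after": now - 1},    # D: expired
]
leaf = {"subject": "example.com", "issuer": "Issuing CA", "not_after": now + 1e6}

hit = find_issuer(leaf, pool, {"Root"}, now)
print(hit["subject"])  # Issuing CA
```

The garbage entries A, B, and D don't fail the connection; they're just never selected.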
kevingadd|2 years ago
unknown|2 years ago
[deleted]
gregmac|2 years ago
Browsers have a long history of accepting bad data, including malformed headers, invalid HTML, and maintaining workarounds for long-since-fixed bugs. This isn't really that different.
samus|2 years ago
If it actually is, I am ready to eat my words, but the actual blame would be on the webserver developers then. Default settings should be boring, but secure; advanced configuration should be approachable; and dangerous settings should require the admin to jump through hoops.