top | item 38906426

dochne | 2 years ago

It still remains a mystery to me why browsers felt they should "fix" this server misconfiguration.

It's particularly vexing to me, as the main reason people end up with misconfigured servers at all is that after they've configured their new cert (incorrectly), their web browser gives them a tick and they think they've done it right - after all, why wouldn't they?

kevincox|2 years ago

A common way these things happen is that one browser does it, and then if the others don't copy it they appear "broken" to users.

IDK what happened in this case, but it's pretty easy to imagine Chrome accidentally allowing validation against certificates in its local cache. Maybe it added some sort of validation cache to avoid rechecking revocation lists and OCSP or similar, and it would use intermediates cached from other sites. Then people tested their site in Chrome and it seemed to work, and now Firefox seems broken if it doesn't support this. So they decided to implement it too, but do something more robust by preloading a fixed list rather than relying on whatever happens to be in the cache.
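A toy sketch of the kind of opportunistic intermediate cache described above (all names are invented for illustration): intermediates seen while visiting correctly configured sites are remembered by subject name, then consulted when a later site serves an incomplete chain.

```python
# Hypothetical sketch of an opportunistic intermediate cache (names invented).
# Intermediates seen on any site are remembered by subject name; when a later
# site serves an incomplete chain, the cache fills the gap, which is exactly
# what makes the misconfiguration invisible to anyone testing in that browser.
class IntermediateCache:
    def __init__(self):
        self._by_subject = {}

    def remember(self, subject, cert_der):
        # Record an intermediate certificate seen on some other site.
        self._by_subject[subject] = cert_der

    def find_issuer(self, issuer_name):
        # During path building: look up a cached cert for the missing issuer.
        return self._by_subject.get(issuer_name)

cache = IntermediateCache()
# Visiting a well-configured site teaches the cache the intermediate:
cache.remember("Example Intermediate CA", b"<der bytes>")
# Later, a misconfigured site sends only its leaf; the cache "fixes" it:
print(cache.find_issuer("Example Intermediate CA") is not None)  # True
```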

Basically no browser wants to be the first to stop supporting this hack.

hannob|2 years ago

This is ultimately an application of the "robustness principle", or Postel's law, which was how people built stuff in the early Internet.

Plenty of people believe these days that this was never a wise guideline to begin with (see https://www.ietf.org/archive/id/draft-iab-protocol-maintenan... which unfortunately never made it to an RFC). However, one of the problems is that once you start accepting misconfigurations, it's hard to change your defaults.

chowells|2 years ago

It's Postel's Law being bad advice yet again. No, you should not be liberal in what you accept, because being liberal in what you accept causes even more malformed data to appear in the ecosystem.

drdaeman|2 years ago

That battle is long lost.

For me the revelatory moment was in the mid-00s, when everyone screamed anathema at XHTML, saying it was bad because it required people to write well-formed documents, when everyone just wanted to slap down random tags and somehow have that steaming mess still work.

There must be some sort of law that says that in tech the crudest pile of hacks wins over any formally elegant solution every single time those hacks let one do something that would otherwise require extra effort, even if it works only by the wildest chance.

TedDoesntTalk|2 years ago

> bad advice ... being liberal in what you accept causes even more malformed data to appear in the ecosystem.

This is one perspective. Another is to be robust and resilient. Resiliency is a hallmark of good engineering. I get the sense you have not worked on server-side software that has thousands or millions of different clients.

sleevi|2 years ago

Because it wasn’t actually a server misconfiguration, nor was it, as others have speculated, about Postel’s Law.

The way X.509 was designed - from the very first version - was around the notion that you have your set of CAs you trust, I have my set, and they're different. Instead of using The Directory to resolve the path from your cert to someone I trust, PKIX (RFC 2459 et al.) defined AIA.

So the intent here was that there’s no “one right chain to rule them all”: there’s _your_ chain to your root, _my_ chain to my root, all for the same cert, using cross-certificates.

Browsers adopted X.509 before PKIX existed, and they assumed just enough of the model to get things to work. The standards were developed after, and the major vendors didn't all update their code to match them. Microsoft, Sun, and many government-focused customers did (and used the NIST PKITS tests to prove it); Netscape/later Mozilla and OpenSSL did not: they kept their existing "works for me" implementations.

https://medium.com/@sleevi_/path-building-vs-path-verifying-... discusses this a bit more. In modern times, the TLS RFCs better reflect that there's no "one right chain to rule them all". Even if you or I aren't running our own roots that we use to cross-sign CAs we trust, we still have different browsers/trust stores taking different paths, and even in the same browser, different versions of the trust store necessitating different intermediates.
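A toy sketch of that path-building model (all names invented): certificates are reduced to subject→issuer edges, cross-certificates give one subject multiple issuers, and each client searches for a path to whatever roots *it* trusts - so two clients can validate the same leaf via different, equally valid chains.

```python
# Toy model: a certificate is just a subject -> issuers edge. Cross-signing
# means one subject can be certified by several issuers, so the "chain"
# depends entirely on the client's trust store. All names are invented.
CERTS = {
    "leaf.example":   {"Intermediate X"},
    "Intermediate X": {"Root A", "Root B"},  # cross-signed by two roots
}

def build_path(subject, trusted_roots, path=None):
    """Depth-first search for any path from `subject` to a trusted root."""
    path = (path or []) + [subject]
    if subject in trusted_roots:
        return path
    for issuer in CERTS.get(subject, ()):
        found = build_path(issuer, trusted_roots, path)
        if found:
            return found
    return None  # nothing this client trusts is reachable

# Two clients with different trust stores get different, equally valid chains:
print(build_path("leaf.example", {"Root A"}))
# ['leaf.example', 'Intermediate X', 'Root A']
print(build_path("leaf.example", {"Root B"}))
# ['leaf.example', 'Intermediate X', 'Root B']
```

Real path building (as the linked post discusses) also has to cope with cycles, expiry, and policy constraints, but the core point survives even in this sketch: neither chain is "the" chain.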

TLS has no way of negotiating what the _client’s_ trust store is in a performant, privacy-preserving way. https://datatracker.ietf.org/doc/draft-kampanakis-tls-scas-l... or https://datatracker.ietf.org/doc/draft-davidben-tls-merkle-t... are explorations of the problem space above, though: how to have the server understand what the client will trust, so it can send the right certificate (… and omit the chain)

fsckboy|2 years ago

TLS implementations for Linux IMAP email back in the day would fall back to sending unencrypted credentials if the TLS handshake was unsuccessful. Not sure if that was somebody's Postelian interpretation or just the spec. We had to actually block the unencrypted ports in the firewall, because there was no way to tell from the client side whether you had been silently downgraded to in-the-clear or not.
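With the modern Python stdlib, the client-side equivalent of blocking those ports is to use implicit TLS on port 993 with a verifying context, so there is no plaintext state to fall back to. A sketch (`connect_imap_strict` and `make_strict_context` are invented helper names):

```python
import imaplib
import ssl

def make_strict_context() -> ssl.SSLContext:
    # create_default_context() requires a valid chain and a hostname match.
    # Refusing to weaken it is the client-side analogue of firewalling the
    # plaintext ports: there is no way to end up talking in the clear.
    return ssl.create_default_context()

def connect_imap_strict(host: str) -> imaplib.IMAP4_SSL:
    # Implicit TLS on port 993 never starts in plaintext, so there is no
    # handshake-failure path that can silently downgrade the connection.
    return imaplib.IMAP4_SSL(host, 993, ssl_context=make_strict_context())
```

By contrast, a STARTTLS-style flow begins in plaintext and upgrades, which is exactly where a "be liberal" implementation can quietly keep going unencrypted.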

CableNinja|2 years ago

In my hosting days, we relied on the SSL checker that SSL Shopper has. The browser was never considered a valid test for us; it was the final check, but a proper SSL checker was the real test.

olliej|2 years ago

As @kevincox says there's a problem where if one browser does it, then users complain a site "works in this-generations-IE" forcing the other browsers to duplicate the behaviour.

But the other problem that happens isn't necessarily that "browser 1 fixed this configuration" and browser 2 copied them. It can be (and often was) that "browser 1 has a bug that means this broken configuration works", and now for compatibility browser 2 implements the same behaviour. Then browser 1 finds the bug, goes to fix it, and discovers that there are sites that depend on it, and that it also works in other browsers, so even though it started off as a bug, they can no longer fix it.

That's why there's an increasing amount of work put into ensuring new specifications are free of ambiguities before they're actually turned on by default. Even now, though, where a spec has a gap or ambiguity that allows different behaviour (which these days is considered a specification bug), people will go "whatever IE/Chrome does is correct". Even if other browsers agree among themselves, it's super easy for a developer to declare the most common browser definitionally the correct implementation.

Back when I worked on engines and in committees, I probably spent cumulatively more than a year doing nothing but going through specification gaps, working out what behaviour was _required_ to ensure sufficiently compatible behaviour between different browsers. I spent months on key events and key codes alone, trying to work out which events need to be sent, which key codes, how IME (input method editor, the mechanism used for non-Latin text) systems interact with it, etc. As part of this I added the ability to create IMEs in JavaScript to the WebKit test infrastructure, because otherwise it was super easy to break random IMEs: they all behave completely differently in response to a single key press.

samus|2 years ago

It's very difficult in practice to shift the blame to the website. Even though the browser would be right in refusing the connection, the net effect is that the user would just use another browser to access that website. The proper workaround (Firefox shipping intermediate certificates) doesn't actually damage security; it just means more work for the maintainers. That's a fair tradeoff for achieving more market share.

It's the same reason why browsers must be able to robustly digest HTML5 tag soup instead of just blanking out, which is how a conforming XML processor would have to react.

stefan_|2 years ago

Do browsers actually do this, or is this another OpenSSL Easter egg we all have to live with?

I remember that OpenSSL also validates certificate chains containing duplicates, despite that obviously breaking the chain property. That's wasteful, but also very annoying, because TLS libraries like BearSSL don't accept them (I guess you could hack around it by remembering the previous hash and staying fixed-space).
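The fixed-space workaround hinted at above could look like this (a sketch, not any library's actual code): hash each certificate as it streams past, remember only the previous element's hash, and drop a certificate that byte-for-byte repeats its immediate predecessor.

```python
import hashlib

def drop_adjacent_duplicates(chain):
    """Filter certs that immediately repeat their predecessor, using O(1)
    state: only the previous element's hash is kept, as a streaming TLS
    parser with fixed memory could do."""
    prev = None
    out = []
    for der in chain:
        digest = hashlib.sha256(der).digest()
        if digest != prev:
            out.append(der)
        prev = digest
    return out

# Stand-in byte strings for DER-encoded certificates:
chain = [b"leaf", b"intermediate", b"intermediate", b"root"]
print(drop_adjacent_duplicates(chain))  # [b'leaf', b'intermediate', b'root']
```

Note this only handles back-to-back repeats; deduplicating arbitrary positions would need a set of all hashes seen, which is no longer fixed space.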

tialaramex|2 years ago

The chain "property" was never enforced anywhere of consequence, and it is gone in TLS 1.3.

In practice, other than the position of the end entity's certificate, the "chain" is just a set of documents which might aid your client in verifying that the end entity certificate is OK. If, in addition to the end entity certificate, you receive certs A, B, C, and D, it's completely fine as far as you're concerned if certificate D has expired, certificate B is malformed, and certificate A doesn't relate to this end entity certificate at all - so long as you're able (perhaps with the aid of C) to conclude that yes, this is the right end entity and it's a trustworthy certificate.

Insisting on a chain imagines that the Web PKI's trust graph is a DAG, and it is not. Since the trust graph we're excerpting has cycles and is generally a complete mess (and it isn't even one graph: each client possibly has a slightly different trust set), we need to accept that we can't necessarily turn a section of it into a chain.

kevingadd|2 years ago

Maybe in the modem days a smaller certificate payload was considered ideal for connection overhead?

gregmac|2 years ago

It wasn't long ago when TLS was not the norm and many, many sites were served over plain HTTP, even when they accepted logins or contained other sensitive data. There's a good chance this decision was a trade-off to make TLS simpler to get working in order to get more sites using it.

Browsers have a long history of accepting bad data, including malformed headers, invalid HTML, and maintaining workarounds for long-since-fixed bugs. This isn't really that different.

samus|2 years ago

Really? You receive two files from your CA: one of them is the leaf, the other is the chain. You just have to upload the latter (not the former) into the server's config directory. That doesn't sound that hard.

If it actually is, I'm ready to eat my words, but then the actual blame would be on the webserver developers. Default settings should be boring but secure; advanced configuration should be approachable; and dangerous settings should require the admin to jump through hoops.