temac | 1 year ago
Why inadequate (in the absolute)? This can be automated, and Let's Encrypt allows verification through DNS; moreover, DNS verification is the only method that works for wildcard certificates.
Now, in this particular case maybe they should have gone through HTTP, and even automated it with ACME. But there is nothing inherently inadequate about DNS verification. Besides allowing wildcards, it also allows verification when you don't control the web server(s), when you don't have a web server at all, when the standard ports are occupied by something else, etc.
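(As an illustration of the DNS verification being discussed: in ACME's dns-01 challenge, per RFC 8555, the client publishes a TXT record at `_acme-challenge.<domain>` containing the SHA-256 hash of the "key authorization". A minimal sketch of computing that value, assuming the account JWK dict contains only the required members for the RFC 7638 thumbprint:)

```python
import base64
import hashlib
import json

def b64url(data: bytes) -> str:
    # base64url without padding, as used throughout ACME/JOSE
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def dns01_txt_value(token: str, account_jwk: dict) -> str:
    # RFC 7638 JWK thumbprint: SHA-256 over the canonical JSON of the key.
    # Simplification: assumes account_jwk already holds only the required members.
    canonical = json.dumps(account_jwk, sort_keys=True, separators=(",", ":"))
    thumbprint = b64url(hashlib.sha256(canonical.encode()).digest())
    # Key authorization = token "." thumbprint; the TXT record holds its SHA-256
    key_authorization = f"{token}.{thumbprint}"
    return b64url(hashlib.sha256(key_authorization.encode()).digest())

# Hypothetical example values, for illustration only:
jwk = {"crv": "P-256", "kty": "EC", "x": "example-x", "y": "example-y"}
print(dns01_txt_value("example-token", jwk))
```

The CA then queries `_acme-challenge.<domain>` TXT and checks that the published value matches, which is why the challenge works without any web server at all.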
Hizonner | 1 year ago
The point of X.509 certificates is that you can't rely on information you get either from the DNS or from the HTTP server. If you could, you wouldn't need the whole mess in the first place.
Sure, the verification helps, because you have to successfully fool both the client and the CA. But if you can fool one, there's a strong chance that you can fool the other. In the end, the CA is still relying on exactly the same information that the client isn't supposed to have to rely on.
The original idea behind X.509 was that verification would be "out of band", but that turned out to be expensive and non-scalable, so the X.509 world, including the CA/Browser Forum, resorted to this very weak kind of verification. They try to backstop it with stuff like Certificate Transparency, but that's just adding epicycles that aren't particularly reassuring.
If everybody used DNSSEC, then DNS-based verification would be OK. But at that point you ought to just distribute key hashes through the DNS and dispense with X.509 entirely. That's actually what should have happened, and probably what would have happened if X.509 hadn't still been such a cash cow at the times when the various standards solidified beyond all chance of improvement. Because of that "cash cow" status, there was a lot of obvious sabotage aimed at entrenching X.509 and fighting any attempt to improve the situation. And now we're stuck with it.
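(The "distribute key hashes through the DNS" idea the parent describes exists as DANE/TLSA, RFC 6698: a DNSSEC-signed TLSA record carries a hash of the server's certificate or public key, bypassing CAs entirely. A minimal sketch of building the record data for the common "3 0 1" form, i.e. end-entity cert, full certificate, SHA-256; the input here is placeholder bytes standing in for real DER:)

```python
import hashlib

def tlsa_rdata(cert_der: bytes, usage: int = 3, selector: int = 0,
               matching_type: int = 1) -> str:
    # RFC 6698 TLSA presentation format: usage selector matching-type cert-data.
    # usage 3 = DANE-EE (trust this cert directly, no CA chain needed),
    # selector 0 = hash the full certificate, matching type 1 = SHA-256.
    digest = hashlib.sha256(cert_der).hexdigest()
    return f"{usage} {selector} {matching_type} {digest}"

# Placeholder bytes, not a real certificate:
print(tlsa_rdata(b"example DER-encoded certificate"))
```

A client validating such a record (published at e.g. `_443._tcp.example.com`) would hash the certificate presented in the TLS handshake the same way and compare; the record's integrity rests entirely on DNSSEC, which is exactly the dependency the parent is pointing out.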
schoen | 1 year ago