top | item 20469157

spectre256 | 6 years ago

It's definitely worth repeating the warning that, while very useful, Strict-Transport-Security should be deployed with special care!

While the author's example of `max-age=3600` means there's only an hour of potential problems, enabling Strict-Transport-Security has the potential to prevent people from accessing your site if for whatever reason you are no longer able to serve HTTPS traffic.

Considering that another common setting is to enable HSTS for a year, it's worth enabling only deliberately and with some thought.
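The policy being discussed is a single response header; here is a minimal sketch of what it looks like and how its `max-age` directive is read. (Illustrative only, not a full RFC 6797 parser.)

```python
def parse_hsts_max_age(header_value: str) -> int:
    """Return max-age (seconds) from a Strict-Transport-Security value."""
    for directive in header_value.split(";"):
        name, _, value = directive.strip().partition("=")
        if name.lower() == "max-age":
            return int(value.strip('"'))
    raise ValueError("no max-age directive found")

# The article's cautious one-hour policy:
print(parse_hsts_max_age("max-age=3600"))                         # 3600
# A common (and much riskier) one-year policy:
print(parse_hsts_max_age("max-age=31536000; includeSubDomains"))  # 31536000
```

The browser remembers the policy for `max-age` seconds from the last response it saw, which is why a long value is effectively irreversible for returning visitors.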


txcwpalpha|6 years ago

Unless your site is nothing but a dumb billboard serving nothing but static assets (and maybe even then...), the inability to serve HTTPS traffic should be considered a breaking issue and you shouldn't be serving anything until your HTTPS is restored. "Reduced security" is not a valid fallback option.

That might not be something that a company's management team wants to hear, but indicating to your users that falling back to insecure HTTP is just something that happens sometimes and they should continue using your site is one of the worst things you can possibly do in terms of security.

bityard|6 years ago

Here's a real example of how HSTS can break a site: My personal, non-public wiki is secured by HTTPS with a certificate valid for 5 years. I thought it would be neat to enable HSTS for it because what could go wrong?

Well, just last week the HTTPS certificate expired in the middle of the day. I had about a half day's worth of work typed up in the browser's text field, and when I clicked "submit", all of my work vanished and Firefox only showed a page stating that the certificate was invalid and that nothing could be done about it. I clicked the back button, same thing. Forward button, same thing. A half day's worth of work vanished into thin air.

Is this my fault for letting the certificate expire? Absolutely. Should I have used Let's Encrypt so I didn't have to worry about it? Sure. Should I be using a notes system that doesn't throw away my work when there's a problem saving it? Definitely. I don't deny that there's lots I could have done to prevent this from being a problem and lots that I need to fix in the future.

But it does point out that if you use HSTS, you have to be _really_ sure that _all_ your ducks are in a row or it _will_ come back to bite you eventually.

ergothus|6 years ago

> the inability to serve HTTPS traffic should be considered a breaking issue

> "Reduced security" is not a valid fallback option.

Agreed! But if my HTTPS is broken, I might well want to replace my site with an HTTP page explaining that we'll be back soon. If that is impossible until the max-age expires, that can lead to an awkward explanation to the higher-ups.

gorkish|6 years ago

Everything you said is true, but it does not provide any reasonable argument that HSTS as it is designed and implemented is a valid way to enforce this. The potential for malicious or accidental misuse to cause an effectively immediate and irreversible domain-wide DoS is simply too great. I am quite surprised that the feature made it through planning and implementation to begin with.

idlewords|6 years ago

This is a silly and absolutist position to take on HTTP. Everything depends on context, and in many cases it is far better to serve things over open HTTP than go offline.

sjwright|6 years ago

Not being able to serve HTTPS is not a real concern. It seems possible but in reality it simply won’t happen. If it ever does break, you fix it, you don’t change protocols.

Once you go HTTPS you’re all in regardless whether or not you’ve set HSTS headers. Let’s say your HTTPS certificate fails and you can’t get it replaced. So what, you’re going to temporarily move back to HTTP for a few days? Not going to happen! Everyone has already bookmarked/linked/shared/crawled your HTTPS URLs. There is no automated way to downgrade people to HTTP, so only the geeks who would even think to try removing the “s” will be able to visit. And most geeks won’t even do that because we’ve probably never encountered a situation where that has ever helped.

SquareWheel|6 years ago

It happened to me. I served my site over TLS and used HSTS. Hosting got expensive, so I rebuilt my site on GitHub Pages and hosted there. It was another year and a half before they rolled out HTTPS for custom domains.

In that case, old visitors were rejected due to the policy. I wish I had set a lower duration.

Someone1234|6 years ago

It is a good point.

I would like to add that a lot of web apps break if they aren't served over HTTPS anyway, due to the Secure flag being set on cookies. For example, if we run ours over HTTP (even for development), it will successfully set the cookie (+Secure +HttpOnly) but cannot read it back, and you get stuck on the login page indefinitely.

So we just set ours to a year, and consider HTTPS to be a mission critical tier component. It goes down the site is simply "down."

HSTS is kind of the "secret sauce" that gives developers coverage to mandate Secure cookies only. Before then we'd get caught in "what if" bikeshedding[0].

[0] https://en.wiktionary.org/wiki/bikeshedding
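A minimal sketch of the cookie behaviour described above: a cookie marked Secure is only ever sent back by the browser over HTTPS, so an app served over plain HTTP can set it but never read it back. (Illustrative only; the enforcement happens in the browser, not in this code.)

```python
from http.cookies import SimpleCookie

cookie = SimpleCookie()
cookie["session"] = "abc123"
cookie["session"]["secure"] = True    # browser withholds it on http:// requests
cookie["session"]["httponly"] = True  # JavaScript cannot read it either

# The Set-Cookie header the server would emit:
header = cookie.output(header="Set-Cookie:")
print(header)  # e.g. Set-Cookie: session=abc123; HttpOnly; Secure
```

With HSTS in place, the "what if we need HTTP" objection to Secure-only cookies largely disappears, which is the coverage the comment above refers to.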

pimterry|6 years ago

The only risk is if you've served HTTPS traffic properly with HSTS headers to users, and then your server is later unable to correctly handle HTTPS traffic. Note that HSTS headers on a non-HTTPS response are ignored.

Whilst there are cases where you might fail to serve HTTPS traffic temporarily (e.g. if your cert expires and you don't handle it), almost all HTTPS problems are quick fixes, and are probably your #1 priority regardless of HSTS. If your HTTPS setup is broken and your application has any real security concerns at all, then it's arguably better to be inaccessible than to quietly allow insecure traffic in the meantime, exposing all your users' traffic. I don't know many good reasons you'd suddenly want to go from HTTPS back to only supporting plain HTTP either. I just can't see any realistic scenarios where HSTS causes you extra problems.
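The rule that HSTS headers on plain-HTTP responses are ignored can be sketched as a toy model of the browser-side processing (a hypothetical simplification of RFC 6797's rules, not real browser code):

```python
# Toy model: the set of hosts the "browser" has pinned to HTTPS.
hsts_hosts: set[str] = set()

def process_response(host: str, scheme: str, headers: dict[str, str]) -> None:
    """Record an HSTS policy only if it arrived over a secure connection."""
    if scheme != "https":
        return  # HSTS headers on insecure transport MUST be ignored
    if "Strict-Transport-Security" in headers:
        hsts_hosts.add(host)

# An attacker (or misconfiguration) sending the header over HTTP has no effect:
process_response("example.com", "http",
                 {"Strict-Transport-Security": "max-age=31536000"})
print("example.com" in hsts_hosts)  # False

# Only a genuine HTTPS response pins the host:
process_response("example.com", "https",
                 {"Strict-Transport-Security": "max-age=31536000"})
print("example.com" in hsts_hosts)  # True
```

This is why a site that has never successfully served HTTPS cannot be accidentally locked out by HSTS: the policy can only be set from a working HTTPS deployment.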

BCharlie|6 years ago

I think it's a good point which is why I set the time low, even though many other resources set it to a week or longer. I just don't like very long cache times for anything that can break, so that site owners have a little more flexibility in case something goes wrong down the line.

ehPReth|6 years ago

Speaking of HSTS... does anyone here know if Firebase Hosting (Google Cloud) plans to support custom HSTS headers with custom domains? I can't add things like includeSubDomains or preload at present, unfortunately.

mtgx|6 years ago

> if for whatever reason you are no longer able to serve HTTPS traffic

Isn't that how it should work? Would you rather use Gmail over HTTP if its HTTPS stopped working? Besides, just supporting HTTP fallback means you're much more vulnerable to downgrade attacks -- it's the first thing attackers will attempt to use.

zaarn|6 years ago

I set HSTS to 10 years. My infrastructure isn't even capable of serving HTTP, other than for Let's Encrypt challenges. An outage on HTTPS is a full outage. Most of my sites handle user data in some way, so HTTPS is mandatory anyway, as per my interpretation of the GDPR.

tialaramex|6 years ago

I don't get people who worry about _feature_ pinning like this.

I imagine them looking at a business continuity plan and being aghast - why are we spending money to manage the risk from a wildfire in California overwhelming our site there, yet we haven't spent ten times as much on a zombie werewolf defence grid or to protect against winged bears?

HSTS defends against a real problem that actually happens, like those Californian wildfires, whereas "whatever reason you are no longer able to serve HTTPS traffic" is a fantasy like the winged bears that you don't need to concern yourself with.