This is a good overview of the basic headers, but I suggest spending some time on Scott Helme's blog. He runs securityheaders.io, a free service that scans your site and assigns it a letter grade based on which headers and configurations you've applied.
For instance, his explanation of Content Security Policy headers is much more detailed than in the OP's link: https://scotthelme.co.uk/content-security-policy-an-introduc...
(Note: securityheaders.io is changing domains: https://scotthelme.co.uk/security-headers-is-changing-domain...)
It's definitely worth repeating the warning that, while very useful, Strict-Transport-Security should be deployed with special care!
While the author's example of `max-age=3600` means there's only an hour of potential problems, enabling Strict-Transport-Security has the potential to prevent people from accessing your site if for whatever reason you are no longer able to serve HTTPS traffic.
Considering another common setting is to enable HSTS for a year, it's worth enabling only deliberately and with some thought.
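For reference, the header itself is a single line; the post's one-hour example versus a typical long-lived policy (you'd send one or the other, not both):

```
Strict-Transport-Security: max-age=3600
Strict-Transport-Security: max-age=31536000; includeSubDomains; preload
```

Only add includeSubDomains and preload once you're sure every subdomain will serve HTTPS indefinitely; preloading in particular is effectively permanent.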
Unless your site is a dumb billboard serving nothing but static assets (and maybe even then...), the inability to serve HTTPS traffic should be considered a breaking issue, and you shouldn't be serving anything until your HTTPS is restored. "Reduced security" is not a valid fallback option.
That might not be something that a company's management team wants to hear, but indicating to your users that falling back to insecure HTTP is just something that happens sometimes and they should continue using your site is one of the worst things you can possibly do in terms of security.
Not being able to serve HTTPS is not a real concern. It seems possible but in reality it simply won’t happen. If it ever does break, you fix it, you don’t change protocols.
Once you go HTTPS you’re all in regardless whether or not you’ve set HSTS headers. Let’s say your HTTPS certificate fails and you can’t get it replaced. So what, you’re going to temporarily move back to HTTP for a few days? Not going to happen! Everyone has already bookmarked/linked/shared/crawled your HTTPS URLs. There is no automated way to downgrade people to HTTP, so only the geeks who would even think to try removing the “s” will be able to visit. And most geeks won’t even do that because we’ve probably never encountered a situation where that has ever helped.
I would like to add that a lot of web apps break if they aren't served over HTTPS anyway, due to the Secure flag being set on cookies. For example, if we run ours over HTTP (even for development), it will successfully set the cookie (+Secure +HttpOnly) but can't read it back, and you get stuck on the login page indefinitely.
So we just set ours to a year and consider HTTPS a mission-critical tier component. If it goes down, the site is simply "down."
HSTS is kind of the "secret sauce" that gives developers coverage to mandate Secure cookies only. Before then we'd get caught in "what if" bikeshedding[0].
[0] https://en.wiktionary.org/wiki/bikeshedding
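For context, those flags are just attributes on the Set-Cookie response header; the cookie name and value below are placeholders, and SameSite is a further optional hardening:

```
Set-Cookie: session=abc123; Secure; HttpOnly; SameSite=Lax; Path=/
```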
The only risk is if you've served HTTPS traffic properly with HSTS headers to users, and then your server is later unable to correctly handle HTTPS traffic. Note that HSTS headers on a non-HTTPS response are ignored.
Whilst there are cases where you might fail to serve HTTPS traffic temporarily (e.g. if your cert expires and you don't handle it), almost all HTTPS problems are quick fixes, and they are probably your #1 priority regardless of HSTS. If your HTTPS setup is broken and your application has any real security concerns at all, then it's arguably better to be inaccessible than to quietly allow insecure traffic in the meantime and expose all your users' traffic. I don't know many good reasons you'd suddenly want to go from HTTPS back to only supporting plain HTTP either. I just can't see any realistic scenarios where HSTS causes you extra problems.
I think it's a good point which is why I set the time low, even though many other resources set it to a week or longer. I just don't like very long cache times for anything that can break, so that site owners have a little more flexibility in case something goes wrong down the line.
Speaking of HSTS... does anyone here know if Firebase Hosting (Google Cloud) plans to support custom HSTS headers with custom domains? I can't add things like includeSubDomains or preload at present, unfortunately.
> if for whatever reason you are no longer able to serve HTTPS traffic
Isn't that how it should work? Would you rather use Gmail over HTTP if its HTTPS stopped working? Besides, just supporting HTTP fallback means you're much more vulnerable to downgrade attacks -- it's the first thing attackers will attempt to use.
I set HSTS to 10 years. My infrastructure isn't even capable of serving HTTP other than for LetsEncrypt certs. An outage on HTTPS is a full outage. Most of my sites handle user data in some way, so HTTPS is mandatory anyway, as per my interpretation of the GDPR.
I don't get people who worry about _feature_ pinning like this.
I imagine them looking at a business continuity plan and being aghast - why are we spending money to manage the risk from a wildfire in California overwhelming our site there, yet we haven't spent ten times as much on a zombie werewolf defence grid or to protect against winged bears?
HSTS defends against a real problem that actually happens, like those Californian wildfires, whereas "whatever reason you are no longer able to serve HTTPS traffic" is a fantasy like the winged bears that you don't need to concern yourself with.
Instead of X-Frame-Options one should use CSP's frame-ancestors directive, which has wider support among modern browsers. But CSP deserves more than one paragraph in general.
Also, for "Set-Cookie", the relatively new "SameSite"[2] directive would be a good addition for most sites.
Oh, and for CSP, check out Google's evaluator[3].
[1] https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Re...
[2] https://www.owasp.org/index.php/SameSite
[3] https://csp-evaluator.withgoogle.com
He also missed Expect-Staple and Expect-CT. In addition, most security headers have the option to specify a URI where failures are reported, which is very important in production environments.
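For example, both CSP and Expect-CT can report violations to an endpoint you run (the reporting URLs below are placeholders):

```
Content-Security-Policy: default-src 'self'; report-uri https://example.com/csp-reports
Expect-CT: max-age=86400, enforce, report-uri="https://example.com/ct-reports"
```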
Expect-CT is pretty marginal. In principle a browser could implement Certificate Transparency but only bother to enforce it if Expect-CT is present; in practice the policy ends up being that they'll enforce CT system-wide after some date. Setting Expect-CT doesn't have any effect on a browser that can't understand SCTs anyway, so that leaves basically no audience.
Furthermore, especially with Symantec out of the picture, there is no broad consumer market for certificates from the Web PKI which don't have SCTs. The audience of people who know they want a certificate is hugely tilted towards people with very limited grasp of what's going on, almost all of whom definitely need embedded SCTs or they're in for a bad surprise. So it doesn't even make sense to have a checkbox for "I don't want SCTs" because 99% of people who click it were just clicking boxes without understanding them and will subsequently complain that the certificate doesn't "work" because it didn't have any SCTs baked into it.
There are CAs that issue without logging, either for industrial applications that aren't built around a web browser (and so don't check SCTs) and are due to be retired before it'd make sense to upgrade them (most are gone in 2019 or 2020), or for specialist customers like Google whose servers are set up to fetch SCTs at the last moment, to be stapled on later. Neither is a product with a consumer audience, which means neither is a plausible source of certificates for your hypothetical security adversary.
As a result, in reality Expect-CT doesn't end up defending you against anything that's actually likely to happen, making it probably a waste of a few bytes.
That is true! I do set frame-ancestors in the sample CSP for this reason. I could probably do a dedicated post on CSP to do it justice, but don't want to overwhelm anyone who just wants to start setting headers.
One good reason to set both options, as I mention in the post, is that scanners who rate site security posture may penalize site owners who don't set both - no harm in doing it that I know of.
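Concretely, setting both looks like this (the strictest values, as an example; use SAMEORIGIN and 'self' instead if you frame your own pages, and in practice you'd add frame-ancestors to your existing policy rather than send a second CSP):

```
X-Frame-Options: DENY
Content-Security-Policy: frame-ancestors 'none'
```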
> X-frame-options is obsolete. Most browsers complain loudly on the console or ignore the header.
The deny option seems to work just fine. My default browser (Firefox) doesn't complain. MDN doesn't indicate any browsers have dropped support. Plus, dropping support would be an unnecessary, unforced security error, since it would make old sites insecure. Do you have a link to an example of a browser ignoring the header?
> Access-Control-Allow-Origin: http://www.one.site.com
> Access-Control-Allow-Origin: http://www.two.site.com
And in the examples, both are set. But in my experience you cannot set multiple [1]. Lots of people instead set it to *, which is both bad and restricts use of other request options (such as withCredentials). It looks like the current working solution is to use regexes to return the right domain [2], but I'm currently having trouble getting that to work, so if there's some better solution that works for people I'd love to hear it.
1. https://developer.mozilla.org/en-US/docs/Web/HTTP/CORS/Error... 2. https://stackoverflow.com/questions/1653308/access-control-a...
I think the problem people are running into with CORS is that their webserver was created before CORS was a thing, so it's tough to configure correctly. What you want to do is: if you allow the provided Origin, echo it back in Access-Control-Allow-Origin.
Envoy has a plugin to do this (envoy.cors), allowing you to configure allowed origins the way people want (["*.example.com", "foo.com"]) and then emitting the right headers when a request comes in. It also emits statistics on how many requests were allowed or denied, so you can monitor that your rules are working correctly. If you are using something else, I recommend just having your web application do the logic and supply the right headers. (You should also be prepared to handle an OPTIONS request for CORS preflight.)
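If you're on nginx rather than Envoy, a rough sketch of the same echo-the-allowed-Origin approach (the origins shown are placeholders, and this leaves out plenty of hardening):

```
# in the http{} block: map allowed Origins to themselves, everything else to ""
map $http_origin $cors_origin {
    default                             "";
    "~^https://www\.one\.site\.com$"    $http_origin;
    "~^https://www\.two\.site\.com$"    $http_origin;
}

# in the relevant location{} block
add_header Access-Control-Allow-Origin $cors_origin always;
add_header Vary "Origin" always;

if ($request_method = OPTIONS) {
    # add_header isn't inherited once any add_header appears at this level,
    # so repeat the Origin header for the preflight response
    add_header Access-Control-Allow-Origin  $cors_origin always;
    add_header Access-Control-Allow-Methods "GET, POST, OPTIONS" always;
    add_header Access-Control-Allow-Headers "Content-Type, Authorization" always;
    return 204;
}
```

Because nginx skips add_header directives whose value is empty, disallowed origins simply get no CORS headers at all.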
You are right on this - I thought you could set multiple sites by setting multiple headers, but it doesn't work that way, which I should have known because headers don't work that way in general...
The recommended way to do multiple sites seems to be to have the server read the request's Origin header, check it against a whitelist, then dynamically respond with it, which seems terrible.
Thanks for catching this - I updated the post to reflect this and make it more clear.
If anyone's interested, I wrote a guide a while ago on adding these headers via Cloudflare Workers, which can be helpful if you're hosting a static site on S3, GitHub Pages, etc. where you can't add these headers directly: https://jarv.is/notes/security-headers-cloudflare-workers/
The nginx header directives are all in incorrect syntax with the extra ":", and directives with multiple values should be wrapped in quotes (such as "1; mode=block"). Here are corrected examples:
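(The header values below are only illustrative; the point is the quoting and the absence of a colon after the header name.)

```
add_header X-Frame-Options "DENY" always;
add_header X-XSS-Protection "1; mode=block" always;
add_header X-Content-Type-Options "nosniff" always;
add_header Referrer-Policy "no-referrer-when-downgrade" always;
add_header Strict-Transport-Security "max-age=3600; includeSubDomains" always;
add_header Content-Security-Policy "default-src 'self'" always;
```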
The X-XSS-Protection header recommendation is a Zombie recommendation which is at best outdated and at worst harmful. Its origins are based on old IE bugs but it introduces worse issues.
IMHO, the best value for X-XSS-Protection is either 0 (disabling it completely like Facebook does) or not providing the value at all and just letting the client browser use its default. Why?
First, XSS 'protection' is on its way out of most browsers. Google has decided to deprecate Chrome's XSS Auditor[0] and stop supporting XSS 'protection'. Microsoft has already removed its XSS filter from Edge[1]. Mozilla never bothered to support it in Firefox.
So most leading net companies already think it doesn't work. Safari of course supports the much stronger CSP. So it's only possibly useful on IE - if you don't support IE, might as well save the bytes.
Second, XSS 'protection' protects less than one might think. In all implementing browsers, it has always been implemented as part of the HTML parser, making it useless against DOM-based attacks (and strictly inferior to CSP)[2].
Worse, XSS 'protection' can be used to create security flaws. IE's default is to detect XSS and try to filter it out; this has been known to be buggy to the point of creating XSS on safe pages[3], which is why the typical recommendation has been the block behaviour. But blocking has itself been exploited in the past[4], and it has side-channel leaks that even Google considers too difficult to catch[0], to the point of preferring to remove XSS 'protection' altogether. Blocking also has an obvious social exploitation which can create attacks or make attacks more serious.[5]
In short, the best idea is to get rid of browsers' XSS 'protection' ASAP in favour of CSP, preferably by having all browsers deprecate it. This is happening anyway, so might as well save the bytes. But if you do provide the header, I suggest disabling XSS 'protection' altogether.
[0] https://groups.google.com/a/chromium.org/forum/#!msg/blink-d...
[1] https://developer.microsoft.com/en-us/microsoft-edge/platfor...
[2] e.g. https://github.com/WebKit/webkit/blob/d70365e65de64b8f6eaf1f...
[3] CVE-2014-6328, CVE-2015-6164, CVE-2016-3212...
[4] https://portswigger.net/blog/abusing-chromes-xss-auditor-to-...
[5] Assume that an attacker has enough access to normally allow XSS. If he does not, the filter is useless. If he does, the attacker can by definition trigger the filter. So trigger the filter, make a webpage be blocked, and call the affected user as "support". From there the exploitation is obvious, and can be much worse than mere XSS. Now, remember that all those XSS filters in all likelihood have false positives, which may not be blocked by other defences because they're not attacks. So it's quite possible the filter introduces a social attack that wouldn't be possible otherwise!
Hattip: https://frederik-braun.com/xssauditor-bad.html which gave me even more reasons to think browsers' XSS 'protection' is awful. I didn't know about [2] before reading his entry.
The author recommends either changing the default behaviour to block or disabling the filter altogether. I believe experience has shown this protection method cannot be fixed.
Ultimately, safe code is code that can be reasoned about, but there never was any specification for this 'feature'. By comparison, CSP has a strict specification. It covers more attacks and has a better failure mode than the XSS filter's choice between filtering and blocking the entire page load.
Related specs and drafts on signing HTTP messages:
* [HTTP Signatures](https://tools.ietf.org/id/draft-cavage-http-signatures-01.ht...)
* [draft-cavage-http-signatures-10 - Signing HTTP Messages](https://tools.ietf.org/html/draft-cavage-http-signatures-10)
* https://www.rfc-editor.org/rfc/rfc4686.txt
* https://www.rfc-editor.org/rfc/rfc3335.txt
I will include it in my newsletter[0] next Monday if you don't mind.
---
[0]: https://betterdev.link