This is pretty great. I guess he gave this talk at HOPE, but it's laser scoped to startups, down to the order in which he gives the advice:
* Enable HSTS
* Don't link to HTTP:// javascript resources from HTTPS pages
* Set the secure flag on cookies
Very few of the sites we test enable HSTS. But it's easy to do; it's just an extra header you set.
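To make "an extra header" concrete, here's roughly what the wire-level responses look like for the HSTS and secure-cookie items above (the max-age value and cookie name are illustrative, not recommendations):

```http
Strict-Transport-Security: max-age=31536000
Set-Cookie: session=abc123; Secure; HttpOnly
```

One caveat: browsers only honor the HSTS header when it arrives over HTTPS, so it has to be set on your TLS responses.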
The only quibble I might have is the fatalism he has about mixed-security Javascript links. I'd go further than he does: when you source Javascript from a third party, you have leased your users' security out to that third party. Don't like the way that sounds? Doesn't matter: it's a fact. Companies should radically scale back the number of third parties that they allow to "bug" their pages.
Another technology to start preparing for is TACK. It allows you, the server owner, to control browser pinning of your certs while maintaining CA mobility. This gives you the control over your security that Google has over Gmail via Chrome cert pinning without having to issue a new browser build every time you change CAs.
One way to think of it is like a domain transfer lock but with cryptography. You control how you unlock your pin to allow mobility to a new CA by sticking a signed file on your SSL server.
A solution going forward for containing third-party JavaScript is the HTML5 iframe sandbox attribute, which lets you declare a whitelist of permissions that third-party code should be granted. Only about 40% of browsers support this feature [1]. In unsupported browsers, the external JavaScript continues working without the security guarantees, so it's no worse than the situation now.
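A minimal sketch of what that looks like (the widget URL is a made-up example; an empty sandbox attribute denies everything, and each token grants one capability back):

```html
<!-- Third-party widget: may run scripts, but gets a unique origin
     (no cookies/DOM of the embedding page) and can't navigate the
     top-level page, because those tokens are omitted. -->
<iframe src="https://widget.thirdparty.example/widget.html"
        sandbox="allow-scripts"></iframe>
```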
It's actually kind of a pain to enable HSTS because it makes you fix all the places where you're downgrading to HTTP. You should definitely do it if you care whether your users' sessions get hijacked, but it's not _just_ flipping a switch.
Using SSL properly is not particularly difficult in theory, but there are many moving pieces, so the whole thing ends up being hard. For example, it's easy to forget a crucial step. To address this, I wrote SSL/TLS Deployment Best Practices, which contains 22 recommendations in 5 categories.
I encourage everyone to read through it, and follow it. Once you know what to do, it's easy. Part 2, dealing with advanced topics, is coming in October.
> There's a second corollary to this: attackers can set your HTTPS cookies too.
If your app uses session ID cookies, then another implication of this is that attackers can set a user's session ID to a value they know, wait for the user to log in, and then use the session ID to hijack the logged-in session. To prevent this make sure you regenerate session IDs when logging a user in. (This isn't the only reason to regenerate session IDs on log in but it's a very compelling one.)
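The exact call is framework-specific, but the shape is always the same; a rough Python sketch with a hypothetical in-memory session store (not any particular framework's API):

```python
import secrets

sessions = {}  # session_id -> username; stand-in for a real session store


def login(presented_session_id, user):
    """Authenticate `user` and issue a brand-new session ID.

    The ID the client showed up with is discarded: an attacker may have
    planted it (e.g. via a cookie set over plain HTTP), so it must never
    become the ID of a logged-in session.
    """
    sessions.pop(presented_session_id, None)  # drop any planted session
    new_id = secrets.token_urlsafe(32)        # fresh, unguessable ID
    sessions[new_id] = user
    return new_id  # to be sent back in a Secure, HttpOnly cookie
```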
The author seems to gloss over the importance of browser built-in HSTS lists. If you're just relying on a response header to tell the browser to use HTTPS, aren't you still vulnerable? Isn't that the same fundamental problem with redirecting to HTTPS via Location headers?
In other words, a MITM could downgrade any HTTPS traffic and simply remove that STS header. The browser would be none the wiser.
Please, for the love of god, if you work at Google and are reading this: add a deeply buried option to FORCIBLY enable that button in all situations where it might appear. We sometimes have certificate issues with our proxy server at my workplace and it makes Chrome practically unusable when they happen.
I know what I'm doing. I'll reset the option when the underlying issue is resolved, and overall it's a great feature for the browser, but I need to have the ability to be responsible for myself.
It's very straightforward for a proxy to have its own CA:TRUE certificate and mint/sign certs on the fly for every HTTPS site the proxy sees. If you have a corporate proxy that is intercepting HTTPS traffic, that is what it should be doing.
Then, the proxy makes its certificate available to users, you download it, and add it to your CA certs via the UI that browsers provide for that; HTTPS magically appears to work again.
Honest question, for those who have done it: what are the downsides of allowing your whole site to be accessed via SSL?
Obviously, you need to be a bit more diligent about making asset urls protocol-relative (which can be a PITA across a large, dynamically generated site), but are there any other gotchas? Server load? Reduced cache-ability?
You can have good cacheability—you just need to send explicit Cache-Control headers (which is a good idea anyway).
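For example, on static assets (the max-age is illustrative; `public` matters because some older browsers would otherwise refuse to write HTTPS-delivered resources to the disk cache):

```http
Cache-Control: public, max-age=86400
```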
If you don't do SSL properly (e.g. non-SSL-terminating load-balancer can break SSL session resuming by forwarding requests to different servers which don't share tickets) then you'll have lower front-end performance.
webpagetest.org nicely shows connections including time spent on SSL negotiation, so you can use it to check your SSL overhead.
In my case, I work for a SaaS provider that performs virtual hosting using customer-provided SSL certificates (myservice.customer.com). This puts us in the unenviable position of having to maintain thousands of IP address endpoints, one per customer, along with all the network-related complexity that goes along with it.
SNI would help a lot, but unfortunately it will never be a feature in the SSL client code in Windows XP (which MSIE uses) and so we're stuck with this for the foreseeable future.
It mentions the free StartSSL certificates, as does their page. But what isn't clear is whether the certificates are free to renew after a year (i.e. whether this is just a teaser).
I currently use a self signed cert and certificate patrol, but apps (in particular Thunderbird) are becoming increasingly hostile to that.
HTTPS Everywhere is a Firefox/Chrome browser plugin that ensures connections that can be HTTPS are HTTPS. It also does a good job of preventing SSL stripping.
I'm wondering if there could be an equivalent DNS entry that might help signal that a site should only be accessed via SSL. Then you could possibly protect first visits as well as returning users.
We can't do a blocking DNS lookup other than for A/AAAA records. About 5% of Chrome users cannot resolve TXT records because the network is filtering the DNS requests. (i.e. we know that the network is up and we're asking about a DNS name that we know exists, but we get a timeout.)
DNSSEC could allow this to work, if the connection between the client and a DNSSEC-enabled recursive resolver were secure. But if you're on the LAN of the client (for example, a wireless network) you can spoof every DNS response and the client is boned.
What I take away from this, besides the things I (and most others) already knew, is:
* Chrome wants to FORCE you to buy an SSL certificate.
* The guy suggests getting one from StartSSL, BUT those are crap for two reasons: you can only have ONE domain unless you pay, and the TOS are horrible.
So, dear imperialviolet, if you want me to use certificates that your company trusts (and, by extension, your users trust), step up and make Google provide free, unlimited SSL certificates.
> you can only have ONE domain, else you have to pay
It's one name per certificate (well, two: yourdomain.com and whatever.yourdomain.com) but you can order multiple certificates for multiple subdomains in the same or different domains at no charge.
If you can't afford the $43/year for a Thawte starter cert, you have no business running a domain of your own. Seriously, less than $4 a month - that's going to be dwarfed by any sort of hosting you might be paying for.
And it's only one domain per cert, so your entire argument is silly.
Somewhat related question: It's fairly common for sites to have static files (images/css) served on a different (sub)domain. What are you supposed to do when the html content is being served on HTTPS? Should the static files be on HTTPS as well? If so, wouldn't it need a different certificate? Certificates are only valid for a single domain, after all.
If you don't want to pay a premium for a wildcard certificate, you could just get another certificate for the static subdomain. You can get an unlimited number of free, single subdomain certificates from StartSSL.
honest question: why can't banks, when customers open a new account, give them a card with 1. the bank's ip addresses (in each region) and 2. their printed public key (ssl or ssh format)? and why doesn't the bank ask for a public key from each customer? in-person key exchange.
no one even pays attention to the client side of ssl. how many of you use your own ssl certificates? you basically can't under the cert authority scheme. it's a racket and no one is going to pay for these. and do the banks even care? they use tactics like cookies and follow-up emails to verify customers (hardware).
and why does the bank have to be able to switch their ip address without telling anyone? what if the same was true for phone numbers? people would be like wtf? load balancing? c'mon. too difficult to type? think about the trade-offs in security, all for the sake of not looking at a number? ipv4 is no longer than an area code and phone number. just tell people where your servers are and let them choose the one that is nearest. which incidentally, contrary to conventional wisdom, is not _always_ the one that will be the most responsive in the ever-changing state of the network.
there's nothing more annoying than being subjected to using trial and error and you are not allowed to do any of the trial when the errors start coming. out of your control.
what happened to the concept of "important numbers"? are we to believe you only need to remember "google.com" or "yourbank.com"? that's a security problem waiting to happen.
second honest question: why does a bank website need to embed links to third party resources and require that customers enable their browsers to access all these indiscriminately (the user doesn't get to choose) and to enable javascript?
is javascript needed for the security of a connection or to accomplish a financial transaction? because that's all i need from the bank website.
i think we're past the point where customers need to be enticed to use the web to do things like banking and shopping. they're going to be forced to. so we can forgo the silly demonstrations and gratuitous use of javascript. save for "show HN".
what we need is simplicity, reliability and security.
Does anyone have any recommendations for search terms I could use to put together a list of news stories / posts about known man-in-the-middle attacks that have occurred?
If includeSubDomains is set for HSTS, does that mean that a cert for https://foo.com/ is required instead of https://www.foo.com/ in order to protect cookies set for foo.com and under?
It's not clear to me from what docs that I have been able to find.
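For reference, the header in question looks like this (the max-age value is illustrative):

```http
Strict-Transport-Security: max-age=31536000; includeSubDomains
```

As I read RFC 6797, the directive extends the policy of whichever host served the header to all of that host's subdomains, so a header served from www.foo.com only covers www.foo.com and below; covering foo.com and under means serving it (with a valid cert) from foo.com itself.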
tptacek | 13 years ago
NateLawson | 13 years ago
http://tack.io/
[Disclosure: one of the authors of TACK is a former co-worker.]
moonboots | 13 years ago
[1] http://caniuse.com/#feat=iframe-sandbox
rabidsnail | 13 years ago
philjohn | 13 years ago
//www.example.com/path/to/asset.js
this will then use the same transport as the containing page uses.
mike-cardwell | 13 years ago
Less than a year ago, you were saying HSTS wasn't worth the trouble. Ref: https://news.ycombinator.com/item?id=2909613
Glad you've changed your mind.
ivanr | 13 years ago
https://www.ssllabs.com/projects/best-practices/
getsat | 13 years ago
agwa | 13 years ago
jenseng | 13 years ago
DannoHung | 13 years ago
tptacek | 13 years ago
agl | 13 years ago
You can disable all certificate checking with --ignore-certificate-errors but it is as bad as it sounds.
Rather, to correctly support MITM proxies you should install their CA certificate locally.
flatline3 | 13 years ago
> I know what I'm doing.
This is the problem.
> ... and it makes Chrome practically unusable when they happen
This is not the problem.
Flimm | 13 years ago
harshreality | 13 years ago
chrome://net-internals/#hsts
Are you looking for something else?
sp332 | 13 years ago
isaacaggrey | 13 years ago
Also, by using DuckDuckGo [1] over HTTPS you get the same ruleset as HTTPS Everywhere [2], even if you don't have the extension installed.
[1] https://duckduckgo.com/
[2] http://www.gabrielweinberg.com/blog/2010/09/duckduckgo-imple...
timr | 13 years ago
pornel | 13 years ago
otterley | 13 years ago
emmelaich | 13 years ago
(http://paulirish.com/2010/the-protocol-relative-url/)
Because they wouldn't be so PITAish, would they?
ceph_ | 13 years ago
rogerbinns | 13 years ago
spindritf | 13 years ago
Yes, they are. StartSSL will even send you a reminder e-mail.
jastr | 13 years ago
https://www.eff.org/https-everywhere/
rdl | 13 years ago
(and a great advertisement for using Chrome in secure settings where you need a web browser)
The irony of Google being one of the main http-only JS resources for a long time was kind of amusing, though.
clarkevans | 13 years ago
agl | 13 years ago
peterwwillis | 13 years ago
zobzu | 13 years ago
Til then, no dice.
spindritf | 13 years ago
jaggederest | 13 years ago
dinkumthinkum | 13 years ago
willfarrell | 13 years ago
kmfrk | 13 years ago
yorhel | 13 years ago
moonboots | 13 years ago
pwaring | 13 years ago
Actually that's not the case; you can get single certificates which cover several different domains, using the Subject Alternative Name field.
rmc | 13 years ago
justhw | 13 years ago
'//www.your-cdn.com/image.jpg'
In other words don't specify http or https in the url, just do '//your-url.com/new.js'
adgar | 13 years ago
honestq | 13 years ago
jerhewet | 13 years ago
newman314 | 13 years ago
jenrzzz | 13 years ago
kodisha | 13 years ago
Does it work properly?
If I have www.mydomain.com with certificate A, and api.mydomain.com with certificate B, can I make CORS calls with JavaScript?
(I know that if you try it with a self-signed cert, it will just drop the request.)
samt | 13 years ago