Facebook eventually addressed the issue by making the site accessible over HTTPS—though, as the authors of the 2008 paper note, HTTPS can be a "rigid and costly" solution.
This same excuse has existed for about as long as HTTPS, which dates to Netscape Navigator 1. Is it still that "rigid and costly"? Is there a technical reason that this is an unsolvable problem?
Considering the increase in computer and network speed over the last decade and a half, it seems strange that this would still be the case. Perhaps it's just that without pressure from competitors there is no pressure on the sites to solve it?
I don't know where the authors got "rigid and costly", but I think it's just BS... 2008 was a long time ago though, so maybe it was slightly more challenging then.
When Google went over to SSL for Gmail, they said
"On our production frontend machines, SSL/TLS accounts for less than 1% of the CPU load, less than 10KB of memory per connection and less than 2% of network overhead. Many people believe that SSL takes a lot of CPU time and we hope the above numbers (public for the first time) will help to dispel that."
http://techie-buzz.com/tech-news/google-switch-ssl-cost.html
That, coupled with the general availability of cheap and free CA signed certificates makes the claim pretty baseless.
1024-bit certificates are only valid until December this year. Then we have to use 2048-bit keys, wiping out the benefit we get from faster computers.
You need SSL accelerators or pretty fast hardware to handle more than a few thousand SSL handshakes per second on a single machine. This is one of the places cost comes from.
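The expensive step in each handshake is the server's private-key operation, which is exactly what SSL accelerators offload. As a rough sketch of that cost, here is a loop of RSA-2048 private-key signatures using Node's built-in crypto module (the key size and iteration count are arbitrary illustrations, not figures from this thread):

```typescript
import { generateKeyPairSync, sign } from "node:crypto";

// Throwaway RSA-2048 key, the size a typical server certificate uses.
const { privateKey } = generateKeyPairSync("rsa", { modulusLength: 2048 });

// Each full TLS handshake costs the server roughly one private-key
// operation like this; a burst of new connections means a burst of these.
const n = 200; // hypothetical burst of new connections
const start = Date.now();
for (let i = 0; i < n; i++) {
  sign("sha256", Buffer.from("handshake transcript"), privateKey);
}
const elapsedMs = Date.now() - start;
console.log(`${n} RSA-2048 private-key ops took ${elapsedMs} ms`);
```

Dividing `n` by the elapsed time gives a crude upper bound on handshakes per second per core, which is why the "few thousand per machine" ceiling shows up without dedicated hardware.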
HTTPS is not rigid and costly for a single connection. It's rigid and costly because it prevents simple options for caching, like sharing Google-hosted jQuery. You can still use a CDN, but then you have more complexity in managing your certificates and pay about a third more (CloudFront pricing).
HTTPS is manageable where needed for security. Sleazy ISPs are making it necessary even where security is not a concern -- for example, viewing Apple's homepage.
This thread illustrates exactly the problem with HTTPS.
The proponents of HTTPS everywhere try to sweep the significant downsides under the rug, whilst others are spreading all kinds of unfounded FUD about HTTPS.
The bottom line is that "your mileage may vary". For some applications, SSL is trivial, and there is no excuse not to do it. For other scenarios it's a nightmare with all kinds of undocumented complications.
I'm currently working on providing SSL for a SaaS service with various client domains on AWS (i.e., with a limited number of IP addresses). Doable, but far from trivial or inexpensive.
I have no background in crypto, but the main issues seem to be: (a) SSL virtually requires buying certificates, so small/hobby sites won't use it; (b) it involves multiple round trips, and that latency is never going to go away unless someone invents faster-than-light communication; and (c) part of what makes encryption work is that it's computationally hard on today's hardware, and when it's not, we move to a different algorithm that is, so the computational cost of using encryption versus not using it will always be there.
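Point (b) can be put in numbers. A back-of-the-envelope sketch, assuming a TLS 1.2 full handshake (two extra round trips) on top of TCP's one; the RTT figure is hypothetical:

```typescript
// Extra round trips, not CPU, are the unavoidable latency cost of HTTPS.
const rttMs = 100;                               // one network round trip (hypothetical)
const tcpRtts = 1;                               // TCP SYN / SYN-ACK
const tlsRtts = 2;                               // TLS 1.2 full handshake
const plainHttpMs = (tcpRtts + 1) * rttMs;       // connect + request/response
const httpsMs = (tcpRtts + tlsRtts + 1) * rttMs; // connect + handshake + request/response
console.log(plainHttpMs, httpsMs); // 200 400
```

On this toy model, HTTPS doubles time-to-first-byte on a cold connection, which is why session resumption and keep-alive matter so much in practice.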
I heard on the latest This Week In Security that Comcast was apparently not just injecting JS, but injecting bad JS: the injected code didn't use closures, so its names could collide with those used by the actual sites users were visiting.
For this reason, I can see HTTPS becoming standard, even for public, non-logged in users. I'm in the process of updating my site to be all-HTTPS and recently got confirmation (as much as one could ever expect) from Google there's no SEO penalty (http://goo.gl/sbtxq).
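A toy illustration of the collision problem described above (the variable names are hypothetical): an injected script that writes globals silently clobbers the page's own state, while the same code wrapped in a closure touches nothing outside itself.

```typescript
// The page's own global state:
(globalThis as any).config = { theme: "dark" };

// Careless injected script: reuses the same global name and
// silently clobbers the page's `config`.
(globalThis as any).config = { adUnit: "banner-1" };

// Careful injected script: an IIFE keeps its names private,
// so nothing outside the closure is affected.
(() => {
  const config = { adUnit: "banner-2" }; // local to this closure only
  void config;
})();

console.log((globalThis as any).config); // the site's `theme` is gone
```

The IIFE pattern costs nothing, which is part of what made the reported injections so careless.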
Thinking about ad injection, it is actually quite scary what an ISP can do. Not only is it easy to display ads (or possibly even malware), but even worse, my ISP is installed as a default CA by Firefox, so they could even inject into SSL connections, with the only "warning" being that the certificate was signed by the ISP...
My problem with HTTPS is Google: they push it on every front, but - for various reasons - consider HTTPS and HTTP different pages, meaning you do not get link juice from any HTTP links if your site is HTTPS-only.
(1. Don't listen to people telling you otherwise; it's an expensive experiment. 2. Redirects do not transfer all the juice; they count as links themselves, and in my experience it's like not having external links to your site at all. 3. If you do not depend on Google because you're SaaS, go HTTPS-only.)
I remember reading about someone with neighbours that were stealing their wifi. You can do much more interesting things with a proxy server than just inject ads: http://www.ex-parrot.com/pete/upside-down-ternet.html
Is Ars just ripping off other sites now? Although since changed, https://news.ycombinator.com/item?id=5505890 pointed to an Ars article from yesterday (one that reached the front page) that was basically a copy-paste of a Stack Exchange question.
This is probably the best rebuttal to the strongest argument leveled against the people promoting HTTPS-everywhere. People like to say that HTTPS everywhere would break transparent caching by ISPs. After all, HTTP is designed to allow caching proxies to sit inline while still supporting dynamic content gracefully (er, somewhat, anyway).
But in fact the same features that make transparent caching easy make this kind of shenanigans easy. There are tons of companies in this space now. Not just people like NebuAd and R66T, but lots of "subscriber messaging systems" like FrontPorch (which I've heard sells messaging data for behavioral advertising) and PerfTech (which has assured me that they do no such thing).
This should be an easy way to push back one of the last "real" arguments against using HTTPS everywhere. There's no excuse not to be running your site on HTTPS all the time - it protects you and your users from all sorts of mischief for a minimal overhead.
It's getting to the stage now where I think domains should be sold with an SSL certificate as standard (minimal vetting, no warranty) - just enough to provide encryption, rather than treating it as an optional extra.
Holy cow. It seems this has had a direct effect: they're no longer injecting javascript into webpages. I just tried Amazon, eBay, and a few others where the script injection used to be present, and it's no longer there.
I absolutely cannot overstate just how happy I am about this.
HTTPS will not prevent this: the ISP can get its own CA certificate installed on users' machines and then decrypt/re-encrypt HTTPS as it passes through them. (Many corporations already do this.) What will prevent this is legislation and/or competition.
It amazes me that US Internet access has very little of either. All the drawbacks of monopoly Internet, with all the drawbacks of unregulated Internet.
I believe ISPs cannot run a transparent HTTPS proxy without triggering the "invalid certificate" browser warning.
ISP users would have to manually trust their ISP's CA.
Wouldn't you know it, the CMA Communications (the ISP mentioned in the post) website is not accessible via HTTPS. "View My Bill" and similar link you off to a third party domain.
Yet another reason blindly running JavaScript from unknown parties is a bad idea. Whitelisting the scripts I want, with progressive enhancement for the rest, should always have been the default.
http://www.bbc.co.uk/news/technology-13015194
There is a good chance that such practices would be found illegal in a UK court (under the Regulation of Investigatory Powers Act primarily, with some discussion about applying the Data Protection Act or Computer Misuse Act), but they haven't been tested yet. Both companies very quickly stepped back from the 'trial' they were conducting when it became clear there might be public support for a test case.
An interesting article, but overly verbose. It could have said the same thing in half the space.
However, a big FU to Ars Technica for prostituting the name of Apple to get more visits to the article. The headline did not need to imply that Apple.com was hacked or that Apple was somehow unaware of what's happening at their site.
It's sleazy journalism and beneath the usual ethics of Ars Technica.
As exciting as this is to be posted on a high-volume website, I honestly doubt CMA is going to change their practices on this issue.
If anything, the Acceptable Use Policy change on the 4th was a sign that they'd be reluctant to change their stance on this issue at all. They honestly don't care.
They might change their tune if they get a letter from a lawyer or two. I can't imagine Apple, for example, likes the idea of third-party ads overlaid on their site.
I'm terrified to let them tamper with it, but Congress really needs to make laws regulating ISP behavior in the USA. They will never do it on their own.
The problem with such a bill is it will have a dozen riders for very horrible things.
The thought of an ISP having CA certs that are a part of default installs is unnerving.
This is to the internet what global warming is to the earth... well that might be too far, but this is high tech pollution at its worst.
http://www.reddit.com/user/zmhenkel
I don't like "force HTTPS everywhere" but these jerks are forcing it. It sucks, but it sucks less than this.