
HTTPS hurts users far away from the server

121 points | antoinefink | 9 years ago | antoine.finkelstein.fr

125 comments

[+] jacquesm|9 years ago|reply
The biggest impact you can have on your users' experience is to trim down the number of connections and the size of your pages. Only long after that should you start worrying about round-trip times to the server.

This blog post is a nice example: 30 requests (uBlock Origin blocked another 12; with those enabled the load time increases to a whopping 28 seconds), 2.5M transferred, 7 seconds load time. And all that for a 4K payload plus some images.
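As a back-of-envelope sketch (the link speed and RTT below are assumptions for illustration, not figures from the thread; only the request count and transfer size come from the measurement above), page weight tends to dwarf per-connection round trips:

```shell
# Compare the cost of page weight against the cost of per-connection
# round trips. The link speed and RTT are assumed, not measured.
PAGE_KB=2500        # ~2.5M transferred, as measured above
LINK_KBPS=250       # ~2 Mbit/s mobile link (assumption)
RTT_MS=100          # assumed round-trip time
CONNECTIONS=30      # 30 requests; worst case, each on a fresh connection
echo "transfer time:      ~$((PAGE_KB / LINK_KBPS)) s"
echo "handshake overhead: ~$((CONNECTIONS * RTT_MS)) ms"
```

Under these assumptions the raw transfer takes seconds while even worst-case handshake overhead stays in the low thousands of milliseconds, which is the commenter's point about page size coming first.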

[+] lclarkmichalek|9 years ago|reply
The number of connections isn't that relevant with HTTP/2, which multiplexes requests over a single connection.
[+] alvil|9 years ago|reply
There is also another problem: how much and how often Googlebot indexes your site, because site speed is one of the factors in the so-called Google crawl budget. My users are in Germany, so my VPS is also in Germany to be fast for local users (~130ms per HTTP reply), but for the US Googlebot my site is slow (~420ms per HTTP reply). So you are penalized for this as well.
[+] KabuseCha|9 years ago|reply
Hi - I know some tools report a slow site in this case, but these reports are not accurate - don't believe them! Google is not that stupid! :D

I currently work as a dev at an SEO agency (in Austria), and we never believed this hypothesis - so we tested it once with a bunch of our sites:

When moving sites with a German-speaking audience to a VPS in America, your rankings on google.de/google.at will decrease (slightly - the effect is not that big); the other way around, your rankings will improve (slightly).

However - even if your rankings improved by moving to America, I would recommend keeping your sites hosted in Europe: the increase in rankings will not offset the decrease in user satisfaction, and therefore the decline in your conversion rates.

[+] kijin|9 years ago|reply
Isn't this somewhat compensated for by the extra credit Google gives you for using SSL?

I would also assume that Google is smart enough to take the physical location of your server into account when calculating how much penalty to apply in which searches. Sites that load fast in Germany should have higher ranks in searches from Germany.

[+] zhte415|9 years ago|reply
Your speeds seem very slow.

Is this the initial handshake which understandably introduces latency?

After that, times should be similar. What could be killing users far away is requiring multiple handshakes, because multiple resources that each need their own handshake are being introduced at the same time.

For reference, I'm physically located in China, so requests have to go through a bunch of filtering-oriented routers. I get 150-180ms from the US, 200ms from Japan, 180ms from Singapore (yay geography), and around 200-250ms from Europe - these are SSL requests, and not from a connection hub like Shanghai or Shenzhen close to domestic exit points. Double to triple these times for the first handshake.
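The "double to triple" multiplier above can be sketched as simple arithmetic (assuming a TCP handshake plus a full TLS 1.2 handshake; TLS 1.3, session resumption, or False Start would remove a round trip):

```shell
# Round trips before the first HTTP byte arrives on a cold connection:
# 1 RTT for the TCP handshake, 2 RTTs for a full TLS 1.2 handshake,
# and 1 RTT for the HTTP request/response itself.
RTT_MS=200                      # e.g. China -> Europe, per the figures above
ROUND_TRIPS=4                   # 1 TCP + 2 TLS + 1 HTTP
echo "first byte after roughly $((RTT_MS * ROUND_TRIPS)) ms"
```

That is why the first request can cost several multiples of the raw ping time, while subsequent requests on a kept-alive connection pay only the single HTTP round trip.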

[+] jlebrech|9 years ago|reply
Couldn't you make the indexable portion of your site fast and static, and route users to a relevant local server once they log in?
[+] mfontani|9 years ago|reply
Yup, and that's why for thereg we started using Cloudflare's Railgun… with it, the connection to the servers (hosted in the UK) is "bearable"… without it, it's abysmal:

From a VPS in Sydney, with a Good Enough bandwidth:

    root@sydney:~# speedtest-cli 2>&1 | grep -e Download: -e Upload:
    Download: 721.20 Mbits/s
    Upload: 117.89 Mbits/s
… doing the request through Railgun is "quite bearable":

    root@sydney:~# ./rg-diag -json https://www.theregister.co.uk/ | grep -e elapsed_time -e cloudflare_time -e origin_response_time
    "elapsed_time": "0.539365s",
    "origin_response_time": "0.045138s",
    "cloudflare_time": "0.494227s",
Despite our "origin" server being quick enough, the main chunk of time is really "bytes having to travel half the world".

Why does Railgun help? Because this is what a user would get otherwise; the "whitepapers" site is hosted in the UK and doesn't use Cloudflare or Railgun – it only uses Cloudflare for DNS:

    ./rg-diag -json http://whitepapers.theregister.co.uk/ | grep elapsed_time
    "elapsed_time": "0.706277s",
… so that's ~200ms more, and _on http_.

How much would HTTPS add if it were done without Cloudflare's HTTPS and Railgun? That's easy to check, as the whitepapers site has TLS (although admittedly not HTTP/2):

    root@sydney:~# ./rg-diag -json https://whitepapers.theregister.co.uk/ | grep elapsed_time
    "elapsed_time": "1.559860s",
That's quite a huge chunk of time that Cloudflare HTTPS + Railgun shaves off for us. Highly recommended!
[+] pbarnes_1|9 years ago|reply
Did you try CloudFlare without Railgun?

That would be interesting.

[+] hannob|9 years ago|reply
There are a couple of other things you can do with existing TLS technology to improve your latency: use OCSP stapling, use modern crypto so browsers may use TLS False Start, and avoid too many ciphers or unnecessary certs in the chain to make the handshake smaller.

It's a bit older, but here's some info, much of it is still valid: https://istlsfastyet.com/
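As one concrete example of the stapling suggestion, in nginx it is a small config change (a sketch under assumed defaults, not anyone's actual setup in this thread; the resolver address is an arbitrary public DNS server):

```nginx
# Staple the CA's OCSP response to the TLS handshake so clients
# don't have to make their own round trip to the CA.
ssl_stapling on;
ssl_stapling_verify on;
# Resolver nginx uses to fetch the OCSP response from the CA (assumption)
resolver 1.1.1.1;
```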

[+] citrin_ru|9 years ago|reply
It is questionable whether OCSP stapling reduces TLS handshake time.

Without stapling, the browser makes a slow request to the CA, but it caches the result for a long time, so the slow request doesn't happen often.

With OCSP stapling enabled, more data is transferred between client and server on each TLS handshake.

The main proponents of OCSP stapling are CAs, because it saves them bandwidth/hardware.

[+] hedora|9 years ago|reply
Presumably, Cloudflare is up to its ears in NSLs, illegal wiretaps, etc. If you care at all about mass surveillance, censorship, or oppressive governments (in the US, or wherever the Cloudflare proxy is located), you should probably look elsewhere.

It's probably controversial, but I'd love to see a yellow security icon in browsers when sites are using well-known HTTPS relays that can see plaintext (or are doing other obviously bad things, like running software with known zero-day exploits, etc.).

[+] beardog|9 years ago|reply
I've seen this argument made a lot lately, and I agree Cloudflare is bad for user privacy. However, adding this warning to browsers by default wouldn't make a lot of sense. Here's why:

Most websites are on virtual servers (hardware in general) that are not owned by them. For example, Amazon could easily let the NSA look into your AWS server directly. IMO, the URL lock should just be an encryption auditor: the end website is using acceptable algorithms and has a currently valid certificate? That's good enough.

Almost any HTTPS site can be forged/"broken" (unless it's using preloaded HPKP) if the attacker has root certificates (or even just a bug in a CA website), which the NSA certainly does.

Nation-state adversaries just aren't really within the typical TLS threat model. I do concede that it makes agencies' jobs much harder if used correctly, however.

[+] jacquesm|9 years ago|reply
A mini audit along the lines of 'builtwith'.

Hm. Good idea - why not go a step further and turn the 'no server signatures' advice on its head: full disclosure, server signatures on; in fact, list each and every component in the stack so that end users can (through some plug-in) determine whether or not a site is safe to use.

Of course nothing bad could ever come from that. /s

I'm all for making the use of, for instance, Cloudflare more transparent so that users know who they are really talking to, but I'm confused about how you'd establish what a site is running without handing a potential attacker a lot of valuable information.

[+] throwawaysed|9 years ago|reply
Cloudflare is unquestionably a source of pure, unencrypted traffic for the govt.

Does anyone remember a few years ago when Google found out through leaks that the govt was wiretapping its private traffic between datacentres?

What makes you so naive as to think that the govt isn't sniffing every single page on Cloudflare?

[+] manigandham|9 years ago|reply
The entire internet is built upon thousands of layers. There are so many vectors of entry that no "default warning" would ever suffice.

If your risk profile is outside the boundaries of normal internet use then you likely already know what to do - and we now have a multitude of tools for more private communications.

[+] apeace|9 years ago|reply
> Presumably, cloudflare is up to its ears in NSL's, illegal wiretaps, etc. If you care at all about mass surveillance, censorship, oppressive governments ... you probably should look elsewhere.

This analysis seems flawed. If you care about mass surveillance, you want their top-tier security and legal teams working for you.

[+] sp332|9 years ago|reply
If you're worried about a proprietary solution, you could host your own cache server in Australia or wherever your customers are having trouble.
[+] problems|9 years ago|reply
Yeah, at a $200/mo cost, you could spin up a few VMs on DigitalOcean, Vultr or LightSail which have decent bandwidth and cache from there.

The nice part about Cloudflare, though, is that they can use anycast to determine location and then hand out the closest server IPs. For sub-$200/mo you're not able to do that yourself; you'd have to find a provider that could do it for you, and I'm not sure anyone offers country-based anycast DNS on its own.

EDIT: Looks like easyDNS enterprise may be able to do it, https://fusion.easydns.com/Knowledgebase/Article/View/214/7/... for about $12.75/mo too. Might be a decent way to brew your own mini caching CDN for fairly cheap.

[+] Kiro|9 years ago|reply
I don't understand why I need to use HTTPS on a static marketing webpage. No login stuff, no JavaScript, nothing - just straight-up HTML and CSS. Right now I need to pay about $150 every year for something that's only used to satisfy Google's search ranking (I can't use Let's Encrypt with my hosting provider). Why?
[+] eganist|9 years ago|reply
Keeping it extremely high level:

Among other reasons: not encrypting traffic gives bad actors an opportunity to replace content in transit to your end users whenever those users are on compromised connections - rogue "free" wifi networks in airports or coffee shops, or even legitimate networks that have been compromised in some way, e.g. the ISPs of the world who decide to inject their own ads into unencrypted traffic.

The next question is usually "what could they possibly do, change a few pictures?"

They could inject malicious payloads, and for all your users would know, it would appear to them that it came from your site.

> I can't use LetsEncrypt with my hosting provider

Consider switching. For a static site, consider GitLab Pages; they do a good job of supporting Let's Encrypt.

---

I sincerely appreciate the question, though. I have marketing people ask me this all the time in private; they hesitate to ask in public because quite a few security types berate them for not doing something "obviously" more secure. It's not at all obvious to most of the world's web designers and content creators that a static site should be TLS'd until it's framed (heh) in this manner. The fact that you asked brings about a massive educational moment.

Anyway, consider switching hosts. :)

[+] riobard|9 years ago|reply
Here's why: Many ISPs hijack HTTP connections and inject ads and tracking JS into the page. If you don't use HTTPS, your page is screwed.

The Internet is not a safe place. We should aim for HTTPS EVERYWHERE.

[+] Nullabillity|9 years ago|reply
Yeah, why are you using a bad hosting provider?
[+] mrkurt|9 years ago|reply
Most of the answers you're getting aren't that big a deal for your site. You still might want HTTPS, though.

You should think about HTTPS for sites like yours the way you think about vaccines: SSL everywhere makes everyone safer, even though it doesn't have a tremendous impact on your own site.

Also, shameless plug, if you want really easy SSL you can use our new startup: https://fly.io. I'm not sure what country you're in, but we have a bunch of servers all over to help make it fast. :)

[+] Gurrewe|9 years ago|reply
If you have a marketing webpage, you might have links to signup or login pages. If an attacker can hijack the index page, they can also hijack those links.
[+] rocqua|9 years ago|reply
Two reasons. The first is practical: integrity. HTTPS guarantees that the site your visitors see is the site you sent them.

The second is more moral. Making https the default means more and more of the web will be encrypted and authenticated. This is a good thing.

[+] dalore|9 years ago|reply
Why use ssh over telnet?
[+] c0nfused|9 years ago|reply
It seems to me worth considering that HTTPS is not always a panacea. We should think about two things.

First that almost every firewall out there right now supports https snooping via MITM. Example: https://www.paloaltonetworks.com/features/decryption

Second, I just got back from rural China, where most unblocked American webpages take between 5-15 seconds to load on my mobile phone, and many of them take upwards of a minute to load fully. This seems to be a fun combo of network latency, smaller-than-expected bandwidth, and pages using JavaScript with a series of different load events to display content. That DOMContentLoaded -> XMLHttpRequest -> onreadystatechange chain can add some serious time on a 500ms round trip, and that's without talking about the CSS, the images, and the JavaScript.

I forgot to pay my electric bill before I flew out, and it took me nearly an hour to log in, pay my bill, accept the terms, and confirm payment. I was not a happy camper.

It seems to me that while HTTPS is a very good thing, in some cases HTTP and low-bandwidth solutions might be worth implementing, tailored to your audience: no one in their right mind is going to waste 5 minutes loading your web page, and if they are so desperate that they have to wait, they will hate you every minute they do.

[+] rocqua|9 years ago|reply
> First that almost every firewall out there right now supports https snooping via MITM. Example: https://www.paloaltonetworks.com/features/decryption

Seems prudent to mention that this requires the cooperation of the client being MitMed. Specifically, the client needs to install a root certificate.

[+] magicalist|9 years ago|reply
> I forgot to pay me electric bill before I flew out and it took me nearly an hour to login, push pay my bill, accept the terms, and confirm payment. I was not a happy camper.

That sucks, but I don't see how a site where you may have to enter payment information over an unsecured connection would be a solution.

[+] jacquesm|9 years ago|reply
> This seems to be a fun combo of network latency, smaller than expected bandwidth, and pages using javascript with a series of different load events to display content.

You forgot about the great firewall of China playing merry MITM with your connections.

[+] chatmasta|9 years ago|reply
Is there an easy way to pipeline those requests over one TCP connection? Or is that only possible with http/2?

I wonder if it would be lower latency to open a single websocket tunnel on page load and download assets over the tunnel. Although at that point I suppose you're just replicating the functionality of http/2.

[+] SEMW|9 years ago|reply
Funny coincidence - I ran into this exact issue earlier today. A customer complained about high response times from even our /time endpoint (which doesn't do anything except return the server time) as measured by curl, and it turns out it was just the TLS handshake:

    $ curl -o /dev/null -s -w "@time-format.txt" http://rest.ably.io/time
    time_namelookup:  0.012
       time_connect:  0.031
    time_appconnect:  0.000
    time_pretransfer: 0.031
         time_total:  0.053

    $ curl -o /dev/null -s -w "@time-format.txt" https://rest.ably.io/time
    time_namelookup:  0.012
       time_connect:  0.031
    time_appconnect:  0.216
    time_pretransfer: 0.216
         time_total:  0.237
(as measured from my home computer, in the UK, so connecting to the aws eu-west region)
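For reference, the `time-format.txt` passed to `-w` above would be a curl write-out template along these lines (the variable names are standard curl `--write-out` variables; the exact file used isn't shown in the comment, so this is a reconstruction):

```
time_namelookup:  %{time_namelookup}\n
   time_connect:  %{time_connect}\n
time_appconnect:  %{time_appconnect}\n
time_pretransfer: %{time_pretransfer}\n
     time_total:  %{time_total}\n
```

`time_appconnect` is the time until the TLS handshake completes, which is why it is 0.000 for the plain-HTTP request and ~0.2s for the HTTPS one.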

Luckily it's not much of an issue for us, since with an actual client library (unlike with curl) you get HTTP keep-alive, so at least the TCP connection doesn't need to be re-established for every request. And most customers who care about low latency are using a realtime library anyway, which keeps a websocket open and sidesteps the whole issue. Certainly not enough to make us reconsider using TLS by default.

Still, it's a bit annoying when someone thinks they've discovered with curl that latency from them to us is 4x worse than to PubNub, just because the PubNub docs show the HTTP versions of their endpoints whereas ours show HTTPS, even though we're basically both using the same set of AWS regions...

[+] andreareina|9 years ago|reply
One round trip over the course of the time that the user is using the same OS/browser installation isn't much.

The Cloudflare Railgun is an interesting solution, and one that could be implemented in the context of an SPA over a websockets connection. Or conceivably some other consumer of an API.

[+] nprescott|9 years ago|reply
I really enjoyed the coverage of the same topic in High Performance Browser Networking[0]. It effectively explains the key performance influencers across various networks without being boring.

[0]: https://hpbn.co/

[+] aanm1988|9 years ago|reply
> In our case at Hunter, users were waiting on average 270ms for the handshake to be finished. Considering requests are handled in about 60ms on average, this was clearly too much.

Why? Did it hurt user engagement? Were people complaining the site was slow?

[+] kuschku|9 years ago|reply
The question is: "is this the best we can do?"

If the answer is no, then clearly we should improve.

[+] chatmasta|9 years ago|reply
What's wrong with the cloudflare free plan? You can host a static site on github pages with a custom domain and use the free cloudflare SSL cert.
[+] amiraliakbari|9 years ago|reply
The "Railgun" feature mentioned in the article is only available on some paid plans. Using the free plan wouldn't keep an open connection between your servers and Cloudflare's. It does improve the situation by terminating users' handshakes early, using better links, warm DNS caches, etc. among its servers. But the latency hard limit is still present between your server and CF. Skipping HTTPS between your server and CF is not an option either, for any site transferring user data.