The biggest impact you can have on your users' experience is to trim down the number of connections and the size of your pages. Long after that you can start worrying about round-trip times to the server.
This blog post is a nice example: 30 requests (uBlock Origin blocked another 12; with those enabled, the time to load increases to a whopping 28 seconds), 2.5MB transferred, 7 seconds load time. And all that for a 4KB payload plus some images.
There is also another problem: how much and how often Googlebot indexes your site, because your site speed is one of the factors in the so-called Google crawl budget. My users are in Germany, so my VPS is also in Germany to be fast for local users (~130ms for an HTTP reply), but for the US-based Googlebot my site is slow (~420ms for an HTTP reply). So you are penalized for this as well.
Hi - I know some tools report a slow site in this case, but these reports are not accurate - don't believe them! Google is not that stupid! :D
I am currently working as a dev at an SEO agency (in Austria), and we never believed this hypothesis - so we tested it once with a bunch of our sites:
When moving sites with a German-speaking audience to a VPS in America, your rankings at google.de/google.at will decrease (slightly - the effect is not that big); the other way around, your rankings will improve (slightly).
However - even if your rankings would improve when moving to America, I would recommend keeping your sites hosted in Europe: the increase in rankings will not offset the decrease in user satisfaction, and therefore the decline in your conversion rates.
Isn't this somewhat compensated for by the extra credit Google gives you for using SSL?
I would also assume that Google is smart enough to take the physical location of your server into account when calculating how much penalty to apply in which searches. Sites that load fast in Germany should have higher ranks in searches from Germany.
Is this the initial handshake which understandably introduces latency?
After that, times should be similar. What could be killing users far away is requiring multiple handshakes because multiple things requiring handshakes are being introduced at the same time.
For reference, I'm physically located in China, so requests have to go through a bunch of filtering-oriented routers, and I get 150-180ms from the US, 200ms from Japan and 180ms from Singapore (yay geography), and around 200-250ms from Europe - these are SSL requests, and not from a connection hub like Shanghai or Shenzhen close to domestic exit points. Double to triple these times for the first handshake.
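The "double to triple for the first handshake" observation is easy to reproduce with a back-of-envelope round-trip count. This is a rough sketch with illustrative round-trip counts of my own, not a measurement; real handshakes vary with TLS version, session resumption, and False Start:

```python
# Back-of-envelope: round trips multiply the base RTT into user-visible
# delay. Typical counts: TCP's three-way handshake costs one RTT before
# the request can go out, a full TLS 1.2 handshake adds two more (one
# with TLS 1.3 or session resumption), and the HTTP request/response
# itself is at least one.
def first_byte_estimate_ms(rtt_ms, tls=True, tls_rtts=2):
    rtts = 1              # TCP three-way handshake
    if tls:
        rtts += tls_rtts  # full TLS 1.2 handshake; use 1 for TLS 1.3/resumption
    rtts += 1             # HTTP request -> first response byte
    return rtts * rtt_ms

# On a 250ms China->Europe link: ~500ms to first byte on plain HTTP,
# ~1000ms with a full TLS handshake -- i.e. "double" the plain-HTTP time.
for rtt in (30, 180, 250):
    print(rtt, first_byte_estimate_ms(rtt, tls=False), first_byte_estimate_ms(rtt))
```

This ignores DNS resolution and server think-time, both of which add on top.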
Yup, and that's why for thereg we started using Cloudflare's Railgun… with it, the connection to the servers (hosted in the UK) is "bearable"… without, it's abysmal:
From a VPS in Sydney, with a Good Enough bandwidth:
Despite our "origin" server being quick enough, the main chunk of time is really "bytes having to travel half the world".
Why does railgun help? Because this is what a user would get otherwise; the "whitepapers" site is hosted in the UK, and doesn't use Cloudflare or Railgun – it only uses Cloudflare for DNS:
How much would https add, if it were done without Cloudflare's https and Railgun? That's easy to check, as our whitepapers site has TLS (although admittedly not http/2):
There are a couple of other things you can do with existing TLS technology to improve your latency: use OCSP stapling, use modern crypto so browsers may use TLS False Start, and avoid too many ciphers or unnecessary certs in the chain to keep the handshake small.
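As a sketch of the "modern crypto, fewer ciphers" part of this advice, here is roughly what a tuned server-side context could look like with Python's ssl module. The exact cipher string is an assumption that would need checking against your real client base, and OCSP stapling itself is configured in the web server (e.g. nginx's ssl_stapling directive), not in this context object:

```python
import ssl

# Server-side TLS context tuned for handshake efficiency: legacy
# protocol versions disabled, and a short cipher list limited to
# forward-secret (ECDHE) AEAD suites -- the kind of configuration
# browsers require before they will attempt TLS False Start.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.minimum_version = ssl.TLSVersion.TLSv1_2    # drop SSLv3/TLS 1.0/1.1
ctx.set_ciphers("ECDHE+AESGCM:ECDHE+CHACHA20")  # forward secrecy, AEAD only

print(ctx.minimum_version)
print(len(ctx.get_ciphers()), "cipher suites offered")
```

A shorter cipher list also shrinks the ClientHello/ServerHello exchange slightly, though the bigger handshake-size win is usually trimming unnecessary certificates from the chain.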
Presumably, Cloudflare is up to its ears in NSLs, illegal wiretaps, etc. If you care at all about mass surveillance, censorship, or oppressive governments (in the US, or the location of the Cloudflare proxy) you probably should look elsewhere.
It's probably controversial, but I'd love to see a yellow security icon in browsers when sites are using well known https relays that can see plaintext (or are doing other obviously bad things, like running software with known zero day exploits, etc)
I've seen this argument made a lot lately, and I agree Cloudflare is bad for user privacy. However, adding this warning to browsers by default wouldn't make a lot of sense. Here's why:
Most websites are on virtual servers (hardware in general) that are not owned by them. For example, Amazon could easily let the NSA look into your AWS server directly. IMO, the URL lock should just be an encryption auditor: the end website is using acceptable algorithms and has a currently valid certificate? That's good enough.
Almost any HTTPS site can be forged/"broken" (unless it's using preloaded HPKP) if the attacker has root certificates (or even just a bug in a CA website), which the NSA certainly does.
Nation-state adversaries just aren't really within the typical TLS threat model. I do concede that it makes agencies' jobs much harder if used correctly, however.
Hm. Good idea - why not go a step further and turn the 'no server signatures' advice on its head: full disclosure, server signatures on; in fact, list each and every component in the stack so that end users can (through some plug-in) determine whether or not a site is safe to use.
Of course nothing bad could ever come from that. /s
I'm all for making the use of, for instance, Cloudflare less transparent so that users know who they are really talking to, but I'm confused about how you'd want to establish what a site is running without giving a potential attacker a lot of valuable information.
The entire internet is built upon thousands of layers. There are so many vectors of entry that no "default warning" would ever suffice.
If your risk profile is outside the boundaries of normal internet use then you likely already know what to do - and we now have a multitude of tools for more private communications.
> Presumably, Cloudflare is up to its ears in NSLs, illegal wiretaps, etc. If you care at all about mass surveillance, censorship, oppressive governments ... you probably should look elsewhere.
This analysis seems flawed. If you care about mass surveillance, you want their top-tier security and legal teams working for you.
Yeah, at a $200/mo cost, you could spin up a few VMs on DigitalOcean, Vultr or LightSail which have decent bandwidth and cache from there.
The nice part about Cloudflare, though, is that they can use anycast to determine location and then send the closest server IPs. For sub-$200/mo you're not able to do that; you'd have to find a provider that could do it for you, and I'm not sure anyone offers country-based anycast DNS alone.
I don't understand why I need to use https on a static marketing webpage. No login stuff, no JavaScript, nothing. Just straight up HTML and CSS. Right now I need to pay about $150 every year for something that's only used to satisfy Google PageRank (I can't use LetsEncrypt with my hosting provider). Why?
Among other reasons: not encrypting traffic gives bad actors an opportunity to replace content in transit to your end users when those users are on compromised connections, such as rogue "free" wifi networks in airports or coffee shops, or even legitimate networks which have in some way been compromised, e.g. the ISPs of the world that decide to inject other content, such as their own ads, into unencrypted traffic.
The next question is usually "what could they possibly do, change a few pictures?"
They could inject malicious payloads, and for all your users would know, it would appear to them that it came from your site.
> I can't use LetsEncrypt with my hosting provider
Consider switching. For a static site, consider Gitlab; they do a good job of permitting LetsEncrypt.
---
I sincerely appreciate the question, though. I have marketing people ask me this question all the time in private who hesitate to do so in public because quite a few security types berate them for not doing something "obviously" more secure. It's not at all obvious to most of the world's web designers and content creators that a static site should be TLS'd until it's framed (heh) in this manner. The fact that you asked brings about a massive educational moment.
Most of the answers you're getting aren't that big of a deal for your site. You still might want https, though.
You should think about https for sites like yours the way you think about vaccines. SSL everywhere makes everyone safer, even though it doesn't have a tremendous impact on your own site.
Also, shameless plug, if you want really easy SSL you can use our new startup: https://fly.io. I'm not sure what country you're in, but we have a bunch of servers all over to help make it fast. :)
If you have a marketing webpage, you might have a link to signup or login pages. If you can hijack the index page you'll also be able to hijack the links.
Second, I just got back from rural China, where most unblocked American webpages take between 5-15 seconds to load on my mobile phone, and many of them take upwards of a minute to load fully. This seems to be a fun combo of network latency, smaller-than-expected bandwidth, and pages using JavaScript with a series of different load events to display content. That DOMContentLoaded -> XMLHttpRequest -> onreadystatechange chain can add some serious time on a 500ms round trip, and that's without talking about the CSS, the images, and the JavaScript.
I forgot to pay my electric bill before I flew out, and it took me nearly an hour to log in, pay my bill, accept the terms, and confirm payment. I was not a happy camper.
It seems to me that while https is a very good thing, in some cases http and low-bandwidth solutions might be worth implementing, and one might actually want to tailor this to your audience: no one in their right mind is going to waste 5 minutes loading your web page, and if they are so desperate that they need to wait, they are going to hate you every minute they do it.
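The serial load chain described above (the HTML, then the JavaScript it references, then the XHR that JavaScript fires) can be put into a toy model. The sizes and bandwidth below are made-up illustrative numbers, and the model ignores TCP slow start, handshakes, and any parallelism, so treat it as a lower bound:

```python
# Toy model of a serial dependency chain: each stage costs one round
# trip plus its transfer time, and no stage can start before the
# previous one has finished downloading and executing.
def serial_chain_ms(rtt_ms, sizes_kb, bandwidth_kbps=1000):
    total = 0.0
    for size_kb in sizes_kb:
        transfer_ms = size_kb * 8 * 1000 / bandwidth_kbps  # KB -> kbit -> ms
        total += rtt_ms + transfer_ms
    return total

# HTML -> app.js -> XHR for the actual content, on a 500ms rural link
# at ~1 Mbit/s: over three seconds before anything useful renders.
print(serial_chain_ms(500, [20, 200, 10]))  # → 3340.0 (milliseconds)
```

Cutting a link out of the chain (e.g. inlining the content instead of fetching it via XHR) saves a full RTT per stage removed, which is why these chains hurt so much more on a 500ms link than on a 20ms one.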
> I forgot to pay my electric bill before I flew out, and it took me nearly an hour to log in, pay my bill, accept the terms, and confirm payment. I was not a happy camper.
That sucks but I don't see how having a site where you may have to enter payment information on an unsecured connection would be a solution.
> This seems to be a fun combo of network latency, smaller than expected bandwidth, and pages using javascript with a series of different load events to display content.
You forgot about the great firewall of China playing merry MITM with your connections.
Is there an easy way to pipeline those requests over one TCP connection? Or is that only possible with http/2?
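HTTP/1.1 keep-alive already gets you part of the way: multiple requests can reuse one TCP (and TLS) connection, just not concurrently; true multiplexing needs http/2. Here is a minimal self-contained sketch using Python's standard library (the local echo server exists only to make the example runnable):

```python
import http.client
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

# Several HTTP/1.1 requests reusing one TCP connection via keep-alive.
# This is sequential reuse, not true pipelining -- each response is read
# before the next request is sent -- but it already avoids repeating the
# TCP (and, over https, TLS) handshake for every request.

class Handler(BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.1"        # enables keep-alive

    def do_GET(self):
        body = self.path.encode()        # echo the request path back
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):        # silence per-request logging
        pass

server = HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

conn = http.client.HTTPConnection("127.0.0.1", server.server_port)
bodies = []
for path in ("/a", "/b", "/c"):
    conn.request("GET", path)            # same underlying socket each time
    bodies.append(conn.getresponse().read().decode())
conn.close()
server.shutdown()
print(bodies)  # ['/a', '/b', '/c']
```

Browser-style pipelining over HTTP/1.1 (sending requests back-to-back without waiting) is effectively unused in practice because of head-of-line blocking and broken intermediaries, which is why http/2 multiplexes independent streams over one connection instead.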
I wonder if it would be lower latency to open a single websocket tunnel on page load and download assets over the tunnel. Although at that point I suppose you're just replicating the functionality of http/2.
Funny coincidence - I was running into this exact issue earlier today. We had a customer complain about high response times, as measured by curl, from even our /time endpoint (which doesn't do anything except return the server time), and it turned out to be just the TLS handshake:
(as measured from my home computer, in the UK, so connecting to the aws eu-west region)
Luckily it's not that much of an issue for us, as when using an actual client library (unlike with curl) you get HTTP keep-alive, so at least the TCP connection doesn't need to be re-established for every request. And most customers who care about low latency are using a realtime library anyway, which just keeps a websocket open, so it sidesteps the whole issue. Certainly not enough to make us reconsider using TLS by default.
Still, it's a bit annoying when you get someone who thinks they've discovered with curl that latency from them to us is 4x slower than to PubNub, just because the PubNub docs show the http versions of their endpoints whereas ours show https, even though we're basically both using the same set of AWS regions...
One round trip over the course of the time that the user is using the same OS/browser installation isn't much.
The Cloudflare Railgun is an interesting solution, and one that could be implemented in the context of an SPA over a websockets connection. Or conceivably some other consumer of an API.
A related interesting topic is the possibility of secure cache servers that don't break the secure channel: "blind caches". Currently just an RFC draft and probably a long time from mass adoption, but nevertheless interesting.
I really enjoyed the coverage of the same topic in High Performance Browser Networking[0]. It effectively explains the key performance influencers across various networks without being boring.
> In our case at Hunter, users were waiting on average 270ms for the handshake to be finished. Considering requests are handled in about 60ms on average, this was clearly too much.
Why? Did it hurt user engagement? Were people complaining the site was slow?
The "Railgun" feature mentioned in the article is only available in some paid plans. Using the free plan wouldn't keep an open connection between your servers and Cloudflare's.
It does improve the situation by terminating users' handshakes early, using better links, warm DNS caches, etc. among its servers. But the hard latency limit between your server and CF is still present. Skipping https between your server and CF is not an option either, for any site transferring user data.
Although it doesn't help for all types of requests, it has its uses.
… so that's ~200ms more, and _on http_. That's quite a huge chunk of time that Cloudflare HTTPS + Railgun saves/shaves for us. Recommend it highly!
That would be interesting.
It's a bit older, but here's some info, much of it is still valid: https://istlsfastyet.com/
Without OCSP stapling, the browser makes a slow request to the CA, but it caches the result for a long time, so the slow request doesn't happen often.
With OCSP stapling enabled, more data is transferred between client and server on each TLS handshake.
The main proponents of OCSP stapling are CAs, because it saves them bandwidth/hardware.
Does anyone remember, a few years ago, when Google found out through leaks that the government was wiretapping its private traffic between datacentres?
What makes you so naive as to think that the government isn't sniffing every single page on Cloudflare?
EDIT: Looks like easyDNS enterprise may be able to do it, https://fusion.easydns.com/Knowledgebase/Article/View/214/7/... for about $12.75/mo too. Might be a decent way to brew your own mini caching CDN for fairly cheap.
The Internet is not a safe place. We should aim for HTTPS EVERYWHERE.
The second reason is more moral: making https the default means more and more of the web will be encrypted and authenticated. This is a good thing.
First, almost every firewall out there right now supports https snooping via MITM. Example: https://www.paloaltonetworks.com/features/decryption
Seems prudent to mention that this requires the cooperation of the client being MITMed. Specifically, the client needs to install a root certificate.
https://tools.ietf.org/html/draft-thomson-http-bc-00, and Ericsson's article on it https://www.ericsson.com/thecompany/our_publications/ericsso...
[0]: https://hpbn.co/
If it’s no, then clearly we should improve.