Hacker News is one of the most responsive websites I know. And it is run on a single server somewhere in the USA. While I am in Europe.
> If you have users in Sydney, Australia ...
> ... you are floored at 104ms of latency for your request
When I open AirBnB with a clean browser cache, it takes several seconds until I see something useful. Those 104ms of latency don't make a dent.
Reddit takes over 5 seconds until the cookie banner is ready for me to click it away.
Twitter takes 6 seconds to load the homepage and display the single tweet which fits on it.
> preview images take a little longer to load
Preview images of what? Images usually should be routed through a CDN which caches them directly in the user's country. It's extremely cheap and easy to set up. Nothing compared to running an application in multiple datacenters.
HN front page isn't filled with garbage and loaded with a million lines of pointless javascript like all other sites you mentioned. HN front page is 27 kb in size while, for example, reddit is 943 kb.
Performance issues with websites are entirely a self-made problem. There are plenty of fast, lean and yet very functional web pages like HN that prove it.
The reason HN loads in ~500ms in the EU, and is fast-ish abroad, is that it takes exactly two somewhat lean roundtrips to load over the same TLS connection: one HTML blob and one render-blocking CSS file, and the latter is cachable. There is JS, but it is loaded at the end. That is a hell of a lot better than the average website.
When the CSS is cached, this becomes ~200ms. Considering that the roundtrip is 168ms, this means that the vast majority of the pageload is just waiting for bytes to cross the Atlantic. The roundtrip to eu-central is 30-50ms in comparison, less than 1/3rd. A 3 times faster pageload is a significant UX difference.
Now, the reason we accept it with HN is that HN is just a list of direct anchor tags - the majority of the content is other websites. If the page had been more interactive, firing requests and dynamically fetching content, it would feel sluggish at best. The difference in UX caused by latency is huge.
Same here. We run a very content/media-heavy SaaS SPA application completely out of a single location in Germany and have customers that are primarily located in the US, and also in places like Australia and Japan. We don't use any CDN, every request has to go to our origin in Germany. Yet customers regularly tell us how fast and snappy our application is. Why? While we do make dozens or even hundreds of requests per page navigation in the SPA (images, XHR), these are all fired off in parallel, and none of these are blocking anything. A CDN would probably improve things slightly, but currently we don't feel like we need it.
Browsing HN only needs one round-trip (fetch the HTML), maybe two if you don't have it cached (fetch the CSS).
Many apps need more round-trips, by loading assets sequentially. For example: fetch the HTML, then the JS, then the JS downloads its config, then the JS fetches some assets. Latency accumulates with every round-trip.
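To make the accumulation concrete, here is a toy model; the 104 ms round trip is the figure quoted earlier in the thread, and the four-step chain (HTML, then JS, then config, then assets) is the hypothetical sequence from this comment:

```python
RTT = 0.104  # seconds per round trip, the Sydney -> US figure from above

# Dependent chain: each fetch can only start once the previous one returns.
chain = ["html", "js", "js config", "assets"]
sequential = len(chain) * RTT  # four full round trips, one after another

# If the same resources could all be requested up front, the user would
# only wait for the slowest of them (here each costs a single round trip).
parallel = RTT

print(f"sequential: {sequential * 1000:.0f} ms")  # 416 ms
print(f"parallel:   {parallel * 1000:.0f} ms")    # 104 ms
```

The absolute numbers are made up; the point is that dependent requests multiply the distance penalty, while parallel ones pay it once.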
There are a few factors to keep in mind. HN is much smaller, uses fewer resources and fewer individual files; fewer roundtrips (of the hypothetical 104 ms) to get all the resources.
HTTP / TCP overhead and throttling is another factor; there are a few bounces back and forth before a request can be fulfilled.
However, with HTTP/2 and other advancements, the number of roundtrips and the number of requests needed to get resources is reduced by a lot. HTTP/1 needed one connection per resource; HTTP/2 and 3 can bundle them into the same connection. I'm not explaining it very well; there are good resources out there.
anyway, HN is an example of old internet style websites, where latency is compensated for by the webpages just being small and snappy. A lot of advances in things like latency and speed have been undone by heavier websites.
Latency isn't just a static number bolted on to baseline performance. While it may intuitively feel correct to say "if distance adds 100ms to every request, then a website with 200ms baseline performance would be 33% slower, but a website with 5000ms baseline performance would only be 2% slower", it's not. It's additive. A typical website like reddit will require hundreds if not thousands of requests to prop up the home page; many of these requests cascade in batches, their inputs being dependent on the output of the others; so, adding 100ms to every request can very realistically mean the addition of 1-2s of load time, on a site that's already slow because it's poorly engineered.
There are newer web technologies and methodologies to help get around some of these problems that request cascades have. React itself, on the other hand, in how it loads children, then children of children, oftentimes conditionally based on data availability, has made the problem worse. There's also CDNs, and more recently, edge-local compute like CF Workers. The emergence of all of these technologies to help triage the problems geographic centrality in service deployments creates should be all the evidence you need to change your mind that this is a real problem.
But, it will remain a problem, because much like passwords to passkeys: It takes a very long time to clear out the calcification that's been built up in our human and computer systems. Not only does it require a shift in how we think about systems design; it requires convincing people like you that this is, in fact, a problem; people who may have not left town for years, not experienced the internet from the other side of the globe. Ashburn VA is ~2500 miles from the west coast of the US; ~4000 miles from Europe, Hawaii, or Alaska; and ~11,000 miles from Perth, Australia or Jakarta. Yes; the US is that large, and the world is that large; your experience in Europe is barely different than what a person living in California would experience, on a site that centralizes itself in us-east-1. There's a lot more distance to cover once you move out of the West.
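A sketch of the additive cascade effect described above; the batch count and the round-trip times are invented for illustration:

```python
def load_time(dependent_batches: int, rtt: float) -> float:
    """Each batch of requests can only fire after the previous batch
    resolves, so every batch pays one full round trip."""
    return dependent_batches * rtt

BATCHES = 12      # hypothetical cascade depth for a heavy page
NEAR_RTT = 0.050  # seconds, user close to the datacenter
FAR_RTT = 0.150   # same user after distance adds 100 ms per round trip

near = load_time(BATCHES, NEAR_RTT)  # 0.6 s
far = load_time(BATCHES, FAR_RTT)    # 1.8 s

print(f"extra load time from distance: {far - near:.1f} s")  # 1.2 s
```

With a dozen dependent batches, the extra 100 ms per round trip lands squarely in the 1-2 s range the comment describes.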
Lazy loading images drives me a little bit nuts. Previously with JS, now built into the browser. You scroll down and then you’re waiting for an image, regardless of CDN. The latency of an additional connection is notable. It’s particularly aggravating if you opened a page in a background tab minutes or hours ago and when you start reading it, the page isn’t really there as expected but still needs to lazy load.
Aren't there several requests to complete though? Assuming 5 synchronous requests before the first byte of your web page is rendered (a DNS lookup, a TLS connection to establish (which requires 2 requests), then a preflight request, and finally the actual request) that's a full half a second just on the flight time regardless of all the other costs. That's an extra 10% on top of the 5s it takes for Reddit to load. Subsequent requests would be faster but generally it's the first page load people really notice.
I'm in Melbourne and this is indeed one of the fastest websites I use. But most websites are bloated JavaScript apps with a gazillion assets so it's chalk and cheese really.
> Preview images of what? Images usually should be routed through a CDN which caches them directly in the user's country. It's extremely cheap and easy to set up. Nothing compared to running an application in multiple datacenters.
I don't think most services cache data that isn't normally used in that region. In my experience, Twitter, YouTube, and Facebook feel sluggish when you are viewing Japanese content from Singapore compared to viewing the same content in Japan, etc.
I live right in the middle of the Silicon Valley and I observe the same timings.
Hacker news loads to content faster than Google or Bing search.
In fact hacker news is pretty much the only website I can actually feel slower to load when I am in France. Because the latency is actually not lost into the noise for once.
The speed of light in a fiber optic cable is slower than light in a vacuum, about 2.14e8 m/s.
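As a rough check on that number (the distance is an assumed great-circle figure; real cable routes are considerably longer):

```python
FIBER_SPEED = 2.14e8  # m/s, roughly c divided by the refractive index of glass

# Assumed straight-line distance from Sydney to the US east coast.
distance_m = 15_000e3

one_way = distance_m / FIBER_SPEED
print(f"one way:    {one_way * 1000:.0f} ms")      # ~70 ms
print(f"round trip: {2 * one_way * 1000:.0f} ms")  # ~140 ms
```

That physical floor is before any routing detours, queuing, or protocol handshakes are added on top.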
If you feel latency, it's probably not the one-direction or round-trip latency, but rather the MANY round trips that are typically required for an HTTP request. DNS is probably 2 round trips (CNAME then A), and that has to cross the ocean via your resolver of choice (8.8.8.8 or whatever) to get to the authoritative server if it's not already cached (or distributed; big DNS providers will serve your zone in many regions). Then you have to set up a TCP session, which is 1.5 round trips. Then you have to set up TLS, which varies, and make an HTTP request, and wait for the response. (I counted 5 total round trips until you see the response.)
So basically if you calculate the one-way light travel time between two points, multiply that by 2*(2+5) = 14 (seven round trips, each two crossings) in the worst case to see your time to first byte. Doing something 14 times is always going to be slow.
The underlying issue here is not so much the distance, but rather that TCP, TLS, and HTTP don't care about latency at all. (I'll ignore the application layer, which probably wants to redirect you to /verify-session-cookie and then /hey-you-logged-in for some reason. And yes, TLS1.3 has 0RTT handshakes now too, eliminating some trips.)
This is the problem that HTTP/3 aims to fix; one round trip replaces the TCP handshake, TLS handshake, and HTTP request. You shoot out a packet, you get back an HTTP response. (You still have to do the DNS lookup, so we'll call this 3 round trips total.)
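The accounting above can be tallied in a few lines. The per-step counts are one plausible reading of the parent comment (it settled on five round trips in total, and TLS varies by version, so treat these as illustrative):

```python
RTT = 0.104  # seconds, the thread's Sydney -> us-east round trip

# Worst-case round trips before the first response byte, HTTP/1.1 over TLS 1.2.
classic = {"dns (cname then a)": 2, "tcp handshake": 1.5,
           "tls 1.2 handshake": 2, "http request/response": 1}

# HTTP/3: QUIC folds transport setup, TLS, and the request into one
# round trip; DNS still costs its two.
h3 = {"dns (cname then a)": 2, "quic + request": 1}

for name, trips in (("http/1.1 + tls 1.2", classic), ("http/3", h3)):
    total = sum(trips.values())
    print(f"{name}: {total:g} round trips, ~{total * RTT * 1000:.0f} ms to first byte")
```

On the thread's 104 ms round trip, the difference between the two stacks is measured in hundreds of milliseconds before a single byte of content arrives.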
Your post is a good read for everyone trying to calculate the number of RTTs solely at the HTTP layer, which is done so often, but is always wrong.
To add to your post, don't forget TCP congestion window scaling, which will add some more roundtrips - this mostly depends on the size and bandwidth of the resources, so smaller sites like HN have an advantage here. Especially if the initial resources fit within the initcwnd (10*MSS, usually around 15kb). But this, like many of the parameters you mentioned, is highly flow- and software-specific, so it becomes hard to make meaningful predictions.
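A toy slow-start model shows why the initcwnd matters for small pages; the MSS and 10-segment initial window follow the comment, everything else is idealised:

```python
MSS = 1460           # bytes per segment, typical for Ethernet
INITCWND = 10 * MSS  # initial congestion window, ~14.6 kB as noted above

def round_trips_to_send(size_bytes: int) -> int:
    """Round trips to deliver a response under idealised slow start:
    the congestion window doubles after every round trip."""
    sent, cwnd, trips = 0, INITCWND, 0
    while sent < size_bytes:
        sent += cwnd
        cwnd *= 2
        trips += 1
    return trips

print(round_trips_to_send(27 * 1024))   # HN front page (27 kB): 2
print(round_trips_to_send(943 * 1024))  # reddit front page (943 kB): 7
```

So the small page is delivered in two windows, while the heavy one pays several extra cross-ocean round trips before the last byte arrives.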
DNS is typically a single round-trip for CNAME + A. And if you're using a resolver such as 8.8.8.8 or 1.1.1.1, it will route to a resolver close to you in most cases, and already have the results cached for any major site.
I've been doing this long enough that I remember when all the big web sites were hosted in California. In fact, my company had its web farm in Sunnyvale which we managed over frame relay from Atlanta.
Whenever I'd visit the west coast, I was shocked how much faster the web seemed.
So I sympathize with the sentiment.
Thing is though, the entire web feels pretty sluggish to me these days. And that's with us-east-1 less than 300 miles away from me. Because most web sites aren't slow due to where they're hosted, but rather because of how bloated with crap most of them have become.
> Thing is though, the entire web feels pretty sluggish to me these days. And that's with us-east-1 less than 300 miles away from me. Because most web sites aren't slow due to where they're hosted, but rather because of how bloated with crap most of them have become.
It doesn't seem much faster than it seemed in 1995 when I first got online. There's much more stuff, but the latency doesn't really seem much better.
It's probably commercial: people don't mind wasting 3-4 seconds at a time loading Reddit/FB/etc and in that time a whole bunch of code that's useful to the website operator is loaded. All the stuff that tracks what you're up to.
I've even seen sites that don't load properly or aren't usable before the cookie banner renders, which may be loaded inefficiently from somewhere else. It's tragic!
Good article! I always notice this same effect when I visit my parents in Argentina or I'm in Europe.
> Using a global CDN can help get your assets to your users quicker, and most companies by this point are using something like Cloudflare or Vercel, but many still only serve static or cached content this way. Very frequently the origin server will still be a centralized monolith deployed in only one location, or there will only be a single database cluster.
Notably: even if the source of truth is single-region, there's a lot that can be done to improve the experience by flushing parts of the page at the edge.
Check out https://how-is-this-not-illegal.vercel.app/ where the layout.tsx[1] file is edge-rendered right away with placeholders, and then the edge renderer streams the content when the single-region database responds.
Furthermore, consider that parts of the page (like the CMS content) can also be cached and pushed to the edge more easily than, say, a shipping estimate or personalized product recommendations, so you can have content as part of that initial placeholder flush. We have an e-commerce example that shows this[2].
[1] https://github.com/rauchg/how-is-this-not-illegal/blob/main/...
[2] https://app-router.vercel.app/streaming/edge/product/1
Another way a global CDN helps is that your HTTPS negotiation takes place closer by.
There are (in HTTP/1.1 at least) many back-and-forth steps to negotiate the HTTPS connection, the encryption key, etc. A global CDN in front of a cloud service (CloudFront is the example I know best) lets the user do those back-and-forths with a server very close to them, then handle the long haul to where the request is handled in one round trip.
Eg: putting CloudFront in front of your API calls can make them faster! Great video by Slack on the topic: https://m.youtube.com/watch?v=oVaTiRl9-v0
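A sketch of that saving, using made-up round-trip times and the simplified HTTP/1.1-era handshake counts from this comment:

```python
NEAR_RTT = 0.010  # seconds, user to a nearby CDN edge (assumed)
FAR_RTT = 0.150   # seconds, user to the distant origin (assumed)

SETUP_TRIPS = 3.5   # TCP (1.5) plus TLS (2) handshake round trips
REQUEST_TRIPS = 1   # the actual request/response

# Talking straight to the origin: every round trip crosses the long haul.
direct = (SETUP_TRIPS + REQUEST_TRIPS) * FAR_RTT

# Via the edge: handshakes terminate nearby, and the edge keeps a warm
# connection to the origin, so the long haul is paid only once.
via_edge = SETUP_TRIPS * NEAR_RTT + REQUEST_TRIPS * FAR_RTT

print(f"direct to origin: {direct * 1000:.0f} ms")    # 675 ms
print(f"via CDN edge:     {via_edge * 1000:.0f} ms")  # 185 ms
```

Even with the origin exactly as far away, terminating the chatty setup nearby removes most of the distance penalty.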
I think the peering agreements of the local ISP are likely to be a factor as well.
When I moved inside Europe I suddenly noticed slow connections to Github pages. I expected that it had something to do with the physical location of the Github pages servers. However, when I connected to the VPN of my previous location it all was snappy again. That eliminated the physical distance as a cause.
To counter the top comment at the moment, being from Sydney, Australia, I totally do buy it. It also works both ways: if you want to build something with global reach but host it locally, you’re immediately going to be penalised by the perceptions that come with latency. Also, I might add that the latency builds up non-linearly the more throughput you’re attempting to achieve (e.g. streaming video).
Disclaimer: I am currently working for a startup attempting to build a video creation and distribution platform with global reach.
As an Australian, I agree that I usually prefer when a service is hosted nearby. Yet… 200ms latency, that’s pretty good actually. For some real data, I just tried `ping ec2.us-east-1.amazonaws.com` and time is 240ms. That’s in Tasmania, NBN over Wifi. I’m happy with that!
But the problem, like many of the other commenters are saying, is that for a single request us-east-1 is actually fine. But for a modern web app with many requests, that compounds real quick. I actually think living here is an advantage as a web developer because it’s like those athletes that only train at high altitudes — living in a high latency environment means you notice problems easily.
That puts things into perspective for us in South Africa*. My RTT to Europe is about 170-180ms. It used to be a bit better, not sure what happened. But the point is that it's just barely within what I would consider "fast" in relation to Europe.
(*) Similar to AU, we're also in the middle of "nowhere"
But tbh I think this is mainly a problem for apps that have a long chain of dependent requests. If you make 3 requests one after the other, it's probably fine. If the DOM isn't stable until after a series of 10 requests, any amount of latency is noticeable.
As a European, visiting the USA, you certainly find that most of the internet just works better.
However I think a big chunk of the effect is that European mobile networks seem to take a second or two to 'connect' - i.e. if you take your phone from your pocket and open an app, the first network request takes 2-3 seconds. Whereas for whatever reason, the same phone in the USA doesn't seem to have such a delay.
I usually get ~300ms ping from my home to us-east-1. You can absolutely feel the latency, especially on SPAs that perform many small XHRs sequentially, which compounds the latency even more. Apps that felt almost instant on a network with <10ms latency suddenly feel pretty sluggish.
Some of my worst experiences: being forced to use SFTP to transfer thousands of small files to a server in us-east-1, which can take hours due to latency alone, compared to transferring the same set of files via rsync or a compressed archive, which finishes in minutes. Another was using RDP to access a remote Windows machine behind a VPN, then running PuTTY there to access a Linux server (the VPN only allows RDP traffic), and then needing to transfer thousands of files to that Linux server as well (my solution was to punch a hole through their shitty VPN via an intermediary bastion host I fully control, which allows me to use rsync).
Funny, I can pinpoint players' locations based on their pings pretty accurately too. 300ms+ is Asia, 350+ is Australia, Americans are 120+, South America 170+.
Ping towards USA has lowered the most. This used to be 225ms in the earlier online days.
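That guessing game can be written down as a toy classifier; the thresholds are the ones from the comment (a European vantage point is assumed), checked from highest to lowest because the bands overlap:

```python
def guess_region(ping_ms: float) -> str:
    """Map a player's ping to a likely region using the rough
    thresholds quoted above (European game server assumed)."""
    if ping_ms >= 350:
        return "Australia"
    if ping_ms >= 300:
        return "Asia"
    if ping_ms >= 170:
        return "South America"
    if ping_ms >= 120:
        return "North America"
    return "Europe"

print(guess_region(360))  # Australia
print(guess_region(130))  # North America
print(guess_region(40))   # Europe
```

It works because the dominant term in game ping is exactly the fixed cross-ocean round trip discussed in this thread.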
This article brought to mind a different but related scenario. I live on an island that was recently affected by a typhoon. Internet speeds are usually pretty good, but in the aftermath of the storm cable internet has been up-and-down depending on the day, and the cell towers are very spotty. I've found that most modern apps depend on a high-speed connection, and give a very poor experience otherwise. Of course this seems obvious in hindsight, but it's a different experience living through it.
AFAIK, there is no generic datastore that does multi-region with the ability to move the leader for a given subset of data around. Something like what's written in the Spanner paper (microshards, and moving microshards around based on user access) would be amazing if it were accessible.
no mention, or realisation, of the storage and bandwidth requirements for hosting anything other than text. html, js and database queries are cheap to stick on a global CDN, but when it comes to larger multimedia files, such as images and videos, the costs soon skyrocket
I remember when we first moved to the cloud from a datacenter. It was in us-east-1, and literally the day after the switch over (before we started configuring multi-region) was the first time us-east-1 had its major outage.
The owners were pissed that it had gone down. But it wasn't that it went down; it was that we were basically sitting around with our thumbs up our ass. When things went down in our DC, we just fixed them, or at least we could give an ETA until things went back to normal. Here we had absolutely nothing. We couldn't access anything, and AWS was slow to even acknowledge an issue existed.
That was a good lesson that day: the cloud is not necessarily better.
If you want a smooth experience that is easy to set up, you can provide a download link (gasp) and serve that over a CDN, and just have your app be native.
You'll only pay for backend queries, not for every single button style
It’s a solvable problem if you optimize for multiple regions from day 1 of the app but migrating an existing stack to multi-region after the fact is often a large enough undertaking that you pick the region of the majority of users and go with it.
The process of setting up an active/passive region with the db is becoming more common, but an active/active design is still relatively rare outside of apps designed for massive scale.
Even if you have gigabit in Australia, the latency when browsing Youtube and clicking through menus is a world of difference when you compare it to the US
The tab of browser devtools that lets you simulate slow connections should probably add simulation of this kind of latency, as well as a 'simulate AWS outage' toggle if that's even possible (I don't know enough about DNS to know how hard the latter is).
I guessed from the title that this would focus on redundancy, but I guess that's rarely noticeable.
> In reality, the ping you’ll experience will be worse, at around 215ms (which is a pretty amazing feat in and of itself - all those factors above only double the time it takes to get from Sydney to the eastern US).
Isn't it double just because ping measures round trip time?
There’s also the device speed. It might provide a different reference point for different users.
If you’re opening a website on a low-end smartphone with an outdated system, the network latency might not be noticeable (because the UX of the device is so slow anyway).
Ngl, a website where you input a URL and it checks the latency around the world would be interesting. Bonus points for trying to guess where it's hosted. I'd love to know how my sites behave in Frankfurt, Mumbai, SF, Sydney, etc.
I hope your DNS doesn't have to do all those cross-ocean trips. Most anycast DNS providers should have lots of PoPs (regions) and are really fast.
CDNs usually solve a lot of the static asset issues.
The main issue is the database.
The response time for Bitbucket for example is:
100ms from us-east
300ms from us-west
400ms from eu-central
600ms from tokyo
800ms from sydney
(numbers from OnlineOrNot)
https://onlineornot.com/do-i-need-a-cdn?url=https://bitbucke...