It feels like they used the groundwork laid for AWS Outposts to enable these smaller auxiliary Local Zones as well: in both cases the control plane still resides in the "real" region, while the data plane lives somewhere else (the customer's data center for an Outpost, somewhere in LAX for this Local Zone). I'd even bet the hardware powering AWS Outposts and a Local Zone like this is the same. Obviously the scale and the offered services differ, though.
There are definitely similarities, but I could see it being totally different too. Outposts is only 16 racks max; I'd assume this is much more than that. An Outpost serves a single customer, while this serves everyone. The billing models presumably differ. And an Outpost goes in the user's DC, whereas I assume this is hard-wired to the AWS network backbone.
IDK if making these based on the same infra would be the right abstraction layer.
This could also be big for web UIs that lean on server-side rendering. I've been ramping up our use of Blazor Server for web interfaces, and putting an application server in one of these Local Zones near where we all work/live could have a really positive impact on perceived performance.
Right now I ping ~50ms out to us-east-1 and things feel "pretty good" in our server-side web UIs. If I could drop that by a factor of 10, we'd be in gaming-monitor-latency territory, and pure UI state changes could resolve in timeframes that are perceptually instantaneous for most users. E.g. for something like clicking a button to pop a modal, you wouldn't even worry about making it a client-side interaction anymore; you'd just wire it up with a trivial @if (showModal) block in the server-side HTML template.
Granted, this imposes a pretty harsh geographic constraint if you have just the one server, but it's likely feasible to separate the view layer from your persistence/stateful layers: host the view-rendering services in multiple Local Zones, and keep all the business logic and state in one of the primary regions. Not everything can be instantaneous, but if the UI itself is highly responsive, there are countless UX approaches for telling a user in a friendly way that they simply need to wait a moment. Being able to build your web UI around blocking calls into business logic seems like a powerful place to be in terms of simplicity and control.
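To make that concrete, here's roughly what the pattern looks like as a Blazor Server component (a minimal sketch; showModal is the flag from above, everything else is illustrative):

    @* Every click is a round trip to the server over SignalR; with the
       server ~5ms away, the re-render lands well within a frame or two. *@
    <button @onclick="() => showModal = true">Open settings</button>

    @if (showModal)
    {
        <div class="modal">
            <p>Rendered entirely server-side.</p>
            <button @onclick="() => showModal = false">Close</button>
        </div>
    }

    @code {
        private bool showModal;
    }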
AWS seems to want to discourage use of us-west-1, which is the Northern California region. us-west-2 probably has a lot more capacity. Historically it's tended to be cheaper, too.
us-west-2 seems to be the anchor region on the west coast: more AZs and capacity than us-west-1, and nearly identical pricing and instance-class availability to us-east-1.
I was wondering the same thing. Seems like an odd choice.
My hunch is that us-west-2 is more popular (literally every company I've worked for that used, or currently uses, AWS chose us-west-2 over us-west-1, even when us-west-1 was geographically closer).
Probably because you'd want to move the non-latency-sensitive parts of your app to a large, cheap region anyway, and us-west-2 is both larger and cheaper.
Who's excited about this? What's a use case of yours that just became viable because of it? I definitely don't mean these questions in a condescending way; I just want to get a read on the pulse from the folks here who will use it :)
All of my Asian bandwidth comes through LA or Vancouver; big, cheap peering with other telecoms to cross the ocean. We already have a decent colo presence in LA, so this will let us consolidate some of that and move other VMs in.
My understanding is that things like online games could take advantage of it for the latency. Anything with tight latency requirements would benefit from a closer endpoint.
I used to work for smilebooth, a portable photo/videobooth thing. I could see this in theory being used for realtime video processing like greenscreen? That said, it's not so hard to build greenscreen into the device itself (we did).
But if it needs to do super-high-quality realtime 4K greenscreen that would overload an embedded CPU, and there's a built-in AWS library that does it, and the bandwidth is high enough, and you don't want to buy dedicated greenscreen HW that sits next to your booth (which is the other thing we did), then maybe.
An AWS region consists of several availability zones (AZs), which in turn consist of several data centers running AWS's hardware. Each region is designed so that the services it provides can tolerate the loss of an availability zone. A Local Zone is now something like an additional availability zone, with the important difference that it runs only a subset of the services of a regular availability zone and doesn't have its own control plane (the services AWS needs to run all this infrastructure, including API endpoints, etc.). Instead, a Local Zone relies on the control plane of its parent region and runs only the so-called data plane, which contains the services customers actually use.
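You can see that relationship from the API side: once you opt in, the Local Zone shows up as just one more zone of its parent region (a sketch using the real LA group/zone names; Local Zones are opt-in per account):

    # One-time opt-in to the LA Local Zone group for this account.
    aws ec2 modify-availability-zone-group --region us-west-2 \
        --group-name us-west-2-lax-1 --opt-in-status opted-in

    # The Local Zone is then listed alongside us-west-2's regular AZs.
    aws ec2 describe-availability-zones --region us-west-2 --all-availability-zones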
The blog post contains a list. Quote: “Services – We are launching with support for seven EC2 instance types (T3, C5, M5, R5, R5d, I3en, and G4), two EBS volume types (io1 and gp2), Amazon FSx for Windows File Server, Amazon FSx for Lustre, Application Load Balancer, and Amazon Virtual Private Cloud. Single-Zone RDS is on the near-term roadmap, and other services will come later based on customer demand. Applications running in a Local Zone can also make use of services in the parent region.”
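So launching into the zone should look like any other EC2 launch, just pointed at a subnet that lives in the Local Zone, using one of the supported instance types from that list (a sketch; the AMI and subnet IDs are placeholders):

    # Hypothetical IDs; the subnet must be in us-west-2-lax-1a.
    aws ec2 run-instances --region us-west-2 \
        --image-id ami-0123456789abcdef0 \
        --instance-type t3.medium \
        --subnet-id subnet-0123456789abcdef0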
This would be exciting, but I'm already having trouble managing subnets, and this just makes it worse. If these new Local Zones supported IPv6, that would make it much easier to plan and support new networks.
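For now it looks like the usual IPv4 carving exercise: the zone just hangs off your existing VPC as one more subnet (a sketch; the VPC ID and CIDR are placeholders, and whether you could attach an IPv6 block instead is exactly the open question):

    # Carve a new IPv4 subnet out of the existing VPC, pinned to the Local Zone.
    aws ec2 create-subnet --region us-west-2 \
        --vpc-id vpc-0123456789abcdef0 \
        --cidr-block 10.0.128.0/20 \
        --availability-zone us-west-2-lax-1a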
Wow, yet more obscure, non-intuitive naming from AWS. Why not just us-west-3a-local?
It's extremely intuitive: in 'us-west-2-lax-1a', 'us-west-2' is the parent region, 'lax' marks the extension in Los Angeles, and '1a' is the first availability zone there.
Your suggestion makes no sense: it implies an entirely new region, 'us-west-3', but with an entirely different naming scheme (no region name ends in an 'a'), and the 'local' suffix provides zero context and leaves no room for expansion.
It's on here.
if 'us-west-2' in region:  # do it
That is why our primary location is in Denver. No earthquakes, no terrorism, no floods.
rsync.net customers all over California happily keep their backups in boring old Denver ...
(with a one hop route into the he.net core in Fremont ...)
https://noc.net.internet2.edu/
Which would you choose?