There are quite a few significant things you've missed that should have been included - maybe ones for part two:
* Network ACLs, which describe the ruleset (think of it as a stateless firewall) for subnets and their respective routes. Whilst they are optional, having a default set straightens out a lot of the duplication that can end up in Security Groups (which are stateful in nature).
* Elastic (public) IPs. NAT instances/gateways require them, and there is a dance to be done around allocating them in the account and attaching them to instance interfaces.
* IPv6 components. Egress-only Internet Gateways operate differently from IGWs: since there is no NAT, they need a route applied across all subnets, both public and private. The IPv6 CIDR allocation gives the VPC a /56 (each subnet then gets a /64, and each instance's interface gets a /128, which is bananas, but IPv6 is a second-class citizen on AWS). Finally, the subnets need updating so that automatic IPv6 address assignment happens.
* VPC Gateways - these are broken into two types: the older type, which supports S3/DynamoDB and effectively allows traffic in a public/private subnet to bypass NAT. Enabling these can bring significant advantages in access and throughput. The newer "PrivateLink" services are different and have pricing costs associated with them.
* DNS and DHCP: it's a rule in the VPC that the delegated resolver lives at ".2" of the VPC's CIDR and operates split-horizon - EC2 hostnames set up accordingly, when resolved by instances inside the VPC, will get the private VPC CIDR address, not any Elastic IP.
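The ".2" resolver rule above is easy to illustrate with Python's `ipaddress` module (the CIDR here is just an example, not a recommendation):

```python
import ipaddress

# Illustrative VPC CIDR; the VPC resolver conventionally sits at the
# network address plus two.
vpc = ipaddress.ip_network("172.31.0.0/16")
resolver = vpc.network_address + 2
print(resolver)  # 172.31.0.2
```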
Network ACLs have been pointed out as missing from this before, but quite a few people said they were right not to be included. I didn't put them in because I've never used them, so they didn't fall under 'need to know' from my perspective.
IPv6 is another point of contention, but again it's not something I've ever used, so - apart from any other controversies with it ("...IPv6 which is only marginally better than IPv4 and which offers no tangible benefit...", https://varnish-cache.org/docs/trunk/phk/http20.html) - I'm not qualified to write about it.
EIPs and ENIs should probably have been in there, but I don't tend to use those very often either, so they didn't occur to me.
I'm not sure that VPC Gateways, DNS or DHCP are necessarily need-to-know things either. VPC Gateways serve a specific routing optimisation which not everyone is going to need. I didn't know the details of the DNS setup for a VPC, so thank you for that.
Thank you for the feedback - I really appreciate you taking the time.
I would also add VPC PrivateLink to the list, which lets you establish private connections between systems in different VPCs without having to either peer them or connect them in other ways. PrivateLink relieves the pressure you might otherwise feel to build a lot of systems in the same VPC.
Another useful concept (not VPC-specific) is using the Infrastructure-as-Code paradigm (e.g., CloudFormation, Terraform) to capture all of your networking configuration in source control, along with who made any changes and the reasons or design documentation for them.
> Network ACLs [...] Whilst they are optional, having a default set straightens out a lot of duplication that may end up in Security Groups (which are more stateful in nature).
I inherited an infrastructure that had NACLs and security groups with duplicate entry points and policies - years of accumulated cruft, because it was poorly designed and the documentation was even worse (read: nonexistent); security groups all the way down. That one threw me into a hard and annoying mental loop for a couple of hours until picking through it with the finest-tooth comb revealed what was going on.
The fun part is going to be rebuilding our routing in a new VPC such that it doesn't make the next guy want to put his head in a black hole.
I'd be lying if I said it wasn't a fun challenge in a sordid kind of way, though.
How do folks use Network ACLs? I haven't used them personally - relying more on security groups and segmenting subnets to specific tasks (e.g. attached to a public network via an IGW, or a private network only).
Network ACLs are quite tricky to debug. For one of my connections, network calls were failing because ESP was blocked at the ACL layer - the ACL was blocking all non-TCP traffic by default. Funnily, network calls within the same data center were working but were failing when calling another data center. I had to look at VPC flow logs to figure out that non-TCP protocols were being blocked.
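A minimal sketch of the kind of check flow logs enable - spotting rejected non-TCP flows (ESP is IP protocol 50). The log lines below are fabricated for illustration, but follow the default flow-log field order:

```python
# Fabricated default-format VPC flow log records (version, account,
# interface, src, dst, srcport, dstport, protocol, packets, bytes,
# start, end, action, log-status).
SAMPLE_LOGS = [
    "2 123456789010 eni-abc123 10.0.1.5 10.0.2.7 0 0 50 10 8400 1600000000 1600000060 REJECT OK",
    "2 123456789010 eni-abc123 10.0.1.5 10.0.2.7 44321 443 6 20 4200 1600000000 1600000060 ACCEPT OK",
]

def rejected_non_tcp(lines):
    """Return the protocol numbers of rejected non-TCP flows."""
    hits = []
    for line in lines:
        fields = line.split()
        protocol, action = int(fields[7]), fields[12]
        if protocol != 6 and action == "REJECT":  # 6 = TCP
            hits.append(protocol)
    return hits

print(rejected_non_tcp(SAMPLE_LOGS))  # [50]  (50 = ESP)
```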
Let me summarize further.
If you come from 20 years of application development and network design/administration in 'real' LAN and IGRP networks with 'real' hardware, you are going to be learning everything again.
These cloud end user environments are fake eggs and saccharine sweetener.
Off topic, but as a network guy at heart I've always been fairly happy with how AWS implements the network side of things, especially in comparison to something like Azure.
With AWS you have the same basic concepts of a network, and the terminology aligns enough that you can make sense of it fairly quickly if you're in the network realm. Azure, however, takes all of that 'network' stuff and turns it into an abstraction where you have to carefully follow one of their guides only to realize it's out of date, or that the UI doesn't show the appropriate information, etc. You also have Azure network portions that block ICMP because of 'security'.
This is all anecdotal from my experience, of course, but it's why I keep referring to Azure as the "Excel spreadsheet of the cloud": the entire design of it is in your face and unintuitive.
For instance, if I wanted to make a direct connection like Direct Connect to multiple VPCs in AWS, I'd use the Transit Gateway, connect to it from on-prem, add the VPC and the route, and be done.
In Azure, I'd use ExpressRoute: add the ExpressRoute circuit to a subscription, add a gateway for that, then an additional gateway for each VPC equivalent, create an authorization key for each 'VPC' equivalent and sync them, and then define routing per gateway. Then when you go to trace the network path, ICMP is blocked.
I know AWS is more mature than Azure, so it's not entirely fair to criticize them, but every time I touch Azure I miss AWS, or even GCP. Perhaps it's just me not being familiar enough with Azure. ¯\_(ツ)_/¯
NAT gateways are one of the things that blindsided me about the whole "serverless" idea for hobby projects. To give a Lambda function access to the outside world and your private network resources, your $0.01/month function becomes a $35+/month expense if you don't want to manage your own t2 NAT instance (and the required patches, upgrades, scaling, monitoring, etc.).
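For a rough sense of where that jump comes from, a back-of-envelope sketch (the rates are illustrative assumptions in a us-east-1 ballpark, not current pricing - check the pricing page):

```python
# Assumed, illustrative NAT gateway rates - not quotes.
hourly_rate = 0.045       # $/hour the gateway runs
per_gb_rate = 0.045       # $/GB of data processed
hours_per_month = 730
data_gb = 10              # modest hobby-project traffic

monthly = hourly_rate * hours_per_month + per_gb_rate * data_gb
print(round(monthly, 2))  # roughly $33/month before the Lambda costs a cent
```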
There are a bunch of microcharges like this that pop up, but reading your comment, are you sure AWS is right for your application? You essentially can't afford it and want a free tier and near-free access? That seems a bit unrealistic. Maybe Lambda isn't the right solution?
Can't you simply decouple your lambda project into two different parts where you have public lambda(s) calling your private/VPC lambda function(s) when required?
Public Lambdas can invoke VPC Lambdas (AFAIK, the reverse is not possible without a VPC endpoint).
I taught some courses on AWS for a year and a half. The networking piece is trivial for any network engineer, but for any developer (which is my background), working through it is crucial. It takes a while, and this looks like a good reference. However, it's best to also check out the AWS docs: https://docs.aws.amazon.com/vpc/latest/userguide/what-is-ama... . They are not always the easiest read, but I find them pretty authoritative.
I want to add a few notes useful for packet crafting. AWS, Google Cloud, and Azure don't work at layer 2 (Ethernet) as you might expect, since they provide services at layer 3 and up.
For example, if you modify the destination MAC address, it will not work in AWS. To be able to do that, you need to disable source/destination checks, as specified in [1].
The last time I checked you cannot do that in Google Cloud or Microsoft Azure.
When we experienced this issue, Reddit was the best resource for answers. I've included the Reddit threads further down, as they can help others working on projects that require packet crafting.
As long as IPv6 is a second-class citizen, things are going to continue to be painful in AWS.
I did the whole "here is your /56, now segment it yourself" thing. It's crude. It shouldn't be necessary: if v6 were central to the model, you'd be assigned a /64 from your covering prefix automatically as you deploy regional nodes.
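The "segment it yourself" step amounts to carving the /56 into /64s, which can be sketched with Python's `ipaddress` module (the allocation below is hypothetical):

```python
import ipaddress

# Hypothetical /56 allocation from AWS; one /64 per AZ.
allocation = ipaddress.ip_network("2600:1f18:abcd:1200::/56")
azs = ["us-east-1a", "us-east-1b", "us-east-1c"]

subnet_iter = allocation.subnets(new_prefix=64)
plan = {az: next(subnet_iter) for az in azs}
for az, net in plan.items():
    print(az, net)
```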
In my opinion, the most annoying thing about AWS networking - and some other services - is that they often use IDs and don't show labels, which forces me to half-remember them, go back and forth, or keep multiple windows open. The AWS console is not the best piece of UX on the web, but this part is especially error-prone.
Having to reference security groups by ID instead of name in CloudFormation stacks, Terraform, and a variety of other places is one of the most infuriating things. It makes everything much more difficult to configure and maintain because the IDs are so opaque and unique. Somebody has to do the grunt work of looking them up and copy/pasting them, or writing scripts to propagate configuration forward. What a waste of time and effort.
I’ve been banging my head against the wall for a week trying to set up a site-to-site VPN in AWS with a Cisco ASA. The auto-generated config file has a lot of missing info.
If anyone knows of a good resource on the subject it would be greatly appreciated.
Something that's missing from this (otherwise great!) guide, and that has puzzled me for a while - what's the point? What does this configuration actually gain you/AWS? My best guess is that private subnets are for DDoS protection, but that seems like something that would be better handled by throttling. Given the number of complaints I've heard about how difficult VPC/subnet setup is, why bother with it at all? Staving off IP address exhaustion?
Or, to ask it another way - what would be the downside of having all your resources in one single-subnet VPC, spread evenly across AZs?
Not sure I'm qualified to answer all of your questions on this but, from a networking perspective..
Private subnets allow you to reduce your exposure to the Internet, and can also reduce costs with something like a NAT gateway. They're useful for things that don't need to be public-facing. Generally, things on a private subnet can initiate outbound connections, but nothing can come directly into that subnet; you'd need a solution that interfaces with the public side to facilitate that, or you'd have to manually create a public IP association per instance.
You generally don't want one big subnet: it's a broadcast domain, and it can get quite chatty once you have a lot of devices on it. Alongside that, if you're doing multi-AZ and spanning layer 2, you end up with a lot of additional complexity to make that network span and be highly available over multiple AZs, whereas separate subnets can be mostly independent. I know of some weird edge cases where you'd have to span layer 2, but if you're doing anything cloud-native you should be able to build around it.
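To make the segmentation point concrete, here's a sketch of carving an illustrative VPC range into per-AZ public and private subnets with Python's `ipaddress` module (the /20 sizing is an arbitrary design choice, not a rule):

```python
import ipaddress

# Illustrative VPC range; one public and one private /20 per AZ.
vpc = ipaddress.ip_network("10.0.0.0/16")
azs = ["a", "b", "c"]
chunks = vpc.subnets(new_prefix=20)

layout = {}
for az in azs:
    layout[f"public-{az}"] = next(chunks)
for az in azs:
    layout[f"private-{az}"] = next(chunks)

for name, cidr in layout.items():
    print(name, cidr)
```

Leaving the remaining /20s unassigned gives you room to grow, which is the flip side of the "don't make subnets as small as possible" advice elsewhere in this thread.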
Tangential question: This guy's blog has fantastic content but I don't see an RSS feed or any other way of subscribing (apart from a much broader Twitter feed). What's the best way to keep up?
I've recently been putting together a homelab, and it's done wonders to make some of the more abstract things mentioned here, like routing tables and CIDR, a lot more concrete.
Are you able to share any details? I started putting together a more complex setup and ended up flattening things out because I couldn't get routing between e.g. 10.0.0.1 and 10.1.0.1 working.
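For what it's worth, those two example addresses sit in different /16 networks, which is why a single connected-network route won't cover both - a quick check with Python's `ipaddress` module:

```python
import ipaddress

# The two hosts from the question above.
a = ipaddress.ip_address("10.0.0.1")
b = ipaddress.ip_address("10.1.0.1")
net_a = ipaddress.ip_network("10.0.0.0/16")

print(a in net_a)  # True
print(b in net_a)  # False - so traffic to b needs an explicit route
print(ipaddress.ip_network("10.0.0.0/15").supernet_of(net_a))  # True - a /15 would cover both
```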
It would be really good to go into managed VPNs and VPC peering. These are some of the amazing things that VPCs provide that took me a while to figure out.
The part that irks me is that if you're doing any VPC design that's even potentially going to include peering, you need to carefully understand the limitations first. Within the same region you can refer to security groups in rules as if they're in the same VPC, but you can't do that if you're peering across regions. Then add in some DNS restrictions (like not being able to directly resolve a peer VPC's entries - somewhat solvable by using VPC private zones to serve DNS across regions) and it can get really awkward. Then there are overlapping VPC CIDR issues (Transit Gateways can only sort of help with this).
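The overlapping-CIDR problem is easy to demonstrate with Python's `ipaddress` module (172.31.0.0/16 happens to be the console's default VPC range, which is exactly why reusing it bites later):

```python
import ipaddress

# Two VPCs created with the same default range can never peer directly.
vpc1 = ipaddress.ip_network("172.31.0.0/16")
vpc2 = ipaddress.ip_network("172.31.0.0/16")
vpc3 = ipaddress.ip_network("10.42.0.0/16")  # an illustrative non-overlapping choice

print(vpc1.overlaps(vpc2))  # True - peering between these is impossible
print(vpc1.overlaps(vpc3))  # False - these could be peered
```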
The primary caveats beyond basic networking that impact designs are that multicast is not enabled at the network layer but at the network interface (ENI) layer, and that you need to look carefully at how security groups really work (they're fundamentally attached to an ENI, which is how you can route between networks with a single instance, as long as it's within the same AZ).
All of this, I’ve found, was completely disregarded or unknown by almost every company outside the F500 or high-end tech start-ups when they first started with AWS, and I’ve spent a lot of my career migrating production environments between VPCs so that we could get enough room to grow. Making subnets as small as possible is not what you should be doing in AWS, folks. In fact, making them really small means you put in a fair bit of effort without stopping to read the documentation in earnest for a couple of hours. And repeatedly using the default VPC CIDR from the console is a pretty grand way to make sure two VPCs can never communicate with each other via anything other than a third intermediate VPC that you’ll have to migrate to eventually.
Some of the overly-cautious networking approaches I’ve seen include making a VPC for every single application/service, using a NACL for every application (multiplied by every AZ used to isolate each subnet - cutting off cross-AZ routing thereby, of course), creating your own NAT instance that doesn’t do anything better than a NAT gateway, and NAT gateways in every AZ (for a whole $1 of traffic/mo each). The story of problems in AWS infrastructure is always the same - planning too far ahead for the wrong things and not realizing the limitations of the right things, which are no longer flexible. This is much more common when companies hire traditionally experienced network engineers with just a little too much confidence.
The title says "everything you need to know about networking on AWS". I wish it were this simple.
The article is well written, but it simply represents maybe 1% of what you need to know here. I would have called it "A simple introduction to networking on AWS".
I'd contend that this is everything that you need to know. Or perhaps, more accurately, everything I need to know.
It's certainly not intended to be exhaustive, and I am definitely not a network engineer, but I think you could operate at a reasonable scale within a single, well-laid-out VPC. Of course, there'll be a point where you need peering etc., but you might not get there.
AWS security groups and ACLs are the most worthless things. You can't treat them like a real firewall; you end up just allowing anything outbound or inbound. They don't let you be detailed enough.
Ugh! That's the most complex explanation of AWS I've ever seen.
He just described a NETWORK, not AWS.
AWS has renamed lots of things, but all the scary text configs that used to be the domain of wizened sysadmins have been replaced with very simple single-page-app GUI controls. Like routers and gateways: those terms are largely gone from the AWS vocabulary.
No need to get into subnets and route tables, I think.
The majority of clients I've worked with use AWS for web hosting: an ELB load balancer (the most important part), an EC2 instance policy & image (for handling traffic fluctuations), RDS (database), S3, and Route53 (external DNS entries).
Point the load balancer to the outside world and then let it spin up instances. That's the most common model I've encountered.
It's almost cartoonishly simple compared to what the OP wrote here. Almost. Having an understanding of network architecture helps, but not THAT much.
Having just gone through the process of using EKS, which requires a VPC, I think the article is quite applicable to anyone doing anything of average complexity. I found myself quite often wondering things like "do I have to run an Internet Gateway in every availability zone?" (no) and "do I attach my NAT Gateway to the public subnet or the private subnet in each AZ?" (public, and then add an entry for the gateway in the private subnet).
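A hedged sketch of the route-table shape that answer implies, with made-up resource IDs and an illustrative VPC range:

```python
# Per-AZ route tables for the public/private split described above.
# "igw-..." and "nat-..." are hypothetical IDs, not real resources.
public_routes = {
    "10.0.0.0/16": "local",       # intra-VPC traffic stays local
    "0.0.0.0/0": "igw-0abc123",   # default route to the Internet Gateway
}
private_routes = {
    "10.0.0.0/16": "local",
    "0.0.0.0/0": "nat-0def456",   # default route to the NAT gateway,
                                  # which itself sits in a public subnet
}
print(private_routes["0.0.0.0/0"])  # nat-0def456
```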
Amazon does not clearly document any of this. (I think if you read enough you'll eventually figure it out, but experimentation was the most straightforward procedure here).
As for just using a single EC2 instance and RDS... that is something you can do, but not everyone's workload is so simple that it can run on one machine. And not everyone can afford to be down simply because one AZ is down. Hence, multi-AZ VPC setups.
As someone who has worked full-time putting companies into AWS for almost 4 years as a cloud consultant, I've experienced only a couple of occasions where your description was accurate, and only for a small portion of the customer's portfolio.
Even when Beanstalk is used, all of the stuff mentioned in the OP, and more, is usually required.
Simple weekend projects, maybe, but only the smallest enterprise services work in your model.
It depends on how complex the environment you're working in is. If you're at a large enterprise that wants to build a platform capable of scaling to thousands of apps, you most definitely do need to care about everything written here, plus a lot more networking specific things not mentioned.
I'd love to hear your use cases for Network ACLs.
I think the post does a good job covering the high-level material. NACLs, EIP, and perhaps peering routes would also be good to mention.
See https://forums.aws.amazon.com/thread.jspa?threadID=234959
A free tier for NAT gateways would go a very long way. I wonder why they wouldn't have one.
I also like this video https://www.oreilly.com/library/view/amazon-web-services/978... (part of http://shop.oreilly.com/product/0636920040415.do ). Full disclaimer, I used to work with Jon.
https://www.reddit.com/r/sysadmin/comments/51xypj/vpc_amazon...
https://www.reddit.com/r/networking/comments/51y52n/aws_vpc_...
https://www.reddit.com/r/sysadmin/comments/533e14/google_com...
[1] https://docs.aws.amazon.com/vpc/latest/userguide/VPC_NAT_Ins...
https://start.jcolemorrison.com/aws-vpc-core-concepts-analog...
https://openvpn.net/vpn-server-resources/site-to-site-routin...
Auditors will want to know which isolation mechanisms you have put in place, and private subnets should be part of your isolation strategies.
Other use-cases:
- Legacy (or third-party) apps whose security model assumes they are behind some sort of private firewall.
- Hybrid deployment where you need to bridge on-premises (or other clouds) address space(s) with your VPC.
> Or, to ask it another way - what would be the downside of having all your resources in one single-subnet VPC, spread evenly across AZs?
Note that a subnet cannot spread across AZs. So, even if you only need/want public subnets, you will want to deploy at least 1 public subnet per AZ.
Besides that: nice article
Op's article is great IMHO.