Although there are a ton of AWS services, there are only a few core ones that I recommend:
EC2 - You need a server.
RDS - You need a database.
S3 - You need to store files.
Lambda - You are building an API with short-lived requests.
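To make the Lambda case concrete, here is a minimal sketch of the kind of handler Lambda invokes for a short-lived API request. The event shape shown is API Gateway's usual proxy-integration format, and the greeting logic is purely illustrative:

```python
# Minimal AWS Lambda handler sketch for a short-lived API request.
# Lambda calls this function once per request; the returned dict maps to
# the HTTP response when fronted by API Gateway proxy integration.
def handler(event, context):
    # queryStringParameters is None when no query string is present.
    params = event.get("queryStringParameters") or {}
    name = params.get("name", "world")
    return {"statusCode": 200, "body": f"hello, {name}"}
```

The whole appeal is that this is the entire program: no server, no OS, nothing to patch.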
These services are all very high quality and are excellent at what they do. Once you get outside of these core services, the quality quickly drops. You're probably better off using the non-AWS versions of those services.
For a few quick examples, you should be using Datadog over CloudWatch, Snowflake over Redshift or Athena, and Terraform over CloudFormation.
Why would you ever use Terraform over CloudFormation? So many parts of AWS use CF, and you can build on the getting-started templates, for example CodeStar, or export a SAM template from your Lambda configuration.
Before someone comments on how TF is “cross platform”: all of the providers and their resources are vendor-specific.
As for which other services to use: if you are hosting your own services on AWS instead of using AWS managed services, you're kind of missing the point of AWS.
But a few other services we use all the time are CodeBuild, ElastiCache (hosted Redis), Elasticsearch, Route 53, load balancers, auto-scaling groups, SSM (managing the few “pets” until we can kill them), ECS, ECR, Fargate, SNS, SQS, DynamoDB, the SFTP transfer service, CloudTrail, Microsoft AD, Step Functions, Athena, and Secrets Manager. We are also experimenting with the recently announced Device Farm/Selenium service, plus a few more I'm probably forgetting.
Depending on the market segment you're in, I'd recommend AWS Fargate and AWS Lightsail (a container runner and a Digital Ocean/Linode-style VPS competitor, respectively) over EC2. There's absolutely a segment for which EC2 is appropriate, but just like most data isn't "big", I suspect most EC2 customers would be better served by Lightsail. If you've got several hundred or several thousand EC2 instances with bespoke code/config across many different ASGs, then Lightsail isn't for you, but (my impression is) that's not most people.
DataDog is great, but the way it polls data means you can't rely on it being available promptly: https://docs.datadoghq.com/integrations/faq/cloud-metric-del...

> If you receive 1-minute metrics with CloudWatch, then their availability delay is about 2 minutes—so total latency to view your metrics may be ~10-12 minutes.

If a 10-minute alert delay matters to you, DD is not viable for alerting (though it could still be used for dashboards).
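The arithmetic behind that, as a rough sketch (the polling interval here is an assumption; Datadog's actual crawl cadence varies by integration):

```python
def worst_case_alert_delay_min(metric_period_min, availability_delay_min,
                               poll_interval_min):
    """Worst-case minutes before a third-party poller can see a datapoint:
    a full metric period, plus CloudWatch's availability delay, plus up to
    one full polling interval before the next crawl picks it up."""
    return metric_period_min + availability_delay_min + poll_interval_min
```

With 1-minute metrics, a ~2-minute availability delay, and an assumed ~10-minute crawl, a datapoint can be roughly 13 minutes old before an alert can fire on it.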
At what scale would you want to use RDS rather than using an EC2 instance with Postgres installed?
Assuming that the operator has the skills to manage Postgres.
It's not like RDS does something complex like Geodistribution, right?
Also what is the scaling like? Is it automatic? How quickly can you handle more connections? Because my understanding was that it was slow.
I did have a play with their RDS Postgres some months back, and I somehow managed to crash it, requiring a restore from snapshot. Also, their smallest instance was quite expensive for the performance.
I'd ask the opposite question - at what scale would you want to have your own custom setup rather than RDS? Managing your own database infrastructure for workloads other than "a few queries a second" is hard work with a lot of pitfalls, and you better be at a size that there's some benefit (high levels of customization, use case specific tuning, economies of scale, etc). As a person who does exactly this for a living, I'd rather shell out for RDS or a similar offering than my own setup most of the time. Especially at first, before you discover what exactly you /don't/ like about it or what you'd want different.
RDS can scale read replicas and fail over the master, but Aurora and its serverless option are much better for auto-scaling. Behind the scenes, storage is decoupled from compute, which makes scaling fundamentally easier.
EC2 is your only choice if you want a database that AWS doesn’t support, such as Rethink or Cassandra (they just recently launched a managed Cassandra service though). EC2 is also your only choice if you need full control of the DB, such as using many Postgres extensions and foreign data wrappers. Even some triggers and UDFs are limited.
A self-managed, auto-scaling, cross-AZ replicated DB setup is no small matter with EC2. Not to mention logging, metrics, patching of the DB and underlying OS. It’s 100% doable, but one should only proceed with that course with understanding of the human costs.
Personally, I’ve been choosing FaunaDB these days when possible. It’s a no-ops managed service and has on-prem/VPC options. I just write GraphQL clients and move on with my life; the rest just works.
RDS (Postgres, MySQL, MariaDB) is basically just a managed EC2 instance. The instance cost is about 2x the price of a comparable EC2 instance, which seems reasonable to me. Storage costs are pretty comparable to EBS. You can do push-button upgrades to increase the capacity, but it's slow. The main benefits I think you get over self-managed are automated/on-demand backups, pretty seamless software upgrades/patches, the ability to quickly spin up a duplicate instance from a snapshot for testing/distributing workloads/etc., and replication that works pretty seamlessly.
Provisioned IOPS is one area that can get expensive very quick, but people often don't realize that you get 3 PIOPS included with every 1GB of allocated storage, so you really don't need to pay for provisioned IO if you have a decent amount of storage.
If you want auto-scaling you need to look at Aurora or Redshift, which are quite different and significantly more expensive. I've not used those.
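To put the included-IOPS point in numbers (a rough sketch; the 3-IOPS-per-GB figure is the one quoted above, and actual baselines depend on storage type):

```python
def included_iops(allocated_gb, iops_per_gb=3):
    """Baseline IOPS bundled with allocated storage (3 per GB, as noted above)."""
    return allocated_gb * iops_per_gb

def needs_provisioned_iops(allocated_gb, required_iops):
    """Paying for Provisioned IOPS only makes sense when the bundled
    baseline falls short of what the workload actually needs."""
    return required_iops > included_iops(allocated_gb)
```

For example, 500 GB of storage already carries a 1,500 IOPS baseline, so a workload needing 1,000 IOPS pays nothing extra for IO.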
It isn't a matter of getting to a certain scale before you use RDS, if that's what you're asking. The value of RDS is streamlining and automating db administrative tasks. If you want to update from Postgres 11.5 to 11.6, for example, you just change that setting on your RDS instance and it happens, either immediately or at some scheduled maintenance window. If you want a hot standby in a separate availability zone, it's trivial to add one. Read replica? Trivial to add.
In general, I've found it makes sense to pay the premium for RDS and spend my and my team's time on more valuable work than db admin tasks.
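For what that push-button upgrade looks like outside the console, here is a hedged sketch using boto3. The instance identifier and version are illustrative, and actually issuing the call requires AWS credentials, so only the parameter construction is shown as runnable code:

```python
# Sketch of the API call behind the console's upgrade setting, via boto3.
def minor_upgrade_params(instance_id, target_version, apply_immediately=False):
    """Build kwargs for rds.modify_db_instance(). With ApplyImmediately
    left False, RDS defers the upgrade to the next maintenance window."""
    return {
        "DBInstanceIdentifier": instance_id,
        "EngineVersion": target_version,
        "ApplyImmediately": apply_immediately,
    }

# import boto3
# boto3.client("rds").modify_db_instance(**minor_upgrade_params("mydb", "11.6"))
```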
I've previously administered a cluster of Postgres instances with a total of about 1PB of data. My recommendation is that you should use RDS unless you have a reason not to.
RDS takes care of tons of administrative tasks such as backups, replication, failover, and database upgrades. Yes, you can set up backups yourself, but the ongoing maintenance is going to be a pain. You need to deal with what happens when a backup fails, have a playbook for restoring from backup, clean up your old backups, etc. These are tasks that are extremely dangerous to get wrong, and they are completely taken care of for you by RDS.
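As a sense of what "clean up your old backups" means once you script it yourself, here is a small sketch of retention pruning over (name, timestamp) pairs; the data shapes and the retention policy are illustrative:

```python
from datetime import datetime, timedelta, timezone

def snapshots_to_prune(snapshots, retention_days, now=None):
    """Given (name, created_at) pairs, return the names that have aged out
    of the retention window. This is the easy half of the chore; the
    dangerous parts (verifying restores, alerting on failed backups)
    still need their own playbooks."""
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(days=retention_days)
    return [name for name, created in snapshots if created < cutoff]
```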
I'll take a shot at this. There is always an asterisk under every one of these. Every company and situation is different.
If you're a tiny startup or hobby with literally no money, it might make sense for you to manage it yourself because you have no choice.
Once you have some money and a viable business, then your value is no longer your ability to spend your time running Postgres, ensuring backups and restores work, creating replicas, upgrading software, and setting up all of the monitoring tools. You provide MUCH more value spending your time and abilities building things that are core to the business that let you make money and grow.
No doubt you can do it all and save some cash. But you have to do it regularly if you want confidence that everything you have built still works. With RDS, you pay them some extra for a near guarantee that it will all just work 100% of the time.
Once you become a large company with tons of engineers and you start to bump into limits of RDS, then it might make sense to run it yourself again. It is a significant burden to do it correctly 100% of the time. Your entire business can fail if you don't do your job right.
RDS/Aurora does do autoscaling, backups, georeplication, encryption etc. It's more a matter of time & convenience rather than skill. Sure you could do all of it yourself on an EC2 instance, but at some point it becomes a big chunk of your job, and you would rather be spending your time on other things.
You would move to RDS the minute you know you're committed to AWS and know that you don't want to worry about things like backups, upgrades, or clustering. The disadvantage to doing so is that you lose a lot of administrative privileges to the database server itself, and you don't get access to the filesystem or underlying OS. We had trouble migrating a sizable SQL Server installation onto RDS because it had a ton of triggers and stored procedures (SPs) that relied on files in the filesystem.
There are advantages when using RDS other than scaling.
The performance dashboard is especially nice.
The reusable sets of configuration is convenient and the UI makes it easy to compare original vs. changed values.
The instance upgrade is not seamless, though; it is normally scheduled for the next maintenance window unless you decide to apply it and reboot immediately.
I agree the service is expensive, but setting up a db server for production takes a lot of time and expertise.
One thing that doesn't get talked about with RDS is that the network cost of replicating data for RDS Multi-AZ deployments is free. Depending on how much you write to RDS, this cost can dominate CPU/memory costs on non-RDS installations.
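A rough sketch of that line item on a self-managed setup, assuming a nominal $0.01/GB cross-AZ transfer rate (the exact rate and how many directions are billed vary; on RDS Multi-AZ this is zero):

```python
def self_managed_replication_cost(write_gb_per_month, cross_az_rate_per_gb=0.01):
    """Monthly cross-AZ data transfer cost for streaming every write to a
    standby in another AZ. On RDS Multi-AZ this traffic is free; on a
    self-managed EC2 pair the cross-AZ rate (assumed $0.01/GB here)
    applies to all replication traffic."""
    return write_gb_per_month * cross_az_rate_per_gb
```

A write-heavy app shipping ~3 TB of replication traffic a month would pay on the order of $30/month per standby in transfer alone, before CPU or storage.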
We run a 2TB database holding 30-60 days of data (we only keep 30-60 days on hand, and we're ingesting roughly 50GB/day). We've been using Aurora Postgres since it came out, and it's been pretty good. (Good enough that it's never crossed my mind to think about moving to something else.)
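The sizing here is simple arithmetic under a rolling retention window:

```python
def retained_size_gb(ingest_gb_per_day, retention_days):
    """Steady-state data size when rows are dropped after the window."""
    return ingest_gb_per_day * retention_days
```

At 50 GB/day, a 30-60 day window works out to roughly 1.5-3 TB, consistent with the ~2 TB database described above.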
Source: Microsoft SQL Server performance tuner who runs an app that centralizes SQL Server performance data from thousands of servers. You'd think I would be running MSSQL on the back end, but the licensing costs just didn't make sense compared to Aurora Postgres.
Using Aurora MySQL for over a year now in prod, purrs like a kittycat.
Just don't use the AWS Database Migration Service if you can help it; that thing has a couple of badly documented pitfalls (e.g. tables can't have ENUM fields).
Used it a year at my last job and now almost a year at my current job. Never had an issue. It just runs no matter how much I throw at it. Only have had to change the instance sizes to deal with data ingest.
"Modern" architectures can get quite complex, quite fast, at scale and in complicated cases. This is merely a simple introduction to the simple components of modern cloud architecture.
I do agree with you, and I was lured into reading it because of that. However, this seems like a nice introduction for beginners. Maybe it should be tagged as such.
As someone who has spent a fair amount of time working with AWS, I appreciate how approachable this tutorial is, as the official docs are usually way more arcane.