>While AWS and Azure are industry leaders, their advantages often only materialize at massive scales. [...]
Your comparisons are similar to many others out there that focus on measuring basic CPU and memory. In that kind of comparison, where AWS/Azure/GCP is treated as a "dumb" datacenter, it's easy for alternatives like Hetzner or self-hosting to "win".
>Do you really need the advanced features of AWS and Azure right now? Or would a simple virtual machine at a reasonable price be sufficient? [...] There’s a growing movement among tech companies and startups to opt for more cost-effective hosting solutions like Hetzner. The high costs associated with AWS and Azure
Many (most?) YC startups are not using AWS as a low-level dumb data center with blank EC2 virtual machines, installing infrastructure software like Linux and PostgreSQL on them. Instead, they are using higher-level AWS managed services such as DynamoDB, Kinesis, SQS, etc.
Therefore, the more difficult comparison (that almost no blog post ever does) is the startup's costs for its employees to re-create/re-invent the set of higher-level AWS services that they need.
Sure, there's the "but you don't need to pay expensive AWS costs for DynamoDB when one can just install open-source Cassandra at Hetzner; and instead of AWS Kinesis, install your own Kafka, etc". Well, you add up more and more of those "just install and manage your own X, Y, Zs" and you can end up crossing the threshold where paying AWS cloud fees costs less than your staff maintaining it all. The threshold for AWS isn't just massive scale of 100+ million users. The threshold can be the complexity and scope of higher-level services you need the cloud to take care of on your behalf, so your small team can concentrate on the aspects of the business that are true differentiators. In other words, instead of employees installing Cassandra, they're adding features to the smartphone app.
If your company doesn't need any of the Big 3 clouds' higher-level platform services, it's easier to save money with alternatives.
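The threshold argument above can be sketched with a toy model. Every number here is an assumption picked for illustration, not a quote from AWS or anyone's payroll:

```python
# Back-of-envelope for the managed-vs-self-hosted threshold.

HOURLY_RATE = 120        # fully loaded engineer cost, $/hour (assumption)
MANAGED_PREMIUM = 900    # extra $/month a managed service costs vs. raw hardware (assumption)

def monthly_delta(num_services, hours_per_service):
    """Positive: self-hosting saves money. Negative: managed services win."""
    premium_saved = num_services * MANAGED_PREMIUM
    labor = num_services * hours_per_service * HOURLY_RATE
    return premium_saved - labor

# The break-even isn't user count -- it's care-and-feeding hours per service.
break_even_hours = MANAGED_PREMIUM / HOURLY_RATE   # 7.5 h/month per service here
print(f"self-hosting pays only below ~{break_even_hours} hours/service/month")
print(monthly_delta(5, 4))    # light-touch services: self-hosting saves $2,100/mo
print(monthly_delta(5, 12))   # complex services: managed wins by $2,700/mo
```

The crossover in this sketch depends on maintenance hours per service, i.e. scope and complexity, not on user count, which is exactly the point being made.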
As soon as your startup does get big, it starts to make more sense to try and migrate to 'dumb' machines and save on infrastructure costs, especially if your business is low margin and your infrastructure costs are high.
Unfortunately, it's a false dichotomy you present: it's not a binary choice between fully managed and entirely roll-your-own.
E.g., if you're running K8s (one of the few things I typically recommend buying managed), you can install your own Kafka in it, using an operator that does about 85% of what MSK does.
Sure, you'll need to dedicate person hours to support the operator, but is supporting that any more expensive than supporting AWS products? That you're already paying through the nose for?
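For concreteness, "install your own Kafka via an operator" usually means handing something like the Strimzi operator a declarative manifest. A minimal sketch — the cluster name, sizes, and versions below are placeholders:

```yaml
# Hypothetical minimal Strimzi 'Kafka' custom resource. Field names follow the
# kafka.strimzi.io/v1beta2 API; replica counts and storage sizes are placeholders.
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    replicas: 3
    listeners:
      - name: plain
        port: 9092
        type: internal
        tls: false
    storage:
      type: persistent-claim
      size: 100Gi
  zookeeper:
    replicas: 3
    storage:
      type: persistent-claim
      size: 10Gi
  entityOperator:
    topicOperator: {}
    userOperator: {}
```

The operator then creates and heals brokers to match the declared state, so the person-hours go into upgrades and tuning rather than hand-rolled deployment scripts.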
Oftentimes, when you see someone proposing "just save 70% by installing open source XYZ", they are thinking like an individual and not a business. Fast-moving startups and medium businesses in areas with high cost of labor can save a ton by outsourcing labor to AWS/Azure if they are okay with the lock-in. Of course, each case is different and people shouldn't just blindly adopt AWS/Azure without thinking about it...
Honestly, most of the stuff I do is internal-facing tooling, usually with fewer than 100 concurrent and 1k peak users. For those, managing a server or two, or god forbid, a small autoscaling cluster, is not a hassle.
For high-scale operations, you need to think real hard about how you do things; usually simplicity is key, and trying to do as little as possible on the high-throughput parts is useful.
The costs do add up when you have professionals maintaining your Cassandra/Kafka boxes, but the same degree of complexity exists on AWS, when you try to weave together a tapestry of EC2s, lambdas, and various storage services, with all the delicious complexity of multiple VPCs and networking fineries, while not blowing the budget.
Hear hear. I get this all the time. People just don’t get that what they are paying for, say, platform services (managed databases, indexing, all sorts of data handling) is vastly cheaper than reimplementing those particular wheels - or hiring the people to manage them - and that the hyperscalers provide redundancy, automated deployment, backups, the works.
Even storage in hyperscalers is inherently redundant, and I keep getting folks who ask about setting up their own RAID array, or using their own containers and job management, when there's a dozen zero-code alternatives in each individual hyperscaler.
I can run a 64-core / 512 GiB server loaded with NVMe drives in my home office for ~$80/month (probably cheaper, depending on how many years you amortize the server purchase over)!
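As a rough sanity check of that ~$80/month figure — the hardware price, amortization period, power draw, and electricity rate below are all assumptions:

```python
# Sanity-checking a home-server monthly cost with assumed inputs.
SERVER_PRICE = 3500   # $, assumed one-time cost of a 64-core / 512 GiB NVMe box
AMORT_YEARS = 5       # assumed amortization period
POWER_WATTS = 250     # assumed average draw
KWH_PRICE = 0.15      # $/kWh, assumed electricity rate

hardware = SERVER_PRICE / (AMORT_YEARS * 12)          # ~$58/month
power = POWER_WATTS / 1000 * 24 * 30 * KWH_PRICE      # ~$27/month
total = hardware + power
print(f"hardware ${hardware:.0f} + power ${power:.0f} = ${total:.0f}/month")
```

With these inputs the total lands in the mid-$80s, so the claim is plausible; it excludes bandwidth, a UPS, and the redundancy a datacenter gives you.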
This is what we're trying to address at Lithus[1]. We're offering both the raw compute resources and also the DevOps time needed to set up and manage the services your engineering team needs.
Depends on scale - at small scale, fully managed services are a godsend, but at <x> scale (esp. per-service) it pays to self-manage or use low-cost or FOSS management tools.
I'm not sure what the cost difference is for using higher level services but I can easily imagine it 4x-10x'ing your costs again, or worse.
Part of me thinks, man, the engineers not afraid of setting up a Postgres or Redis really should be worth a lot more, given how absurd the prices can get. I guess the getting-started costs for these services are usually manageable though; by the time the bill is big it's a "nice problem to have" because you have significant load now, and presumably customers & revenue to show for it.
More so, I think orgs are somewhat rightfully afraid of running infra, because historically we have been bad at it. It's been every sysop or devops for themselves out in the world: everyone making their own practices, assembling their own stack of networking setup, init scripts, DB procedures, monitoring, alerting, resilience/reliability. This stuff has a lot of dimensions of care to it.
And even when you go the extra mile to document everything, it's still rough to hand off ownership. A new gal joins; how long does it take her to get comfortable? And how much will her style & preferences mesh with what's been strung up so far? Or worse, what happens when someone quits? How load-bearing were they?
And this is why I'm so humongously excited about Kubernetes. Fleet was pretty sweet & cool & direct in the past, RIP, but like so many of the "ways to run containers" options it was just that: a way to run containers. Having an extensible system, where operators keep networking, storage, and databases running, where tasks like backups and migrations and high availability are built into well-tested controllers: it cuts out so many things that operators previously had to discover, socialize, and test, test, test. There are such incredibly good load-bearing systems-that-maintain-systems (i.e. autonomic) available that they compete very much with the paid-for/managed services that have done likewise for us for so long.
And it's a consistent paradigm, whatever you are up to. Write a manifest with what you want, send it to the api-server, wait for the operator to make it so. Instead of different dimensions or concerns having different operational paradigms & styles, there's a unified, extensible Desired State Management that does a damn good job.
It felt like running services was in a dark age for so long, that each shop was fractured & alone with its infrastructure, and it was obvious why managed services were winning. But today there's hope that we can run services, well, in a way that will be very clear & explicit if it ever needs to be handed off.
To add: if you ever want to get ISO/PCI-DSS etc. certification done, then good luck implementing the gazillion checklist items which Azure/AWS/GCP have already taken care of.
Also note: traffic costs. On Hetzner, it's almost impossible to pay for traffic. Even their tiniest machine includes 20 TB of outgoing traffic (and unlimited incoming). If you used it all up (you most probably won't), that's another ~$1,792 of costs saved by your tiny $4/month VM compared to AWS (at least if I used the AWS cost calculator correctly).
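A rough reconstruction of that figure, using AWS's tiered internet egress prices. The rates and tier sizes below are assumptions based on published on-demand pricing and may be out of date:

```python
# Approximate AWS internet egress cost for the 20 TB included with a Hetzner VM.
def aws_egress_cost(gb):
    # (tier size in GB, $/GB): first 100 GB free, then two assumed tiers.
    tiers = [(100, 0.0), (10_240, 0.09), (40_960, 0.085)]
    cost, remaining = 0.0, gb
    for size, rate in tiers:
        chunk = min(remaining, size)
        cost += chunk * rate
        remaining -= chunk
        if remaining <= 0:
            break
    return cost

included_tb = 20
print(f"{included_tb} TB at AWS: ${aws_egress_cost(included_tb * 1024):,.2f}")
```

With these assumed tiers the result is roughly $1,780 — in the same ballpark as the $1,792 the calculator produced.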
They will have object storage soon, but don't hold your breath for one-click Kubernetes etc. So the fancier your infrastructure, the more your startup would need to invest in time and money to use Hetzner, and thus make it "not worth it".
Additionally, go for the dedicated servers from Hetzner and you get an unmetered connection (i.e., you don't pay per GB of ingress/egress at all). Not affiliated, but I've been happy with them since day 1.
Most cloud customers don't pay on-demand retail prices. For example, Azure VM Reservations or Savings Plans typically provide a 50-65% discount. AWS has similar plans.
For example, instead of the ancient F8 series used in the article, a modern D8as_v5 Azure instance under a 3-year Savings Plan is $115/mo.
Also, the article compares CPX41 to EC2 and Azure VMs with dedicated cores, not shared cores. The CCX33 Hetzner model is closer to the normal clouds, and costs $50/mo, so now we're at 2x the price instead of 10x the price. (Conversely, the B8als_v2 size uses shared cores and is also 2x the price of CPX41 at $74/mo)
For that 2x cost you get a lot more features, first-party and third-party support, more locations, faster networking, etc... That's worth it for most large enterprises that care about ticking checkboxes on audit reports more than absolute cost. Or to put it this way: the annual price difference is just $600, which is the same cost to an org as half a day of engineer-time or less. If Hetzner is the slightest bit more difficult than a large public cloud VM for anything, ever, then it's not cheaper. This could be patching, maintenance, migrations, backup, recovery, automation, encryption, or just about anything else.
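The "half a day of engineer-time" equivalence works out as follows, with an assumed fully loaded engineer cost:

```python
# How much engineer-time the annual price gap buys, with assumed labor costs.
annual_price_gap = 600          # $/year, the 2x price difference from the comparison
loaded_annual_cost = 300_000    # $, assumed fully loaded cost of one engineer
working_days = 250              # assumed working days per year

per_day = loaded_annual_cost / working_days   # $1,200 per engineer-day
days_bought = annual_price_gap / per_day
print(f"${annual_price_gap} is about {days_bought:.1f} engineer-days per year")
```

So if the cheaper provider costs even half a day of extra engineering effort per year, the saving evaporates — which is the argument being made.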
There are other differences as well. Hetzner has a separate charge for load balancers and IP addresses, whereas with Azure they're included in the price of the VM.
The biggest cost difference is that the public clouds charge eyewatering amounts for Internet egress traffic. Azure is about 100x as expensive as Hetzner, which is just crazy.
I certainly can't dispute the other points... but to me the AWS Savings Plan always felt like vendor lock-in, and sort of like a virtual "on-prem", in that I have to commit to something for X amount of time (like old-school provisioning hardware and having it live for X time), and then I lose the flexibility of what I thought *the cloud* in general was supposed to provide: freedom to scale up, down, or *out*, etc. I won't fault AWS and others for making their money; this is capitalism after all, regardless of the vendor. I guess maybe the cloud sort of lost its shine, and it doesn't feel as liberating as maybe it once did, and both cost and complexity are overblown, maybe?
On GCP and Azure, most folks would be better off running serverless containers via Cloud Run or Container Apps (AWS has no direct equivalent that scales to 0 and incurs no cost).
Both of these scale to zero and offer 180k vCPU-seconds and 360k GiB-seconds free per month. You incur billing only against active execution time. Cloud Run Jobs has a whole separate free monthly grant as well.
You can run A LOT for free within those constraints. Certainly a blog or website. To prevent cold starts, just set up Cloud Scheduler (also free for this purpose) to ping the container every few minutes.
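A sketch of how far those free grants stretch for a small service. The container size and per-request billed time are assumptions, and this ignores any separate per-request free quota:

```python
# What 180k vCPU-seconds and 360k GiB-seconds per month buy for a small container.
FREE_VCPU_S = 180_000   # free vCPU-seconds per month
FREE_GIB_S = 360_000    # free GiB-seconds per month

vcpu, mem_gib = 1, 0.5        # container size (assumption)
secs_per_request = 0.05       # 50 ms of billed time per request (assumption)

max_by_cpu = FREE_VCPU_S / (vcpu * secs_per_request)
max_by_mem = FREE_GIB_S / (mem_gib * secs_per_request)
print(f"~{min(max_by_cpu, max_by_mem):,.0f} requests/month inside the free tier")
```

Under these assumptions, a small service handling a few million requests a month stays within the CPU grant — consistent with "a blog or website runs for free".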
Use Supabase for a DB, or one of the serverless options (if it works for your data use case) like Firestore or CosmosDB, and you can run workloads for a few cents per month with an architecture that will scale easily if you need it to.
Those per-request models usually don't pan out well. They're conceptually simple, but you soon realize that you need at least a couple of 24/7 always-on boxes, and that you should really only use Cloud Run-like services for burstable workloads.
PaaS services or even VM scaling sets with volatile instances can still be stupefyingly cheaper, but that point is really hard to make to architecture astronauts.
Worth noting: we large AWS customers get huge discounts, huge credits, actual real engineers on hand 24/7 on Slack, contractual service guarantees that last years, and a large market of people we can leverage to build stuff in there. And a lot of the services are low-to-zero cost that would be expensive to run on Hetzner, or don't exist and you'd have to build them out.
Yep, most corporations have 2-3 _named_ account reps who are available on the company Slack and will visit your office 1-2 times a year to sync up and make sure everything is working as it should.
And they're not just salespeople: they've actually told us multiple times when a feature doesn't work for us, instead of trying to sell it anyway and having us hold it wrong in a dangerous (and expensive) way.
> Do you really need the advanced features of AWS and Azure right now? Or would a simple virtual machine at a reasonable price be sufficient? That’s the main question here.
This is one of the more important points, and why the claim "The learning curve of a single server isn't so big, especially when compared to AWS" sits a bit wrong with me.
Sure, if you talk about 1 VM, I agree. And I wouldn't second guess doing this, at all. It would be my initial plan as well as long as I don't have to make any strong availability guarantees. And for this use case, I'd call AWS a bad choice. It's not a simple VM provider.
But once you start running, e.g., a redundant Postgres cluster for updates without downtime, the amount of stuff to know also grows, a lot. Suddenly you also need backups, and tests of those backups. And this is where AWS/the cloud allows you to save time, and treadmill time.
The article was originally aimed at manufacturing companies, not IT startups: companies that currently go "all-in" on AWS and Azure with all of their managed services, when actually 95% of their workloads are in virtual machines, and the remaining stuff could easily be handled on a single VM, or maybe a couple of VMs and a managed Postgres somewhere (e.g., maybe even at AWS or Azure).
Would probably give them way more budget in actually building applications than running the infrastructure.
Maybe I'll extend the article to include the point of using a managed postgres at AWS / Azure / fly.io, whatever, in combination with Hetzner VMs.
I have a single VM for my personal stuff, but I use Azure’s backup and automated fail-over mechanisms as well as managed services for database and data processing for this very reason.
I believe that I've saved millions thanks to the fact that I stumbled on Hetzner back in the day and started using it at the company I was working for. Not saying it is a perfect service, but I very much like my money, and seeing what kind of invoices are racked up by using these cloud services, I'm pretty confident the alternative costs would have been 4-5x more.
This matches my experience. I ran one of my side projects on AWS for a couple of years before switching to Hetzner. AWS was around £35 a month while Hetzner was around £7 a month, so Hetzner was around 80% cheaper for an equivalent service[0]. The other big thing was all the little costs in AWS: it took 2 months to get the AWS bill down to £0 due to all the hidden extras like backups and Elastic IP addresses.
It's not the same product, even if you consider just virtual machines rather than the higher-level services other commenters are referring to. Sure, public cloud is more expensive, but you pay for the reliability of not being bound to physical hardware. When you buy a dedicated machine from OVH or Hetzner, you get a great deal for the compute power, but if something goes wrong with the hardware, you're stuck waiting for a technician to fix it.
Take the recent Lichess downtime, for example. Their main server had a hardware issue that required physical intervention. This meant the site was down for over 10 hours, and there wasn't much they could do except wait for OVH to send a tech.
If Lichess had been on AWS, the provider would have automatically moved their workload to a functioning server, and the outage would have been much shorter or possibly avoided altogether.
For Lichess, a non-profit, this tradeoff still makes sense. Their service, while important to its users, isn't critical: nobody dies if Lichess is down, and the cost savings help them keep running. But if your business can't afford downtime, the extra guarantees from a public cloud provider can definitely be worth paying for.
>Take the recent Lichess downtime, for example. Their main server had a hardware issue that required physical intervention. This meant the site was down for over 10 hours, and there wasn't much they could do except wait for OVH to send a tech.
If you're not an HN person with sysadmin skills, yes. But it is NOT that hard to have an in-house RAID HDD setup with a failover server, or a failover NAT gateway. AWS and the cloud providers are just a rip-off.
The offerings from Hetzner I find especially appealing are the consumer-grade hardware ones. No, I wouldn't host business-critical services on one, but I don't have those, so it's an easy win for me price-wise.
I would probably host even some business-critical services on Hetzner's infra. I'm thinking of "worker"-type workloads, where each machine is 100% stateless and just serves to do some compute-intensive work. With that configuration, single-node data loss doesn't really affect you, and the CPU is plentiful and cheap with Hetzner bare metal (e.g. AX101 AMD machines).
Been using their VLE-2 offer over what Hetzner gives because it's basically the same price, but with unlimited bandwidth, and they use AMD Epyc CPUs, which can't be a bad thing benchmark-wise (especially memory-bandwidth-wise).
AWS is the tool I know. I pay roughly $30 for a CapRover install running on Lightsail.
Hetzner starts at 50 Euro, only has servers in Europe, and is going to require a ton more work.
AWS has the right idea: they give everyone who asks nicely thousands in free credits to get started. Then 2 years in, you're hooked. I don't want to learn a new system.
There's a seemingly endless supply of small to medium-sized companies doing exactly that. That's why there's consultants who offer to migrate you off EC2 onto 2-3 bare-metal hosts.
This just in: use the tool that is most cost effective for your specific use case. There is no one-size-fits-all. More to come after this advertisement
Do people actually want to use VMs? IMO they're much more annoying to manage than higher-level managed services. At the last few places I've worked, we spent our time trying to get rid of VMs and replace them with equivalent managed services.
Even with automation tools like Ansible or immutable server images, packaging as Docker images and running on a container orchestrator has always been much easier.
Depends if you want to pay more in money (a lot more) indefinitely, or pay more in time up front to set up automation. If you have immutable images, I don't see how there is much difference at all. There are many container orchestrators available.
Azure/AWS provide many more base services (multiple regions/AZs, DynamoDB, S3, SQS, etc.) that are pennies to operate and aren't really targeting the cheap low end that Hetzner is.
Well that is true, but I don't use AWS or Azure because I want to run servers. If you treat a public cloud like a datacenter, you're likely to have a bad time.
Hetzner doesn't have the services AWS provides, and those services are the reason most companies I know use AWS.
If we could run our crap on any server, we would, but managed services are still cost-effective vs hiring our own 24/7/365 rotation of on-call ops people.
These types of articles always read like “yeah you could buy a Land Rover, but this Kia hatchback over here still gets you from A to B and is only a fraction of the cost.”
It seems lost on the authors that yes that might work for some folks just fine, but others really do want the Land Rover and all its additional baked in features beyond getting you from A to B.
It's a different skillset, but not less work.
[1] https://lithus.eu
There is also a GPT that you can use that will generate the module block based on your requirements.
6 min video showing the receipts and how easy this is: https://youtu.be/GlnEm7JyvyY
Most people think it is easier to use EC2 than Fargate since the former is the more famous one. But actually, it is the other way around!
YMMV but all costs aren't instance costs.
The pricing is more on par with Digital Ocean/Linode.
If you're looking for a cheap one-off server, Hetzner's server auction has some very good deals.
[0] Full details at https://blog.searchmysite.net/posts/migrating-off-aws-has-re...
They're leaving other things on AWS, i.e. partial migration is quite doable.
It will take slightly more effort than Lightsail, yes.
I have only stumbled on one service that does it. It's a Datadog alternative, so the bar is not that high for pricing.
Yeah, if people had less shaky stacks. But it is always easier to pay someone to run the hack.