AWS to bare metal two years later: Answering your questions about leaving AWS

727 points | ndhandala | 4 months ago | oneuptime.com

491 comments

[+] bilekas|4 months ago|reply
I'm so surprised there is so much pushback against this.. AWS is extremely expensive. The use cases for setting up your system or service entirely in AWS are more rare than people seem to realise. Maybe I'm just the old man screaming at cloud (no pun intended) but when did people forget how to run a baremetal server ?

> We have 730+ days with 99.993% measured availability and we also escaped AWS region wide downtime that happened a week ago.

This is a very nice brag. Given they run their DDoS protection and ingress via Cloudflare there is that dependency, but in that case I can 100% agree that DNS and ingress can absolutely be a full-time job. Running some microservices and a database absolutely is not. If your teams are constantly monitoring and adjusting them, such as scaling, then the problem is the design, not the hosting.

Unless you're a small company serving up billions of heavy requests an hour, I would put money on the bet that AWS is overcharging you.

[+] fulafel|4 months ago|reply
The direct cost is the easy part. The more insidious part is that you're now cultivating a growing staff of technologists whose careers depend on doing things the AWS way, who get AWS certified to ensure they build your systems the AWS Well-Architected Way instead of thinking for themselves, and who can upsell you on AWS lock-in solutions using AWS-provided soundbites and sales arguments.

("Shall we make the app very resilient to failure? Yes running on multiple regions makes the AWS bill bigger but you'll get much fewer outages, look at all this technobabble that proves it")

And of course the AWS lock-in services are priced to look cheap compared to their overpriced standard offerings[1]: if you just spend the engineering and IaC coding effort to move onto them, the "savings" can be put toward more AWS cloud engineering effort, which again makes your cloud eng org bigger and more important.

[1] For example, moving your app off containers to Lambda, or the DB off PostgreSQL to DynamoDB, etc.

[+] esskay|4 months ago|reply
> I'm so surprised there is so much pushback against this

I'm not. It seems to be happening a lot. Any time a topic about not using AWS comes up here or on Reddit, there is a sudden surge of people appearing out of nowhere shouting down anyone who suggests other options. It's honestly starting to feel like paid shilling.

[+] steelegbr|4 months ago|reply
AWS may be overcharging but it's a balancing act. Going on-prem (well, shared DC) will be cheaper but comes with requirements for either jack of all trades sysadmins or a bunch of specialists. It can work well if your product is simple and scalable. A lot of places quietly achieve this.

That said, I've seen real-world scenarios where complexity is up the wazoo and an opex cost focus means you're hiring under-skilled staff to manage offerings built on components with low sticker prices. Throw in a bit of the old NIH mindset (DIY all the things!) and you get large blast radii with expensive service credits being dished out to customers regularly. On the human-factors front, your team will be seeing countless middle-of-the-night conference calls.

While I'm not 100% happy with the AWS/Azure/GCP world, the reality is that on-prem skillsets are becoming rarer and more specialist. Hiring good people can be either really expensive or a bit of a unicorn hunt.

[+] Aurornis|4 months ago|reply
> I'm so surprised there is so much pushback against this.. AWS is extremely expensive.

I see more comments in favor than pushing back.

The problem I have with these stories is the confirmation bias that comes with them. Going self-hosted or on-premises does make sense in some carefully selected use cases, but I have dozens of stories of startup teams spinning their wheels with self-hosting strategies that turn into a big waste of time and headcount that they should have been using to grow their businesses instead.

The shared theme of all of the failure stories is missing the true cost of self-hosting: The hours spent getting the servers just right, managing the hosting, debating the best way to run things, and dealing with little issues add up but are easily lost in the noise if you’re not looking closely. Everyone goes through a honeymoon phase where the servers arrive and your software is up and running and you’re busy patting yourselves on the back about how you’re saving money. The real test comes 12 months later when the person who last set up the servers has left for a new job and the team is trying to do forensics to understand why the documentation they wrote doesn’t actually match what’s happening on the servers, or your project managers look back at the sprints and realize that the average time spent on self-hosting related tasks and ideas has added up to a lot more than anyone would have guessed.

Those stories aren’t shared as often. When they are, they’re not upvoted. A lot of people in my local startup scene have sheepish stories about how they finally threw in the towel on self-hosting and went to AWS and got back to focusing on their core product. Few people are writing blog posts about that because it’s not a story people want to hear. We like the heroic stories where someone sets up some servers and everything just works perfectly and there are no downsides.

You really need to weigh the tradeoffs, but many people are not equipped to do that. They just think their chosen solution will be perfect and the other side will be the bad one.

[+] vb-8448|4 months ago|reply
> Maybe I'm just the old man screaming at cloud (no pun intended) but when did people forget how to run a baremetal server ?

It's a way to "commoditize" engineers. You can run on premise or mixed infra better and cheaper, but only if you know what you are doing. This requires experienced guys and doesn't work with new grad hired by big cons and sold ad "cloud experts".

[+] fabian2k|4 months ago|reply
A large part of the differing views on this topic comes down to how people estimate the effort and money saved by pushing some admin duties to the cloud provider instead of doing them yourself. And people come to vastly different conclusions on this aspect.

It's also that requirements vary a lot; discussions here on HN often seem to assume that you need HA and lots of scaling options. That isn't universally true.

[+] yomismoaqui|4 months ago|reply
> Maybe I'm just the old man screaming at cloud (no pun intended) but when did people forget how to run a baremetal server ?

We should coin the term "Cloud Learned Helplessness"

[+] izacus|4 months ago|reply
A lot of people here have built their whole professional careers around knowing AWS and deploying to it.

Moving away is an existential issue for them; this is why there's such pushback. A huge % of the new generation of developers and devops engineers doesn't know anything about deploying software on bare metal or even on other clouds, and they're terrified of being unemployed.

[+] rdtsc|4 months ago|reply
> I'm so surprised there is so much pushback against this.. AWS is extremely expensive.

Basic rationalization. People will go to extraordinary lengths to justify and defend the choices they made. It's a defense mechanism: if they spent millions on AWS they are not going to sit idly while HN discusses saving hundreds of thousands with everyone nodding and agreeing. It's important for their own sanity to defend the choice they made.

[+] JCM9|4 months ago|reply
As the author points out AWS can provide a few things that you wouldn’t want to try and replicate (like CloudFront) but for most other things you’re very much correct. AWS is ultimately very expensive for what it is. The complicated billing that’s full of surprises also makes cost management a head-banging experience.
[+] maccard|4 months ago|reply
I work for a small company owned by a huge company. We are entirely independent except for purchasing, IT, and budget approval. We run our CI on AWS, and it’s slow and flaky for a variety of reasons (compiling large c++ projects combined with instance type pressure). It’s also expensive.

We planned a migration from 4 on-demand instances to one on-prem machine, and we estimated we'd save $1000/mo, our builds would be faster, and we'd have fewer failures due to capacity issues. We even had a spare workstation and a rack in the office, so the capex was zero.

I plugged the machine into the rack and had no internet connectivity. I put in an IT ticket, which took 2 days to get a reply, only to be told that this was an unauthorised machine and needed to be imaged by IT. The back and forth took 4 weeks, multiple meetings, and multiple approvals. My guess is that 4 people spent probably 10 hours arguing over whether we should do this or not.

On AWS I can write a python script and have a running windows instance in 15 minutes.
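
Roughly the kind of script I mean; a minimal sketch where the AMI ID and instance type are placeholders, not our actual setup:

    import boto3

    # Launch a Windows build machine on demand (illustrative values only).
    ec2 = boto3.client("ec2", region_name="us-east-1")

    response = ec2.run_instances(
        ImageId="ami-0123456789abcdef0",   # placeholder: a current Windows Server AMI
        InstanceType="c5.2xlarge",
        MinCount=1,
        MaxCount=1,
    )
    instance_id = response["Instances"][0]["InstanceId"]

    # Block until the instance is running, then hand it to the CI runner.
    ec2.get_waiter("instance_running").wait(InstanceIds=[instance_id])
    print(f"{instance_id} is up")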

[+] kelnos|4 months ago|reply
> when did people forget how to run a baremetal server ?

I don't think people have forgotten, but I think Amazon has done an amazingly excellent job of marketing and developer relations over the years, to the point that they've convinced most developers that doing your own thing is 1) a lot of expensive, specialized work, and 2) actively dangerous for your business, whether it's because of security, uptime, or some other ops bogeyman of the day.

Note that I said "developers". Most developers are not sysadmins or IT operations people. Most of them have never set up Linux on a desktop or laptop, let alone on real server hardware (that they've also set up themselves). Most of them never had the chance to forget how to run a bare-metal server; they never knew how in the first place. (Hell, I've been running desktop Linux for 25+ years, and I don't think I've ever set up Linux on actual server hardware. Closest I've come is bare-metal Solaris, but that was like 25 years ago.)

"DevOps" today usually means that you know how to run a CLI tool or drive a web interface to deploy your automatically-built container artifact to some cloud-based production system that someone else manages, hiding the details from you. (This bit also can be true for shops that run on bare metal, depending on how advanced their own sysadmin/ops team is.) While developers are often not decision-makers in a larger org, they can be at smaller orgs, and once those developers get on the cloud, you probably will stay on the cloud (companies like OneUptime are the exception, not the rule), even if you've gotten much larger and it's stupid expensive to continue running that way.

[+] _ea1k|4 months ago|reply
> I'm so surprised there is so much pushback against this..

Same, this trend towards "AWS all the things" has really amazed me.

We've all mocked small companies copying big companies by trying to make their app super-duper scalable from the very start. After all, everyone thinks they are the next Google, despite their 5 total users right now.

But this is really the opposite. AWS is phenomenal for the startup that would readily trade high opex for lower capex. Servers aren't the cheapest things in the world to buy and they depreciate. It makes total sense for startups to start this way.

But why are big companies, with an actual budget for staff, copying the behavior of their favorite startups?

[+] mberning|4 months ago|reply
It’s expensive and the “design” of the services, if you could call it that, is such that you are forced to pay a lot, or play a lot of games to get around it. If you are going to spend your engineering time working around their ridiculous pricing schemes, you might as well spend the money on building things out yourself.

Perfect example: MSK. The brokers are config-locked at certain partition counts, even if your CPU is at 5%. But their MSK Replicator is capped on topic count. So now I have to work around topic counts at the cluster level and partition counts at the broker level, neither of which are inherent limits in the underlying technologies (Kafka and MirrorMaker).

[+] UltraSane|4 months ago|reply
I'm not going to dispute that AWS can be expensive, but in my experience its biggest advantage is SPEED. In every company I worked for that ran their own data centers, every damn thing took FOREVER. New servers took months to buy and rack. Any network change, like a new VLAN, took days to weeks. It was so annoying. But in AWS almost anything is just an API call and at most a few minutes away from being enabled. It is so much more productive.
[+] vidarh|4 months ago|reply
There is this belief that it is not extremely expensive and/or that the ops cost of bare metal will outpace it. It is a belief, and it is very rarely supported by facts.

Having done consulting in this space for a decade, and worked with containerised systems since before AWS existed, my experience is that managing an AWS system is consistently more expensive and that in fact the devops cost is part of what makes AWS an expensive option.

[+] mk89|4 months ago|reply
How would you do multi-region deployments with your own DC?

This is an issue for several companies that start small and within 5 years find the need to expand abroad, be it for data sovereignty or similar reasons, which have become more important than ever in the last 10 years.

Duplicating a region is "a few clicks away" on AWS. This is what the provider enables you to do.

This and a lot of other things. And for such things, yes, you gotta pay.

[+] neves|4 months ago|reply
It's always nice to remember that AWS is responsible for 70% of Amazon's profits.
[+] SJC_Hacker|4 months ago|reply
> I'm so surprised there is so much pushback against this.. AWS is extremely expensive. The use cases for setting up your system or service entirely in AWS are more rare than people seem to realise. Maybe I'm just the old man screaming at cloud (no pun intended) but when did people forget how to run a baremetal server ?

Long term yes you can save money rolling your own.

But with cloud you can get something up and running within maybe a few days, sometimes even faster. Often with built in scalability.

This is a much easier sell to the non-tech (i.e., money) people.

If the project continues, the path of least resistance is often to just continue with the cloud solution. At a certain point, there will be so much tech debt that any long-term savings from traditional on-premises, co-location, or managed hosting are vastly outweighed by the cost of migration.

[+] speleding|4 months ago|reply
The complexity of AWS versus bare metal depends on what you are doing. Setting up an apache app server: just as easy on bare metal. Setting up high availability MySQL with hot failover: much easier on AWS. And a lot of businesses need a highly available database.
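
To make the comparison concrete, a minimal sketch assuming you go with RDS (identifiers and sizes are illustrative; the password would come from Secrets Manager in practice):

    import boto3

    # One API call gets a MySQL instance with a synchronous standby in a
    # second AZ and automatic failover (illustrative parameters only).
    rds = boto3.client("rds", region_name="us-east-1")

    rds.create_db_instance(
        DBInstanceIdentifier="app-db",
        Engine="mysql",
        DBInstanceClass="db.m6g.large",
        AllocatedStorage=100,
        MasterUsername="admin",
        MasterUserPassword="change-me",   # placeholder; don't hard-code secrets
        MultiAZ=True,                     # hot standby + automatic failover
        BackupRetentionPeriod=7,
    )

Doing the equivalent on bare metal means setting up replication, failover tooling, and backups yourself.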
[+] rasjani|4 months ago|reply
> when did people forget how to run a baremetal server ?

My opinion on this: Docker sort of changed the game here. It enabled a lot of people to get a "new and fresh" level of abstraction and not bother about bare metal.

As an example, I work in a company where most consultants are doing DevOps, and k8s is a big part of that.

What made me consider that? I've been told multiple times that "you know your stuff" when I mention some kernel or userland feature that the container approach provides.

[+] comprev|4 months ago|reply
I'm on a Platform team of <8 people and only 3 of us (the most experienced, too) come from sysadmin backgrounds. The rest have only ever known containers/cloud and have never touched (both figuratively and literally :-) bare-metal servers in their careers.

They've never used tools like Ansible (or Anaconda) or been in situations where they couldn't destroy the container and start afresh instantly.

[+] hoppp|4 months ago|reply
AWS spends a lot on educating developers to use their services, and it's a working strategy.

People simply don't believe bare metal is better because of this conditioning towards the cloud.

Every company should strive for self-sufficiency, but this idea is not that widespread in the software industry.

[+] dumbledoren|4 months ago|reply
> when did people forget how to run a baremetal server ?

Bigger question: When did people forget that doing that is much easier than AWS...

[+] j45|4 months ago|reply
The cloud is incredibly profitable for the efficiencies and improvements it's introduced and held onto.

It's easy to push back against what is now the unknown (bare metal) when the layers extending bare metal into cloud services have become better and better, as well as more accessible.

[+] eek2121|4 months ago|reply
I once moved a small site from AWS to Digital Ocean + Cloudflare.

$100-$300 on AWS -> $35/mo for DO + CF. Coincidentally, AWS had an outage soon after, which was avoided thanks to the move.

I have used DO for both clients and myself, and have not had any huge problems with them.

[+] realitysballs|4 months ago|reply
For my org, I don't have the budget for a dedicated in-house opsec team, so going on-prem triggers an additional salary burden for security. How would I overcome this?
[+] ownagefool|4 months ago|reply
The consequence of running ingress and DNS poorly is downtime.

The consequence of running a database poorly is lost data.

At the end of the day they're all just processes on a machine somewhere, none of it is particularly difficult, but storing, protecting, and traversing state is pretty much _the_ job and I can't really see how you'd think ingress and DNS would be more work than the datastores done right.

Now with AWS, I have a SaaS that makes 6 figures and the AWS bill is <$1000 a month. I'm entirely capable of doing this on-prem, but the vast majority of the bill is S3 state, so what we're actually talking about is me being on-call for an object store and a database, and the potential consequences of doing so.

With all that said, there's definitely a price point and staffing point where I will consider doing that, and I'm pretty down for the whole on-prem movement generally.

[+] jagged-chisel|4 months ago|reply
Forget? You have to hire people for that. We are a software organization. We build software. If we rent in the cloud, there is less HR hassle - hiring, raises, bonuses, benefits, firing … none of that headache involved with the cloud.

Technically? Totally doable. But the owners prefer renting in the cloud over the people-related issues of hiring.

[+] cs702|4 months ago|reply
In the early days of cloud service providers, they offered a handful of high-value services, all at great prices, making them cost-competitive with bare metal but much easier. That was then.

Things today are different. As cloud service providers have grown to become dominant, they now offer a vast, complicated tangle of services, microservices, control panels, etc., at prices that can spiral out of control if you are not constantly on top of them, making bare metal cheaper for many use cases.

[+] thelastgallon|4 months ago|reply
These are the features that AWS provides

(1) Massive expansion of budget (100-1000x) to support empire building. Instead of one minimum-wage sysadmin with 2 high-availability, maxed-out servers for $20K-$40K (and 4-hour response time from Dell/HPE), you can have $100M of multi-cloud Kubernetes + Lambda + a mix-and-match of various locked-in cloud services (DB, etc.). And you can have a large army of SRE/DevOps. You get power and influence as a VP of Cloud this-and-that, with 300-1000 people reporting to you.

(2) OpEx instead of CapEx

(3) All leaders are completely clueless about hiring the right people in tech. They hire their incompetent buddies, who hire their cronies. Data centers can run at scale with 5-10 good people. However, they hire 3000 horrible, incompetent, and toxic people, and they build lots of paperwork, bureaucracy, and approvals around it. Before AWS, it was VMware's internal cloud that ran most companies. Getting bare metal or a VM would take months to years, and many, many meetings and escalations. With AWS, "here is my credit card, pls gimme 2 VMs" is the biggest feature.

[+] darkwater|4 months ago|reply
The core of this success is this, IMO:

  > Our workload is 24/7 steady. We were already at >90% reservation coverage; there was no idle burst capacity to “right size” away. If we had the kind of bursty compute profile many commenters referenced, the choice would be different.
Which TBH applies to many, many places, even if they are not aware of it.
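
A quick back-of-the-envelope of why steady 24/7 load changes the math (all numbers below are made up for illustration, not the article's figures):

    # Break-even sketch for one always-on box (illustrative numbers only).
    cloud_monthly = 450.0    # reserved-instance cost, $/month (assumed)
    server_capex = 6000.0    # comparable bare-metal server bought outright (assumed)
    colo_monthly = 150.0     # rack space, power, bandwidth, $/month (assumed)

    months = 36
    cloud_total = cloud_monthly * months
    metal_total = server_capex + colo_monthly * months

    print(f"cloud over {months} months: ${cloud_total:,.0f}")   # $16,200
    print(f"metal over {months} months: ${metal_total:,.0f}")   # $11,400

With no idle burst capacity to right-size away, the comparison really is this flat; a bursty profile would tilt it back toward the cloud.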
[+] sondr3|4 months ago|reply
> Cloud makes sense when elasticity matters; bare metal wins when baseload dominates.

This really is the crux of the matter in my opinion, at least for applications (databases and so on are more nuanced). I've only worked at one place where using cloud functions made sense (keeping it somewhat vague here): data ingestion from stations that could be EXTREMELY bursty. Usually we got data from the stations at roughly midnight every day, nothing a regular server couldn't handle, but occasionally a station would come back online after weeks, or new stations got connected, etc., which produced incredible load for a very short amount of time while we fetched, parsed, and handled each packet. Instead of queuing things for ages, we could just scale it out horizontally to handle the pressure.
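
For flavor, a minimal sketch of that pattern as a Lambda-style handler triggered by an S3 upload (the event layout is the standard S3 notification structure; parse_packet() and the persistence step are hypothetical):

    import boto3

    s3 = boto3.client("s3")

    def parse_packet(raw: bytes) -> dict:
        # placeholder for the real packet parser
        return {"size": len(raw)}

    def handler(event, context):
        # Each uploaded station packet triggers one invocation, so a backlog
        # of weeks of data fans out horizontally instead of queuing.
        for record in event["Records"]:
            bucket = record["s3"]["bucket"]["name"]
            key = record["s3"]["object"]["key"]
            raw = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
            measurement = parse_packet(raw)
            # ...persist measurement to the datastore here...
        return {"processed": len(event["Records"])}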

[+] ksec|4 months ago|reply
Many other points. When the cloud started, it offered great value in adjacent products and services. Scaling was painful, getting bare-metal hardware had long lead times, provisioning took time, DCs were not of as high quality, and networks weren't as redundant. A lot of these are much less of an issue today.

In 2010 you could only get to 64 Xeon cores by going to 8 sockets, i.e. a maximum of 8 cores per socket. And that is ignoring NUMA issues. Today you can get 256 cores per socket that are at least twice as fast per core. What used to be 64 servers can now fit into 1, and by 2030 it will be closer to a 100-to-1 ratio. Not to mention that server software has gotten a lot faster compared to 2010: PHP, Python, Ruby, Java, ASP, or even Perl. If we added everything up, I wouldn't be surprised if we are at a 200-or-300-to-1 ratio compared to 2010.

I am pretty sure there is some version of Oxide in the pipeline that will catch up to the latest Zen CPU cores. If a single server isn't enough, a few Oxide racks should fit 99% of internet companies' usage.

[+] pingoo101010|4 months ago|reply
Many startups and companies couldn't exist if there was only AWS (or GCP / Azure) due to how much they overcharge.

For example, we couldn't offer free GeoIP downloads[0] if we were charged the outrageous $0.09 / GB, and the same is true for companies serving AI models or game assets.

But what makes me almost sick is how slow the cloud is. From network-attached disks to overcrowded CPUs, everything is so slooooow.

My experience is that the cloud is a good thing between $0 and $10,000/month. But you should seriously consider renting bare-metal servers, or owning your own, after that. You can "over-provision" as much as you want when you get 10-20x (real numbers) the performance for 25% of the price.

[0] https://downloads.pingoo.io

[+] yanslookup|4 months ago|reply
FD: I work at Amazon, I also started my career in a time where I had to submit paper requests for servers that had turn around times measured in months.

I just don't see it. Given the nature of the services they offer, it's just too risky not to use as much managed stuff with SLAs as possible. k8s alone is a very complicated control plane plus a freaking database that is hard to keep happy if it's not completely static. In a prior life I went very deep on k8s, including self-managing clusters, and it's just too fragile; I literally had to contribute patches to etcd, and I'm not a DB engineer. I kept reading the post and seeing future failure point after future failure point.

The other aspect is there doesn't seem to be an honest assessment of the tradeoffs. It's all peaches and cream, no downsides, no tradeoffs, no risk assessment etc.

[+] insaneisnotfree|4 months ago|reply
“You lean heavily on managed services (Aurora Serverless, Kinesis, Step Functions) where the operational load is the value prop.”

Not viable even when your core belongs to AWS. Why? Ask Prime Video.

[+] cornfieldlabs|4 months ago|reply
> Equinix Metal got the closest, but bare metal on-demand still carried a 25-30% premium over our CapEx plan. Their global footprint is tempting; we may still use them for short-lived expansion.

> The Equinix Metal service will be sunset on June 30, 2026.

https://docs.equinix.com/metal/

[+] rossdavidh|4 months ago|reply
I had a problem figuring out why the place I was working wanted to move from in-house to AWS; their workload was easily handled by a few servers, they had no big bursts of traffic, and they didn't need any of the specialized features of AWS.

Eventually, I realized that it was because the devs wanted to put "AWS" on their resumes. I wondered how long it would take management to catch on that they were being used as a place to spruce up your resume before moving on to catch bigger fish.

But not long after, I realized that the management was doing the same thing. "Led a team migration to AWS" looked good on their resume, also, and they also intended to move on/up. Shortly after I left, the place got bought and the building it was in is empty now.

I wonder, now that Amazon is having layoffs and Big Tech generally is not as many people's target employer, will "migrated off of AWS to in-house servers" be what devs (and management) want on their resume?

[+] nik736|4 months ago|reply
It's an interesting article, thanks for that.

What people forget about the OVH or Hetzner comparison is that the entry servers they are known for (think the Advance line at OVH or the AX line at Hetzner) come with some drawbacks.

The OVH Advance line, for example, comes without ECC memory in a server that might host databases. It's a disaster waiting to happen. There is no option to add ECC memory to the Advance line, so you have to use Scale or High Grade servers, which are far from "affordable".

Hetzner by default comes with a single PSU and a single uplink. Yes, if nothing goes wrong this is probably fine, but if you need a reliable private network or 10G it will cost extra.

[+] TYPE_FASTER|4 months ago|reply
> It depends on your workload.

Very much this.

Small team in a large company who has an enterprise agreement (discount) with a cloud provider? The cloud can be very empowering, in that teams who own their infra in the cloud can make changes that benefit the product in a fraction of the time it would take to work those changes through the org on prem. This depends on having a team that has enough of an understanding of database, network and systems administration to own their infrastructure. If you have more than one team like this, it also pays to have a central cloud enablement team who provides common config and controls to make sure teams have room to work without accidentally overrunning a budget or creating a potential security vulnerability.

Startup who wants to be able to scale? You can start in the cloud without tying yourself to the cloud or a provider if you are really careful. Or, at least design your system architecture in such a way that you can migrate in the future if/when it makes sense.

[+] pjdesno|4 months ago|reply
I'm involved in a fairly large academic cloud deployment, sited in a 15MW data center built and shared by a few large universities.

There are huge advantages of scale to computer operations in a few areas:

- facility: the capital and running cost of a purpose-built datacenter is far cheaper per rack than putting machines in existing office-class buildings, as long as it's a reasonable size - ours is ~1000 racks, but you might get decent scale at a quarter of that. (also one fat network pipe instead of a bunch of slow ones)

- purchasing: unlike consumer PCs, low-volume prices for major vendor servers are wildly inflated, and you don't get decent prices until you buy quite a few of them.

- operations: people come in integer units, and (assuming your salary ranges are bounded) each person is only competent in a small number of technical areas. Whether you have one machine or 1000s, you need someone who can handle each technology your deployment depends on, from Kubernetes to network ops; multiply by 4x for those requiring 24/7 coverage, or accept long response times for off-hours failures.

That last one is probably the kicker. To keep salary costs below 50% of your total, assuming US pay rates and 5-year depreciation since machines aren't getting faster as quickly as they used to, you probably need to be running tens of millions of dollars in hardware.
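
Rough arithmetic behind that claim (all figures below are assumptions for illustration):

    # Staffing-vs-hardware sketch: keeping salaries under 50% of total cost.
    loaded_cost_per_engineer = 250_000   # $/year, fully loaded US rate (assumed)
    engineers = 12                       # ops team with some 24/7 coverage (assumed)
    salary_per_year = engineers * loaded_cost_per_engineer

    depreciation_years = 5
    # Annual hardware depreciation must at least match the salary bill:
    hardware_fleet = salary_per_year * depreciation_years

    print(f"salary bill: ${salary_per_year:,.0f}/year")      # $3,000,000
    print(f"hardware fleet needed: ${hardware_fleet:,.0f}")  # $15,000,000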

Note that a tiny deployment of a few machines in a tech company is an exception, since you have existing technical staff who can run them in their spare time. (and you have other interesting work for them to do, so recruiting and retention isn't the same problem as if their only job was to babysit a micro-deployment)

That's why it can be simultaneously true that (a) profit margins on AWS-like services are very high, and (b) AWS is cheaper than running your own machines for a large number of companies.

[+] seidleroni|4 months ago|reply
As someone who works with firmware, it is funny how different our definitions of "bare metal" are.
[+] electroly|4 months ago|reply
I put our company onto a hybrid AWS-colocation setup to attempt to get the best of both worlds. We have cheap fiddly/bursty things and expensive stable things and nothing in between. Obviously, put the fiddly/bursty things in AWS and put the stable things in colocation. Direct Connect keeps latency and egress costs down; we are 1 millisecond away from us-east-1 and for egress we pay 2¢/GB instead of the regular 9¢/GB. The database is on the colo side so database-to-AWS reads are all free ingress instead of egress, and database-to-server traffic on the colo side doesn't transit to AWS at all. The savings on the HA pair of SQL Server instances is shocking and pays for the entire colo setup, and then some. I'm surprised hybrids are not more common. We are able to manage it with our existing (small) staff, and in absolute terms we don't spend much time on it--that was the point of putting the fiddly stuff in AWS.
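
To put illustrative numbers on the egress part (the monthly volume below is assumed; the per-GB rates are the ones mentioned above):

    # Monthly egress cost: internet vs. Direct Connect (illustrative volume).
    egress_gb = 50 * 1024          # assume ~50 TB/month leaving AWS

    internet_rate = 0.09           # $/GB, standard internet egress
    direct_connect_rate = 0.02     # $/GB, over Direct Connect

    print(f"internet:       ${egress_gb * internet_rate:,.0f}/month")        # ~$4,600
    print(f"direct connect: ${egress_gb * direct_connect_rate:,.0f}/month")  # ~$1,000

    # Database reads flowing colo -> AWS are free ingress, so the heaviest
    # traffic never pays egress at all.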

The biggest downside I see? We had to sign a 3 year contract with the colocation facility up front, and any time we want to change something they want a new commitment. On AWS you don't commit to spending until after you've got it working, and even then it's your choice.

[+] jcalvinowens|4 months ago|reply
I have seen multiple startups paying thousands of dollars a month in AWS bills to run a tiny service which could trivially run on an $800 desktop on a residential internet connection. It's absolutely tragic.
[+] dimitrios1|4 months ago|reply
One thing I can say definitively, as someone who is definitely not an AI zealot (more of an AI pragmatist): GPT language models have lowered the barrier to running your own bare-metal server. AWS salesfolk have long used the boogeyman of the costs (opportunity, actual, maintenance) of running your own server as the reason you should pick AWS (not realizing you are trading one set of boogeymen for another), but AI has reduced a lot of that burden.
[+] mythz|4 months ago|reply
Several years off AWS, the only thing I still prefer AWS for is SES; otherwise Cloudflare has the more cost-effective managed services. For everything else we use Hetzner US Cloud VMs to host all our app servers and server software.

Our .NET apps are still deployed as Docker Compose apps, which we deploy with GitHub Actions and Kamal [1]. Most apps use SQLite + Litestream with real-time replication to R2, but we have switched to local PostgreSQL for our latest app, with regular backups to R2.

Thanks to AI that can walk you through any hurdle and create whatever deployment, backup and automation scripts you need, it's never been easier to self-host.

[1] https://docs.servicestack.net/kamal-deploy

[+] aeve890|4 months ago|reply
>We're now moving to Talos. We PXE boot with Tinkerbell, image with Talos, manage configs through Flux and Terraform, and run conformance suites before each Kubernetes upgrade.

Gee, how hard is it to find SE experts in that particular combination of ops tools? While in AWS every AWS-certified engineer would speak the same language, the DIY approach surely suffers from the lack of "one way" to do things. Swap Flux for Argo, for example (assuming the post is talking about that Flux and not another tool with the same name), and you have an almost completely different GitOps workflow. How do they manage to settle on a specific set of tools?