There is a corollary to this: Do you really need cloud infrastructure?
Cattle, not pets, right?
Well, no. Have you seen Amazon's AWS margins? It's 30%.
After Amazon buys the hardware and pays people to run it, it still makes 30%. Not having hardware is someone else's profit.
That isn't cattle, it's contract poultry farming.
Learn capacity planning. Learn to write cacheable, scalable apps. Track your hardware spend per customer. Learn about assets vs. liabilities (hang out with the accountants, they are nerds too). Do some engineering, don't just be a feature factory. And if you are going to build features, make damn sure that you build tracking into them, and hold the product teams' feet to the fire when the numbers don't add up (see: friends with accountants, and tracking money).
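As a rough illustration of the "track your hardware spend per customer" point, here is a minimal sketch; every customer name and number below is made up, and a real system would derive the usage shares from metering data:

```python
# Minimal sketch: allocate total infrastructure spend across customers
# by their measured usage share. All names and figures are hypothetical.
monthly_infra_cost = 25_000.0        # total hardware + hosting spend (USD)
usage_by_customer = {                # e.g. share of requests or storage
    "acme": 0.40,
    "globex": 0.35,
    "initech": 0.25,
}

cost_per_customer = {
    name: monthly_infra_cost * share
    for name, share in usage_by_customer.items()
}

for name, cost in sorted(cost_per_customer.items()):
    print(f"{name}: ${cost:,.2f}/month")
```

Even this crude attribution is enough to flag customers whose hardware cost exceeds their revenue.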
But it's also someone else's economies of scale. The chances of me getting datacenter space, hardware, bandwidth, and expert 24/7 staffing at the same volume discounts they do is... slim. Particularly for the small amounts I'd need.
If you are the owner who foots the bill and you have the capability to run your own infra, then nothing beats it. If you don't have the capability, then cloud throws you a lifeline, at a price of course. Pay for it and be happy.
If you are the person who runs/manages the infra for someone else, then there is no point in saving dollars. You peddle Kubernetes, go to KubeCon, post all about it on LinkedIn, and establish yourself as a Kubernetes expert. When the owner of your current gig goes under, you will have a bunch of job offers to pick from.
Besides, Kubernetes very elegantly solves a problem that most companies do not have. Not everyone is Google, running apps at web scale with an expectation of 99.99% uptime...
Cloud is an abstraction over hardware. Like any good abstraction, it makes certain tradeoffs.
Sometimes it makes sense to move to a lower level of abstraction, for performance, cost, or compatibility reasons. In this case, diving under the cloud and running your own servers could save 30% of your server costs.
Cloud adoption has shown that many (most?) companies prefer the convenience over cost savings. Maybe optimizing hardware spend is not the best way to optimize a business.
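To make the "save 30%" arithmetic concrete, a back-of-envelope comparison; every number below is an illustrative assumption, not a real quote, and a real evaluation would also price in migration effort and staffing risk:

```python
# Back-of-envelope: when does running your own servers beat cloud?
# All figures are illustrative assumptions.
cloud_monthly = 10_000            # current cloud bill (USD/month)
hardware_capex = 150_000          # one-time server purchase
amortization_months = 36          # depreciate hardware over 3 years
ops_monthly = 4_000               # colo space, bandwidth, staffing share

on_prem_monthly = hardware_capex / amortization_months + ops_monthly
savings = cloud_monthly - on_prem_monthly

print(f"on-prem: ${on_prem_monthly:,.0f}/mo, savings: ${savings:,.0f}/mo")
```

Under these made-up numbers, on-prem wins only after the hardware is amortized faster than the ops overhead eats the margin; change any input and the conclusion can flip, which is rather the point.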
I fully agree with your last paragraph, but not sure what it has to do with cloud infrastructure specifically. AWS also leverages huge economies of scale into profit, it's not like you're going to realize equivalent margins by running your own little server in a colo somewhere. You certainly won't realize equivalent availability, scalability, security, support ecosystem, etc with it either. Cloud infra can make a lot of sense even with "pets" - you just gotta make sure to understand requirements, limitations, and use the right tools for the problem. For me personally, I'll reach for cloud infra these days as a reasonable default (similar to reaching for Postgres as a reasonable database default), especially if it's managed and affordable.
The biggest thing “the cloud” has gotten me is that I no longer have to call a sales rep and wait a week if I need a server. Yes, this is a process problem, but for whatever reason it seems to stop being a problem (as much) when companies move to the cloud. I have seen this 3+ times now. As an engineer, I’d much rather move off prem than fix bad management.
Fully agree. Cloud can be an easy way to get started since you don't have to pay as much up front, and even if you need extremely elastic scaling, you'll save a fortune in the long term by investing in at least some on-prem hardware to handle the off-peak workloads. If you have a predictable, stable load, you can save even more!
If you need servers for the majority of your application, don't use AWS or any other major cloud provider. Their benefits come from economies of scale, so if you cannot be part of that economy, do something else.
And you can be part of it really only by doing cloud native stuff like Lambda, DynamoDB et al.
Your first full-time sysadmin is an expensive hire. So is your first DBA. And even if your database backups are working now, there's a good chance they'll silently break in the next several years.
The simplest thing you could do is build a single-container application and deploy it to a Heroku-like system with a fully managed database. If this actually works for your use case, then definitely avoid Kubernetes.
But eventually you'll reach a point where you need to run a dozen different things, spread out across a bunch of servers. You'll need cron jobs and Grafana and maybe some centralized way to manage secrets. You'll need a bunch of other things. At this point, a managed Kubernetes cluster is no worse than any other option. It's lighter weight than 50 pages of Terraform. You won't need to worry about how to get customized init scripts into an autoscaling group.
The price is that you'll need to read an O'Reilly book, you'll need to write a moderate amount of YAML, and you'll need to pay attention to the signs reading "Here There Be Dragons."
Kubernetes isn't the only way to tackle problems at this scale. But I've used Terraform and ECS and Chef and even a custom RPM package repo. And none of these approaches were significantly simpler than Kubernetes once you'd deployed a full, working system for a medium-sized organization.
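For a sense of what that "moderate amount of YAML" looks like, here is a minimal Deployment sketch; the name, image, and port are placeholders, and a real setup would add at least a Service and resource limits:

```yaml
# Minimal Kubernetes Deployment sketch (names and image are placeholders).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3                     # three identical pods behind one spec
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: registry.example.com/web:1.0.0   # placeholder image
          ports:
            - containerPort: 8080
```

Roughly twenty lines buys you restarts, rollouts, and replica management; the verbosity compounds, but so does what it replaces.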
I’m sceptical of this article. I’m an indie dev using K8s at Vultr (VKS) and it’s absolutely simplified my life.
The article suggests just using EC2 instead of K8s, but if I do that, I now have to manage an entire operating system. I have to make sure the OS is up to date, and balance all the nuances this entails, especially balancing downtimes, and recovery from upgrades. Major OS upgrades are hard, and pretty much guarantee downtime unless you’re running multiple instances in which case how are you managing them?
Contrast to VKS where, with much less effort, OS upgrades are rolled out to nodes with no downtime to my app. Yes, getting to this point takes a little bit of effort, but not much. And yes, I have multiple redundant VPS, which is more expensive, but that’s a feature.
K8s is perhaps overly verbose, and like all technologies it has a learning curve, but I'm gonna go out on a limb here and say that I’ve found running a managed K8s service like VKS is way easier than managing even a single Debian server, and it provides a pile of functionality that is difficult or impossible to achieve with a single VPS.
And the moment you have more than one VPS, it needs to be managed, so you’re back at needing some kind of orchestration.
The complexity of maintaining a unix system should not be underestimated just because you already know how to do it. K8s makes my life easier because it does not just abstract away the underlying node operating system, it obviates it. In doing so, it brings its own complexities, but there’s nothing I miss about managing operating systems. Nothing.
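One concrete piece of that no-downtime upgrade story is a PodDisruptionBudget, which tells the cluster how many pods must stay up while nodes are drained; the label and count below are illustrative, and this assumes the workload already runs more than one replica:

```yaml
# Sketch: keep the app serving while nodes are drained for OS upgrades.
# The "app: web" label and minAvailable value are illustrative.
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: web-pdb
spec:
  minAvailable: 2           # never voluntarily evict below 2 running pods
  selector:
    matchLabels:
      app: web
```

With this in place, a managed service's node upgrades drain one node at a time and wait for replacement pods before proceeding.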
The main focus of the post is to highlight some of the long-term risks and consequences of standardizing around Kubernetes in an org. If you've done a proper evaluation, and still think Kubernetes makes sense for you, then it's probably a sound decision. But I think many skip the evaluation step or do it hastily. The post is more targeted towards organizations with at least a handful of employees. What works for an indy dev does not necessarily scale and work for SMBs or larger orgs - those are very different contexts.
> The article suggests just using EC2 instead of K8s
Not quite. I suggest strongly considering using managed services when it makes sense for your organization. The equivalent of k8s in terms of managed services would be Amazon Elastic Container Service (ECS) as the control plane, perhaps with AWS Fargate as the compute runtime.
(I wouldn't really call EC2 a managed service - it's more in the territory of Infrastructure as a Service)
There are some legit notions here, but overwhelmingly it uses insinuation and suggestion to sow Fear, Uncertainty, and Doubt.
> Despite its portability, Kubernetes also introduces a form of lock-in – not to a specific vendor, but to a paradigm that may have implications on your architecture and organizational structure. It can lead to tunnel vision where all solutions are made to fit into Kubernetes instead of using the right tool for the job.
This seems a bit absurd on a number of fronts. It doesn't shape architecture that much, in my view; it runs your stuff. Leading to tunnel vision, preventing the right tool for the job? That doesn't seem to be a particularly real issue; most big services have some kind of Kubernetes operator that seems to work just fine.
Kubernetes seems to do a pretty fine job of exposing platform, in a flexible and consistent fashion. If it was highly opinionated or specific, it probably wouldn't have gotten where it is.
I think the larger issue is how Kubernetes often is implemented in organizations - as part of internal developer platforms owned by central teams which on purpose or by accident can end up dictating how development teams should work. I think it's easy for such central teams to fall into the trap of trying to build smart, custom abstractions on top of Kubernetes to simplify things, but over time I believe these types of abstractions run a high risk of slowing down the rest of the org (good abstractions are really hard to come by!) and creating fuzzy responsibility boundaries between central and development teams. As an example, this can affect an organizational structure by (re-)introducing functional silos between development and operations. Can a development team really be fully responsible for what they build if they rely on high-level, custom abstractions that only someone else in the org really understands?
Furthermore, if everything in an org is containerized and runs on Kubernetes, it's really easy to have a strong bias towards containerized workloads, which in turn can affect the kind of systems you build and their architecture.
> This seems a bit absurd on a number of fronts. It doesn't shape architecture that much, in my view; it runs your stuff.
I mean, let's be candid.
There are plenty of times where "containers" are bags of shit software that we're pushing into production and throwing hardware at to keep things going. There are containers out there with out-of-date libraries that aren't getting updated because they "work" and no one gives a shit.
If you can get away with that, what is the incentive to do highly integrated engineering that produces diagonal scalability? Why be WhatsApp when you can just throw money at bad software?
I don't think most companies can say why they're not at least isolating workloads with something like Kata Containers rather than glorified cgroup jails, whether they have an inventory of all the services they're running and why, which machines hold the authoritative copies of data, how they back it up without replicating corruption, or how they'd do disaster recovery/BCP on it.
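For reference, opting a workload into a VM-backed runtime like Kata is a small amount of configuration once the runtime is installed on the nodes; the names and image below are placeholders:

```yaml
# Sketch: stronger workload isolation via a VM-backed container runtime.
# Assumes kata-containers is already installed on the nodes; names and
# image are placeholders.
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: kata
handler: kata               # maps to the node's configured kata runtime
---
apiVersion: v1
kind: Pod
metadata:
  name: isolated-workload
spec:
  runtimeClassName: kata    # run this pod in a lightweight VM, not a cgroup jail
  containers:
    - name: app
      image: registry.example.com/app:1.0.0   # placeholder image
```

The hard part isn't the YAML; it's knowing which workloads actually warrant the isolation, which loops back to having an inventory at all.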
For small teams I also think Kubernetes often greatly complicates the per-service operational overhead by making it much more difficult for most engineers to manage their own deployments. You will inevitably reach a point that engineers need to collaborate with infra folks, but in my experience that point gets moved up a lot by using Kubernetes.
Hey now, I made a killing in AWS consulting to convince megacorps to get rid of their own hardware and avoid going the OpenStack route.
The problems of pre-IaaS and pre-K8s were manageability, flexibility, and capacity utilization. These problems still haven't really been solved in a standardized, interoperable, and uniform manner, because stacks continue to mushroom in complexity. Oxide appears to be on the right track, but much could still be done to reduce tinkering and redundant abstractions, and to handle the conventional lifecycle management and cross-cutting concerns that people don't want to think about whenever another new way comes along.
I found that just using Cloud Run and similar technologies is simpler and easier to manage than Kubernetes. It gives you auto-scaling, fast startup, a limit on the number of concurrent connections to each instance, and scale-to-zero functionality.
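Those features map fairly directly onto a Cloud Run service spec. A hedged sketch, assuming the Knative-style `service.yaml` form Cloud Run accepts; the service name, image, and limits are placeholders:

```yaml
# Sketch of a Cloud Run service spec covering the features above.
# Name, image, and numeric limits are placeholders.
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: my-service
spec:
  template:
    metadata:
      annotations:
        autoscaling.knative.dev/minScale: "0"   # scale to zero when idle
        autoscaling.knative.dev/maxScale: "10"  # auto-scaling ceiling
    spec:
      containerConcurrency: 80   # cap concurrent requests per instance
      containers:
        - image: gcr.io/my-project/my-service:latest   # placeholder image
```

The same knobs are also exposed as flags on `gcloud run deploy`, so most teams never touch the YAML at all.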
I agree that Cloud Run greatly simplifies deployments.
Unfortunately, it only auto-scales based on requests and, eventually, CPU. We are in the process of moving our Temporal workers from Cloud Run to GKE Autopilot, which is ~30% cheaper given we can use arm64 Scale-Out nodes.
That said, we are planning on doing a cloud exit in the future. I don't feel we need Kubernetes, but we do need to orchestrate containers. In our case, it's less scale, and more isolation.
nova22033|1 year ago
I'm expected to write a service like S3?
eddd-ddde|1 year ago
Maybe some people do need a trailer, maybe some do not. As long as you don't blindly follow the cloud, you can also profit from it.
alex_lav|1 year ago
Except in terms of pricing...?
autoexecbat|1 year ago
When your team has familiarity with something, it's a bit harder to suggest an alternative unless it's quite a lot better.