I’m baffled to see so much anti-k8s sentiment on HN. Is it because most commenters are developers used to services like heroku, fly.io, render.com, etc., or because they run their apps on VMs?
I think some are just pretty sick and tired of the explosion of needless complexity we've seen in the last decade or so in software, and rightly so. This is an industry-wide problem of deeply misaligned incentives (& some amount of ZIRP gold rush), not specific to this particular case - if this one is even a good example of this to begin with.
Honestly, as it stands, I think we'd be seen as pretty useless craftsmen in any other field due to an unhealthy obsession with our tooling and meta-work - consistently throwing any kind of sensible resource usage out of the window in favor of just getting to work with certain tooling. It's some kind of a "Temporarily embarrassed FAANG engineer" situation.
Fair point, but I think the key distinction here is unnecessary complexity versus necessary complexity. Are zero-downtime deployments and load balancing unnecessary? Perhaps for a personal project, but for any company with a consistent userbase I'd argue they're non-negotiable, or should be anyway. Where that's the expectation, k8s seems like the simplest answer, or near enough to it.
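And the k8s side of that really is small. A minimal sketch of a zero-downtime rollout (the image name and health endpoint below are hypothetical):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0   # never take a pod down before its replacement is ready
      maxSurge: 1
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: example/web:1.2.3     # hypothetical image
        ports:
        - containerPort: 8080
        readinessProbe:              # gates traffic until the pod answers
          httpGet:
            path: /healthz
            port: 8080
```

With `maxUnavailable: 0` plus a readiness probe, a `kubectl rollout` replaces pods one at a time and only shifts traffic to ones that pass the probe.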
I agree with this somewhat. The other day I was driving home and saw a sprinkler head that had broken on the side of the road and was spraying water everywhere. It made me think: why aren't sprinkler systems designed with HA in mind? Why aren't there dual water lines with dual sprinkler heads everywhere, with an electronic component that detects a break in a line and automatically switches to the backup? It's because the downside of water spraying everywhere and the grass becoming unhealthy or dying costs less than deploying it HA would.
In the software/tech industry it's commonplace to just accept that your app can't be down for any amount of time, no matter what. No one checked how much more it would cost (engineering time & infra) to deploy the app so it would be HA, so no one checked whether it would be worth it.
I blame this logic on a decade of low interest rates. I could be wrong.
For me personally, I get a little bit salty about it due to imagined, theoretical business needs of being multi-cloud, or being able to deploy on-prem someday if needed. It's tough to explain just how much longer it'll take, how much more expertise is required, how much more fragile it'll be, and how much more money it'll take to build out on Kubernetes instead of your AWS deployment model of choice (VM images on EC2, or Elastic Beanstalk, or ECS / Fargate, or Lambda).
I don't want to set up or maintain my own ELK stack, or Prometheus. Or wrestle with CNI plugins. Or Kafka. Or high availability Postgres. Or Argo. Or Helm. Or control plane upgrades. I can get up and running with the AWS equivalent almost immediately, with almost no maintenance, and usually with linear costs starting near zero. I can solve business problems so, so much faster and more efficiently. It's the difference between me being able to blow away expectations and my whole team being quarters behind.
That said, when there is a genuine multi-cloud or on-prem requirement, I wouldn't want to do it with anything other than k8s. And it's probably not as bad if you do actually work at a company big enough to have a lot of skilled engineers that understand k8s--that just hasn't been the case anywhere I've worked.
Genuine question: how are you handling load balancing, log aggregation, failure restarts + readiness checks, deployment pipelines, and machine maintenance schedules with these “simple” setups?
Because as annoying as getting the Prometheus + Loki + Tempo + Promtail stack going on k8s is, I don’t really believe that writing it from scratch is easier.
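To be fair, most of that stack ships as Helm charts these days. A sketch, assuming the upstream repo names and a cluster you can already reach:

```shell
# Add the upstream chart repos (these are the published defaults)
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo add grafana https://grafana.github.io/helm-charts
helm repo update

# Prometheus + Grafana + Alertmanager in one chart
helm install monitoring prometheus-community/kube-prometheus-stack \
  -n monitoring --create-namespace

# Loki for logs, with Promtail shipping them from every node
helm install loki grafana/loki-stack -n monitoring --set promtail.enabled=true
```

Still annoying to tune, but "annoying to tune" versus "write log aggregation from scratch" is the comparison being made.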
I think you're just used to AWS services and don't see the complexity there. I tried running some stateful services on ECS once, and it took me hours to have something _not_ working. In Kubernetes it takes me literally minutes to achieve the same thing (+ automatic chart updates with renovatebot).
I hear a lot of comments that sound like they come from people who used K8s years ago and not since. The cloud providers have made K8s management stupid simple at this point; you can absolutely get up and running immediately, with no worry about upgrades, on a modern provider like GKE.
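e.g., on GKE Autopilot it's one command to a managed control plane (cluster name and region below are placeholders):

```shell
# Create an Autopilot cluster; Google manages nodes, upgrades, and scaling
gcloud container clusters create-auto demo-cluster --region=us-central1

# Wire up kubectl and confirm the cluster answers
gcloud container clusters get-credentials demo-cluster --region=us-central1
kubectl get nodes
```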
In some ways, it's nice to see companies move to use mostly open source infrastructure, a lot of it coming from CNCF (https://landscape.cncf.io), ASF and other organizations out there (on top of the random things on github).
For me it is about VMs. I feel uneasy knowing that any kernel vulnerability will allow malicious code to escape the container and explore the Kubernetes host.
There are kata-containers, I think; they might solve my angst and make me enjoy k8s.
Overall... There's just nothing cool in kubernetes to me. Containers, load balancers, megabytes of yaml -- I've seen it all. Nothing feels interesting enough to try
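(For reference, opting a workload into kata-containers is just a RuntimeClass plus one field on the pod. The handler name `kata` below depends on how the node's container runtime is configured, so treat this as a sketch:)

```yaml
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: kata
handler: kata        # must match the runtime handler configured on the nodes
---
# A pod opting into the VM-isolated runtime
apiVersion: v1
kind: Pod
metadata:
  name: isolated-app
spec:
  runtimeClassName: kata
  containers:
  - name: app
    image: nginx
```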
vs. the application getting hacked and running loose on the VM?
If you have never dealt with "I have to run these 50 containers plus Nginx/Certbot while figuring out which node is best to run each on," yeah, I can see you not being thrilled about Kubernetes. For the rest of us, though, Kubernetes handles that easily.
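e.g., the whole Nginx/Certbot dance collapses into one Ingress once ingress-nginx and cert-manager are installed. A sketch, assuming a ClusterIssuer named `letsencrypt-prod` already exists and the hostname and service are hypothetical:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web
  annotations:
    # cert-manager watches this and provisions/renews the TLS cert
    cert-manager.io/cluster-issuer: letsencrypt-prod
spec:
  ingressClassName: nginx
  tls:
  - hosts: [app.example.com]
    secretName: web-tls        # cert-manager fills this Secret
  rules:
  - host: app.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web
            port:
              number: 80
```

Scheduling across nodes, meanwhile, is just what the scheduler does; nobody has to figure out "which node is best" by hand.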
Kubernetes itself is built around mostly solid distributed system principles.
It's the ecosystem around it which turns things needlessly complex.
Just because you have Kubernetes, you don't necessarily need Istio, Helm, Argo CD, Cilium, and whatever half-baked stuff CNCF pushed yesterday.
For example, take a look at Helm. Its templating is atrocious, and unless this has changed recently, it doesn't have a way to order resources properly except hooks. Sometimes resource A (a deployment) depends on resource B (some CRD).
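For what it's worth, the hook mechanism looks roughly like this — a Job annotated to run before the main install, which is about all the ordering control Helm gives you (the CRD name and image below are hypothetical):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: "{{ .Release.Name }}-crd-wait"
  annotations:
    "helm.sh/hook": pre-install            # run before the release's resources
    "helm.sh/hook-weight": "-5"            # lower weights run first
    "helm.sh/hook-delete-policy": hook-succeeded
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: wait
        image: bitnami/kubectl             # any image with kubectl; an assumption
        command: ["kubectl", "wait", "--for=condition=Established",
                  "crd/mycrds.example.com", "--timeout=120s"]
```

Note there's no declarative "A depends on B" — you get weights on hooks and nothing else.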
The culture around Kubernetes dictates that you bring in everything CNCF pushes. And most of it is half-baked MVPs.
---
The word "devops" has created the expectation that backend developers should be the ones fighting Kubernetes when something goes wrong.
---
Containerization is done poorly by many orgs, with no care for security or image size. That's a rant for another day. I suspect this isn't a big reason for the Kubernetes hate here.
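(The image-size/security part is mostly solved by a multi-stage build. A sketch — the module path is hypothetical:)

```dockerfile
# Stage 1: compile in a full toolchain image
FROM golang:1.22 AS build
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /app ./cmd/server

# Stage 2: ship only the binary on a distroless base
# (no shell, no package manager, runs as a non-root user)
FROM gcr.io/distroless/static-debian12:nonroot
COPY --from=build /app /app
ENTRYPOINT ["/app"]
```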
darby_nine|1 year ago
FAANG engineers made the same mistake, too, even though the analogy implies comparative competency or value.