K3s + FluxCD. There's something nice about using git to add a Helm repo and a HelmRelease with a few values, then `git push`. Shortly afterwards there's a new DNS record and TLS cert, and I can hit https://mynewservice.example.com
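A minimal sketch of what that flow looks like, assuming the public `podinfo` chart as a stand-in and the hostname from above (Flux API versions may differ by release):

```yaml
apiVersion: source.toolkit.fluxcd.io/v1
kind: HelmRepository
metadata:
  name: podinfo
  namespace: flux-system
spec:
  interval: 1h
  url: https://stefanprodan.github.io/podinfo
---
apiVersion: helm.toolkit.fluxcd.io/v2
kind: HelmRelease
metadata:
  name: podinfo
  namespace: flux-system
spec:
  interval: 10m
  chart:
    spec:
      chart: podinfo
      sourceRef:
        kind: HelmRepository
        name: podinfo
  # "a few values" - everything else is the chart's defaults
  values:
    ingress:
      enabled: true
      hosts:
        - host: mynewservice.example.com
```

Commit both, push, and Flux reconciles the release; cert-manager and external-dns (if installed) pick up the Ingress from there.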
Kubernetes solves real problems for the 1% who need it. The other 99% are paying a massive complexity tax for capabilities they never use, while 87% of their provisioned CPU sits idle.
Here's where the author is just wrong:
- abstracts away ssh - makes it pretty unnecessary
- rbac multi tenancy
- better automations
- orchestrating more than one cluster
- better infra as code
- resource provisioning is only as strict as you make it; if you don't want reservations, just use limits
- large mind share, bitnami (was) great
I use k3s for my home network because it's simple and easy; thinking that k8s is overengineered is just plain wrong - it's just different, especially if you compare k8s distributions designed for different things, where for example k3s bundles csi, cni, ctl, and ingress for you.
I actually struggle with compose ('orchestration' alternative) significantly more since it usually has complicated workarounds to missing features.
I have been running 5 k8s-flavored clusters for more than half a decade, ranging from 1 to 40 nodes.
The author claimed cert-manager as inherent k8s overhead (it's not), but then didn't mention certificate management with Docker Swarm at all. They lost me there.
> If you need granular control over every tiny aspect of your container orchestration — network policies, pod scheduling, resource quotas, multi-tenant isolation, custom admission controllers, autoscaling on custom metrics — Kubernetes gives you knobs for all of it.
> The problem is that 99% of teams don't need any of those knobs.
I keep hoping for a Docker Swarm revival. It's the right size for small-to-medium-size deployments with normal requirements.
Every enterprise team (at least those in the B2B business) needs this. Security clearances (zero-trust boundaries) and security compliance are a must. Maybe in the B2C space you might not need that, depending on how secure you want to be given the data you hold.
ECS Fargate is basically this on AWS. It's just not cloud agnostic. But Swarm itself, while cloud agnostic, is a proprietary product as well, so you still get the lock-in, just at a different layer.
Can you control the docker swarm API from within a container that is running inside of it?
I think one of the killer features of k8s is how simple it is to write clients that manipulate the cluster itself, even when they're running from inside of it. Give them the right role etc. and you're done. You don't even have to write something as complete as an actual controller/operator - but that's an option too.
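A sketch of the "give them the right role" part, assuming a hypothetical ServiceAccount named `automation` that only needs to read pods (all names here are illustrative):

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: automation
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: default
rules:
  - apiGroups: [""]          # "" = the core API group (pods, services, ...)
    resources: ["pods"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: automation-pod-reader
  namespace: default
subjects:
  - kind: ServiceAccount
    name: automation
    namespace: default
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```

A pod running under that ServiceAccount can then use its client library's in-cluster config (e.g. `rest.InClusterConfig()` in client-go) and talk to the API server with exactly those permissions.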
You can. I think there are a couple of approaches: bind-mount the Docker socket, expose it on localhost and use host networking for the consuming container, or use one of the various socket-proxy projects. There may be other ways; curious if anyone else knows more.
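The bind-mount approach might look like this in a Compose file (the service name and command are illustrative, and the usual caveat applies: mounting the socket effectively grants root on the host):

```yaml
services:
  swarm-client:
    image: docker:cli   # any image with a Docker client works
    volumes:
      # client inside the container talks to the host daemon
      - /var/run/docker.sock:/var/run/docker.sock
    command: docker service ls
```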
The author here repeatedly claims that teams would function identically on Swarm and are wasting resources using Kubernetes.
You don’t even need to be a mid-sized team to need stuff like RBAC, service mesh, multi-cluster networking, etc.
Claiming that Kubernetes only "won" because of economic pressure is only true in the most basic sense, and claiming it's a resume padder is flat-out insulting to its actual technical merits.
The multi-tenant nature and innate capabilities are partly economics, but operators, extensibility, and platform portability across different environments are actual technical merits.
Claiming that autoscaling is optional and not required for most production environments is at best myopic.
It also greatly undersells the operational complexity that autoscaling actually solves versus a reactive script based solely on CPU: metrics pipelines, cluster-level resource constraints, and pod disruption budgets.
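As a rough illustration of what that built-in machinery replaces, a basic HPA (CPU-only here; custom metrics need a metrics adapter, and the Deployment name is hypothetical):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web            # hypothetical workload
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # scale out above ~70% of requested CPU
```

Requests/limits on the Deployment, the metrics-server, and any PodDisruptionBudgets all feed into how this behaves; that is the complexity the "just run a script" framing hides.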
As for the repeated claim that it just "works": great. Not working is more a function of the application than the platform.
I dunno, this whole article frames kubernetes as a massive overhead and monolithic beast rather than the programmable infrastructure that it is.
It also tries to minimize many real-world needs like multi-team isolation, extensibility, and ecosystem integrations.
> I dunno, this whole article frames kubernetes as a massive overhead
The author describes his context as a setup with two $83/year VPS instances - a scale so incredibly minuscule compared to typical deployments that any of his arguments against one of the core cloud technologies fall flat.
Of course he doesn't need Kubernetes. It's fine.
Docker Swarm doesn't have the mindshare for effective hiring.