Anecdotal example, though I'm sure I'm not the only one:
We had a C++ service. It needed high availability but didn't have particularly high resource requirements. Our setup was an EC2 instance (c5.xlarge) to build and release AMIs (via a bash script using debootstrap that someone probably wrote 10 years ago), an autoscaling group of 3 t2.small instances spread across AZs, and an ALB.
The total cost of this was perhaps $200/mo, and we had incredibly good uptime.
So, what's the catch? Well, it took about 20 minutes to build the service, and about 35 minutes from clicking "deploy new version" to the new version actually running. Someone higher up noticed, and a bright-eyed infra engineer said k8s would make the deploy cycle faster.
Fast forward a year. Our autoscaling group is now 3 c5.xlarge instances, because the kubelet + Docker + CoreDNS + all the other k8s gunk I don't understand need significantly more CPU than our app does (and without the extra CPU, deploys were even slower, since downloading and unpacking the image took so long). We have a new logging system (our old setup wasn't "cloud native", apparently) that takes a gig more memory per node. A gig of memory per node to support a service that peaks at 200 MiB RSS. Building and deploying a new version still takes about 35 minutes, because compiling C++ is exactly the same speed in a Dockerfile as it is on an EC2 instance.
It costs about $600/mo, and it carries far more operational load. When it isn't having issues, the p99 latency is identical.
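As a back-of-the-envelope check, the $200 vs. $600 figures are roughly consistent with published on-demand pricing. The hourly rates and the logging/control-plane allowances below are my own assumptions (approximate us-east-1 on-demand prices), not numbers from the original setup:

```python
# Rough monthly cost check for the two setups.
# Assumed prices (approximate us-east-1 on-demand, not from the post):
#   t2.small  ~ $0.023/hr,  c5.xlarge ~ $0.17/hr,
#   ALB       ~ $20/mo,     managed control plane ~ $73/mo.
HOURS_PER_MONTH = 730

def monthly(hourly_rate, count):
    """Monthly cost of `count` always-on instances at `hourly_rate`."""
    return hourly_rate * count * HOURS_PER_MONTH

# Old setup: 3x t2.small behind an ALB, plus an always-on c5.xlarge build box.
old = monthly(0.023, 3) + monthly(0.17, 1) + 20

# New setup: 3x c5.xlarge nodes + ALB + managed control plane,
# plus a rough $100/mo allowance for the heavier logging stack.
new = monthly(0.17, 3) + 20 + 73 + 100

print(f"old ~ ${old:.0f}/mo, new ~ ${new:.0f}/mo")
```

Under those assumptions the old setup lands near $195/mo and the new one near $565/mo, so the 3x cost jump is mostly just the bigger instances plus the extra managed pieces.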
> better performance or lesser costs
That claim is the opposite of what you'd expect. K8s adds more components and more resource usage. Why wouldn't that be slower and cost more?
k8savu|3 years ago