top | item 32477864


tennix | 3 years ago

> This is particularly problematic for hosted Kubernetes services like Google Kubernetes Engine (GKE), which often limit the CPU and memory available to the API server. These services can gracefully scale the API server up when they predict it will require more resources - e.g. when more nodes are created. Unfortunately at the time of writing most don’t factor in more CRDs being created and won’t begin scaling until the API server is repeatedly “OOM killed” (terminated for exceeding its memory budget).

Yeah, that's a limitation of hosted Kubernetes services. Internally we use Crossplane a lot. Though we haven't hit this limitation yet, we took another approach: deploying a standalone etcd + Kubernetes apiserver as ordinary pods in the hosted (EKS/GKE) cluster and registering the CRDs with that apiserver, so we can scale the etcd and apiserver easily ourselves.
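
The comment doesn't give details, but the setup it describes might look roughly like this: a Deployment running a dedicated etcd alongside a kube-apiserver pointed at it, scheduled as regular pods in the managed cluster. All names, image versions, and most flags here are assumptions; real deployments also need cert, auth, and service-account configuration, which is omitted.

```yaml
# Hedged sketch only: a dedicated etcd + kube-apiserver running as
# ordinary pods; names, images, and flags are illustrative assumptions.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: crd-apiserver            # hypothetical name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: crd-apiserver
  template:
    metadata:
      labels:
        app: crd-apiserver
    spec:
      containers:
      - name: etcd
        image: quay.io/coreos/etcd:v3.5.9          # assumed version
        command: ["etcd", "--data-dir=/var/lib/etcd"]
      - name: apiserver
        image: registry.k8s.io/kube-apiserver:v1.25.0   # assumed version
        command:
        - kube-apiserver
        - --etcd-servers=http://127.0.0.1:2379
        # cert, auth, and service-account flags omitted for brevity
```

CRDs would then be registered against this secondary apiserver instead of the managed control plane, e.g. `kubectl --kubeconfig crd-apiserver.kubeconfig apply -f my-crd.yaml` (kubeconfig and file names hypothetical), keeping the CRD load off the provider-managed apiserver.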


dmlittle | 3 years ago

Depending on the size of your cluster, scaling out the apiservers is not necessarily cheap, as each new apiserver's watch caches need to be initialized, which puts load on etcd. Generally speaking, it's probably better to scale the apiservers vertically rather than horizontally.
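
For a self-managed apiserver pod like the one described upthread, vertical scaling would just mean raising the pod's resource requests and limits rather than adding replicas. A minimal sketch, with illustrative values:

```yaml
# Hedged sketch: scale the apiserver container vertically by giving it
# more CPU and memory instead of adding replicas (values are assumptions).
resources:
  requests:
    cpu: "4"
    memory: 16Gi
  limits:
    memory: 16Gi
```

Each replica maintains its own watch caches, so one larger apiserver avoids the etcd load of re-populating caches on every new instance.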