rsanders | 5 years ago
A container without either a request or a limit is twice-damned, and will be scheduled as BestEffort. The entire cgroup slice for all BestEffort pods is given a cpu.shares value of 2 (roughly equivalent to 2 milliCPUs), and if the kernel scheduler is functioning well, no pod in there is going to disrupt anything but other BestEffort pods, no matter how much processor it demands. Throw in a 64-thread busy-loop and no Burstable or Guaranteed pods should notice much.
Of course that's the ideal. There is an observable difference between a process that relinquishes its scheduler slice and one that must be pre-empted. But I wouldn't call that a major disruption. Each pod will still be given its full requested share of CPU.
If that's not the case, I'd love to know!
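For reference, the request-to-shares arithmetic behind that "cpu.shares of 2" can be sketched as follows. This is a hedged approximation of the kubelet's cgroup v1 conversion (the function name and constants here are written from memory of the Kubernetes source, not quoted from it):

```python
# Sketch of how Kubernetes maps a CPU request (in millicores) to a
# cgroup v1 cpu.shares value. Assumed to mirror the kubelet's helper;
# constants are the conventional cgroup values.
SHARES_PER_CPU = 1024    # cgroup shares corresponding to one full core
MILLI_CPU_PER_CPU = 1000  # millicores per core
MIN_SHARES = 2            # kernel-enforced minimum; what BestEffort gets

def milli_cpu_to_shares(milli_cpu: int) -> int:
    if milli_cpu == 0:
        # No request (BestEffort): fall back to the kernel minimum.
        return MIN_SHARES
    shares = (milli_cpu * SHARES_PER_CPU) // MILLI_CPU_PER_CPU
    return max(shares, MIN_SHARES)

print(milli_cpu_to_shares(0))     # BestEffort -> 2
print(milli_cpu_to_shares(500))   # 500m request -> 512 shares
print(milli_cpu_to_shares(2000))  # 2 full cores -> 2048 shares
```

Under CFS group scheduling, shares only matter under contention: a slice with 2 shares next to one with 2048 gets a proportionally tiny fraction of CPU when both are busy, which is why a BestEffort busy-loop shouldn't starve anyone else.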
Thaxll | 5 years ago
rsanders | 5 years ago
Prometheus scrapes of the kubelet have slowed down a bit, but are still under 400ms.
Note that this cluster (which is on EKS) does have system-reserved resources configured.
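For context, system-reserved carves capacity out of what the scheduler can hand to pods. A minimal sketch of that arithmetic, following the Node Allocatable formula from the Kubernetes docs (the example values are invented, not from this cluster):

```python
# Node Allocatable = Capacity - kube-reserved - system-reserved
#                    - hard eviction threshold.
# All values below are in CPU millicores and are illustrative only.
def allocatable(capacity_m: int,
                system_reserved_m: int,
                kube_reserved_m: int,
                eviction_m: int = 0) -> int:
    return capacity_m - system_reserved_m - kube_reserved_m - eviction_m

# e.g. a 4-core node (4000m) reserving 100m for system daemons
# and 100m for the kubelet/runtime:
print(allocatable(4000, 100, 100))  # 3800m left for pods
```

The point of the reservation is that system daemons (including the kubelet serving those Prometheus scrapes) keep a guaranteed share even when pods saturate the node.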