
Kubernetes Production Patterns and Anti-Patterns

343 points | twakefield | 8 years ago | github.com

40 comments

[+] atombender|8 years ago|reply
Good news about zombies: Kubernetes will soon solve this by having the pause container (which is automatically included in every pod) automatically reap children. [1]

Note that this change depends on the shared PID namespace support, which is a larger, still-ongoing endeavour [2].

[1] https://github.com/kubernetes/kubernetes/commit/81d27aa23969...

[2] https://github.com/kubernetes/kubernetes/issues/1615
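Not Kubernetes code, but a minimal sketch of what "reaping children" means at the process level (the `reap_children` helper is made up for illustration; assumes a Unix system):

```python
import os
import time

def reap_children():
    """Collect the exit status of any exited children, the way an init
    process does, so they don't linger as <defunct> zombies."""
    reaped = []
    while True:
        try:
            # -1 = any child; WNOHANG = return immediately if none exited
            pid, _status = os.waitpid(-1, os.WNOHANG)
        except ChildProcessError:
            break  # no children left at all
        if pid == 0:
            break  # children exist, but none have exited yet
        reaped.append(pid)
    return reaped

# Demo: fork a few children that exit immediately. Until reaped, each
# shows up as a zombie entry in the process table.
kids = []
for _ in range(3):
    pid = os.fork()
    if pid == 0:
        os._exit(0)  # child: exit right away, leaving a zombie behind
    kids.append(pid)

reaped = []
while len(reaped) < len(kids):
    reaped.extend(reap_children())
    time.sleep(0.05)

print(sorted(reaped) == sorted(kids))  # True: all zombies reaped
```

This is essentially the loop a real minimal init such as tini runs in response to SIGCHLD; the point of giving it PID 1 is that orphaned grandchildren get re-parented to it.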

[+] shykes|8 years ago|reply
The zombie reaping problem is fixed in Docker. Simply use `docker run --init` (the flag takes no argument) and it will spawn a tiny init that reaps child processes correctly. We don't enable the flag by default, to preserve backwards compatibility.
[+] lemoncucumber|8 years ago|reply
I believe shared pid namespaces are landing in kubernetes 1.7.0 and therefore should be available relatively soon, no?
[+] CurtMonash|8 years ago|reply
The zombie problem is solved by reaping children?

I guess Craster was right after all.

:)

[+] web007|8 years ago|reply
This is an excellent check-list of both kubernetes and docker gotchas to avoid.

Coming into the k8s ecosystem with very little container experience has been a steep learning curve, and simple, concrete suggestions like this go a LONG way to leveling it out.

[+] twakefield|8 years ago|reply
We've also published some other workshops for Docker and Kubernetes that we take customers through when onboarding (if needed): https://github.com/gravitational/workshop

Feel free to take them for a spin; feedback is welcome and appreciated.

[+] mino|8 years ago|reply
Thanks, great work!

I browsed it and immediately bookmarked to have a ready "here, read this first" answer :)

[+] outworlder|8 years ago|reply
I would like to have seen more "patterns" regarding configuration.

Right now, we have a bunch of microservices. Most of them talk to our shared infrastructure. We started with single configuration file, which has grown to monstrous proportions, and is mounted on every pod as a config map.

What would be the correct approach? Multiple configmaps with redundant information are just as bad, if not worse.

[+] treffer|8 years ago|reply
Start by using some form of service discovery. That way service names stay the same even as service implementations and locations move.

Then ship the static list of names (should be short) and per-service credentials (highly highly recommended).

Another pattern is co-locating a proxy with your app. See e.g. linkerd on how to do that. This will also unify the handling of circuit breakers and connection pools across services - even without any shared code!
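A sketch of what this could look like in manifests: one small ConfigMap plus one Secret per service, instead of a single shared monster ConfigMap. All names here are hypothetical:

```yaml
# Hypothetical per-service config: the ConfigMap carries the short static
# list of service names (resolved via cluster DNS), the Secret carries
# this service's own credentials only.
apiVersion: v1
kind: ConfigMap
metadata:
  name: billing-config
data:
  PAYMENTS_HOST: "payments.default.svc.cluster.local"
  LEDGER_HOST: "ledger.default.svc.cluster.local"
---
apiVersion: v1
kind: Secret
metadata:
  name: billing-credentials
type: Opaque
stringData:
  LEDGER_PASSWORD: "a-per-service-password-not-a-shared-one"
```

Each pod then pulls in only its own pair (e.g. via `envFrom` in the container spec), so no service sees another service's credentials.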

[+] bryanlarsen|8 years ago|reply
Have you tried out istio yet? It's a packaging of Lyft's Envoy that Google and IBM are putting together to handle your last two points (circuit breaking and rate limiting) and much more.
[+] thesandlord|8 years ago|reply
Word of caution: Istio isn't production ready yet, still a lot of missing stuff and bugs. Definitely worth playing with though, once it's ready I think it's going to make managing microservices much easier.
[+] old-gregg|8 years ago|reply
Some background on these workshops: we (Gravitational) help SaaS companies package their applications into Kubernetes, which makes them deployable into on-premise environments [1]. This in itself is an unexpected and quite awesome benefit of adopting Kubernetes in your organization: your stack becomes one-click portable.

[1] http://gravitational.com/telekube

[+] the_common_man|8 years ago|reply
Interesting. What SaaS products are available in kubernetes today?
[+] nunez|8 years ago|reply
What are everyone's thoughts on building containers for running one-time binaries? Like building a container to run jq or awk or something like that.

I've seen this pattern before and it didn't make me feel very good. It reeks of unnecessary complexity.

[+] majewsky|8 years ago|reply
Depends on the tool in question. jq and awk are so commonplace and so light on dependencies that it's indeed unnecessarily complex.

The benefit shows when a one-time tool is heavy on dependencies. For example, with OpenStack Swift (an S3-like object storage), a common one-time task is swift-ring-builder, which takes an inventory of available storage and creates a shared configuration file describing how data is distributed across the storage nodes. That's something you would run on a sysadmin's notebook, but it ships with Swift itself, so you would have to install a bunch of Python modules into a virtualenv.

In that case, it's probably easier to just use the Docker image for Swift that you have anyway, and run swift-ring-builder from there.

[+] m0rganic|8 years ago|reply
We use Kubernetes, Helm, and GitLab. Runtime configuration lives in each repo next to the code (values.yaml, dev.yaml, test.yaml, prod.yaml), and each environment hosts 40+ redundant services. It's working quite well but has required a pretty big upfront investment. I'm surprised there wasn't more discussion about monitoring; Prometheus and Grafana work well for that.
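A sketch of the layered values files described above (all names hypothetical; Helm merges later `-f` files over earlier ones, so the environment file holds only the overrides):

```yaml
# values.yaml -- defaults shared by every environment
replicaCount: 2
image:
  repository: registry.example.com/myapp
  tag: "1.0.0"
resources:
  requests:
    memory: "128Mi"
---
# prod.yaml -- production-only overrides layered on top
replicaCount: 6
resources:
  requests:
    memory: "512Mi"
```

Deploying an environment is then something like `helm upgrade --install myapp ./chart -f values.yaml -f prod.yaml`.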
[+] humanfromearth|8 years ago|reply
> Anti-Pattern: Direct Use Of Pods
>
> Kubernetes Pod is a building block that itself is not durable.

Kind of... but you can set `restartPolicy: Always` and it will always restart in case of failure.

[+] GauntletWizard|8 years ago|reply
No. If the node the pod is scheduled to goes away, the pod will not be restarted. `restartPolicy: Always` applies to the containers in the pod, not the pod itself. Deployments (or DaemonSets, Jobs, ReplicaSets, or ReplicationControllers) actively maintain the pods, ensuring that if a pod is deleted it gets replaced.
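A minimal Deployment sketch of this point (names and image are placeholders; the exact `apiVersion` varies by cluster version). The Deployment's controller keeps `replicas` pods running and reschedules them onto healthy nodes, which `restartPolicy` on a bare pod cannot do:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.13
      # restartPolicy (default Always) only restarts containers in place
      # on the same node; replacing lost pods is the Deployment's job.
```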
[+] outworlder|8 years ago|reply
Other posters have already commented on this but, if you are not using deployments (or at least, replication controllers), stop what you are doing right away and fix that. Otherwise you lose one of the biggest advantages of k8s.