top | item 18767478

davidopp__ | 7 years ago

A pattern we're seeing a lot of recently is one cluster per "stage" per region, where a "stage" is something like dev/test, canary, and prod. (In some cases only prod is replicated across multiple regions.) I think this may end up being the "sweet spot" for Kubernetes multi-tenancy architecture. The number of clusters isn't quite at the "Kubesprawl" level (I love that phrase and am absolutely going to steal it) -- you can still treat them as pets. But you get good isolation; you can limit access to the prod clusters to only the small set of folks (and perhaps the CD system) authorized to push code there, you can canary Kubernetes upgrades on the canary cluster(s), etc.
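Limiting prod access to a small set of people plus the CD system can be sketched with RBAC. This is a hypothetical illustration — the group name `prod-deployers`, the service account `cd-bot`, and the `ci` namespace are made up, and the built-in `edit` ClusterRole is just one reasonable choice:

```yaml
# Hypothetical sketch: restrict deploy rights on a prod cluster to a small
# group of humans plus the CD system. All names here are invented.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: prod-deployers
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: edit                    # built-in role: create/update most workload objects
subjects:
- kind: Group
  name: prod-deployers          # hypothetical group of authorized engineers
  apiGroup: rbac.authorization.k8s.io
- kind: ServiceAccount
  name: cd-bot                  # hypothetical CD system identity
  namespace: ci
```

With per-stage clusters, a binding like this would exist only on the prod cluster, while dev/test clusters could bind a broader group.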

As an aside, something that's useful when thinking about Kubernetes multi-tenancy is to understand the distinction between "control plane" multi-tenancy and "data plane" multi-tenancy. Data plane multi-tenancy is about making it safe to share a node (or network) among multiple untrusting users and/or workloads. Examples of existing features for data plane multi-tenancy are gVisor/Kata, PodSecurityPolicy, and NetworkPolicy. Control plane multi-tenancy is about making it safe to share the cluster control plane among multiple untrusting users and/or workloads. Examples of existing features for control plane multi-tenancy are RBAC, ResourceQuota (particularly quota on number of objects; quota on things like cpu and memory are arguably data plane), and the EventRateLimit admission controller.
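The control-plane/data-plane split shows up within a single ResourceQuota: object-count limits protect the shared apiserver and etcd (control plane), while cpu/memory limits cap what a tenant consumes on nodes (data plane). A minimal sketch, with arbitrary values and a hypothetical `team-a` namespace:

```yaml
# Illustrative sketch: one ResourceQuota mixing control-plane limits
# (object counts) with data-plane limits (cpu/memory). Values are arbitrary.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-a-quota
  namespace: team-a             # hypothetical tenant namespace
spec:
  hard:
    # Control-plane multi-tenancy: cap the number of API objects a tenant
    # can create, protecting the shared apiserver and etcd.
    count/configmaps: "100"
    count/secrets: "100"
    pods: "50"
    # Data-plane multi-tenancy (arguably): cap compute consumed on nodes.
    requests.cpu: "20"
    requests.memory: 64Gi
```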

There's active work in the Kubernetes community in both of these areas; if you'd like to participate (or lurk), please join the kubernetes-wg-multi-tenancy mailing list: http://groups.google.com/forum/#!forum/kubernetes-wg-multite...

Also, I gave a talk at KubeCon EU earlier this year that gives a rough overview of Kubernetes multi-tenancy, that might be of interest to some folks: https://kccnceu18.sched.com/event/Dqvb?iframe=no (links to the slides and YouTube video are near the bottom of the page)

Disclosure: I work at Google on Kubernetes and GKE.

georgebarnett | 7 years ago

Your experience mirrors what I've seen.

Many teams use clusters for stages because they work on underlying cluster components and need to ensure those components work together and that upgrade processes work (e.g. Terraform configs come to mind). There's no reason to separate accounts, because the cluster constructs aren't there for security.

Considering it more deeply (I haven't had to think about this for a while), I think multi-tenancy would cover almost all of the use cases I've seen, except for platform dev, where people use separate clusters for isolation when testing cluster config-as-code changes.

ownagefool | 7 years ago

I basically split the clusters into livedata, nolivedata, random untrusted code (ci), shared tooling.

The idea being that you have a process around getting your code to run on the livedata cluster, and we add progressively more stringent requirements for accessing each API.

This is for soft tenancy, and you want to write admission controllers to reject apps that haven't gone through the defined process.
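One way to wire that up is a validating admission webhook that rejects workloads lacking evidence of the defined process. A hedged sketch — the service name `release-check`, the `platform` namespace, and the overall policy are hypothetical, and the webhook server itself (which would check e.g. a label or annotation set by the release pipeline) is not shown:

```yaml
# Hypothetical sketch: register a validating webhook that vets Deployments
# before they are admitted. Names and the vetting policy are invented.
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: require-release-process
webhooks:
- name: release-check.example.com
  admissionReviewVersions: ["v1"]
  sideEffects: None
  rules:
  - apiGroups: ["apps"]
    apiVersions: ["v1"]
    operations: ["CREATE", "UPDATE"]
    resources: ["deployments"]
  clientConfig:
    service:
      name: release-check       # hypothetical in-cluster webhook service
      namespace: platform
      path: /validate
  failurePolicy: Fail           # block anything the webhook can't vet
```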

jacques_chester | 7 years ago

The distinction is very helpful and gets at something I was struggling to articulate.

Edit: looking more in the thread, you clearly know this much better than I do. I'd like to get the chance to talk and improve my understanding, if you ever find some spare time.