Why is Kubernetes so absurdly complicated? The notion that you need an 'ingress' pod AND an external load-balancer, just to be able to respond to internet traffic - and not even all internet traffic (only HTTPS) - is truly staggering.
Is this honestly the best ops solution we can come up with?
Setting up k8s reminds me of trying to configure Sendmail.
It's not Kubernetes that's complicated (although it's not exactly simple); the arcane, neolithic processes surrounding DNS and certificate management are the real issue, IMO. Both technologies were born in an era of static infrastructure, for use cases that no longer apply.
No one has come up with alternatives, so we're left to deal with them as best we can.
There's a lot of stuff in this thread. I've addressed some of them briefly by adding a Discussion section to the post. I'll go into a bit more depth here.
It is important to identify where the complexity is coming from. The post describes how to spin up various resources in a common IaaS provider. Does it appear complicated? Well, everything is relative. Is what is described in the post more or less complicated than purchasing physical servers, racking them, networking them, installing an operating system, configuring the servers, configuring the services running on those servers, setting up and configuring a firewall, configuring a network proxy, managing access to these components, etc.? We've not even gotten to anything Kubernetes-specific here. The complexity is there; we are really just talking about how and where it manifests itself. If you aren't managing a lot of services with distinct requirements, you can avoid complexity by using a more "concierge" application service like Heroku, Elastic Beanstalk, App Engine, or Firebase. The trade-off is less control over the environment and increased costs at scale, i.e., running one service is cheaper on these app services, running 100 will be more expensive.
If you decide you do need flexibility, or your scale is such that Kubernetes makes sense, then be aware that the complexity is not in Kubernetes' requirements for running an application. The complexity comes from the inherent complexities of running an application and the overhead of mapping common application requirements to the Kubernetes application model. Things like load balancers, TLS certificates, and DNS are part of the former and will be part of any solution. You must mentally map how you have traditionally managed applications to the Kubernetes concepts of deployments, pods, services, and ingresses. This can be challenging and confusing. Other runtime platforms, e.g., Docker Swarm, Cloud Foundry, and Mesos, have similar concepts and require similar mental effort. The reason Kubernetes has won the open-source platform competition is that its application model solves a large number of use cases, its abstractions provide an excellent balance between granularity, complexity, and flexibility, and it provides a platform upon which higher-order, i.e., simplified or "rolled-up", solutions can be created. Your relative familiarity and comfort with other runtime solutions should not cloud your judgement about which is best, and any consideration of "best" must consider more than just perceived "complexity".
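As a rough sketch of that mapping, a traditional "app processes behind a reverse proxy" setup translates to a Deployment (the processes) plus a Service (internal load balancing). The image name, labels, and ports below are placeholders, not anything from the post:

```yaml
# Hypothetical web app; image and ports are illustrative placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 2                        # "run two copies of the process"
  selector:
    matchLabels: {app: web}
  template:
    metadata:
      labels: {app: web}
    spec:
      containers:
      - name: web
        image: example.com/web:1.0   # placeholder image
        ports:
        - containerPort: 8080
---
# The Service plays the role of internal load balancing / service discovery.
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector: {app: web}
  ports:
  - port: 80
    targetPort: 8080
```

An Ingress would then map a hostname to this Service, much like a reverse-proxy virtual host.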
Last year I worked on deploying Docker Swarm clusters on-prem for a client (their request). I loved how simple it was and yet how much you can tailor it with additional services like Traefik, Prometheus, etc.
Months into the project we learned how they are obsoleting it to focus on kube. I was devastated. We had a simple yet effective way of deploying clusters, and now we would need entire teams of CKAs just to understand what is going on operationally.
I quit the project eventually, but I think they are just going back to VMs + ansible.
I see kube as providing a set of tools. These are pluggable built-in functionalities that you can use to make a functioning application. I think the problem is that it is common to configure these directly, often with underpowered languages like YAML.
I think they are a great set of tools. However, it seems like there should be more framework on top of them to make for an easy user experience. You can argue that this is what Google has done with Cloud Run: a simple "run an image exposed to the internet" interface built on top of the Kubernetes foundation.
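Cloud Run's interface is essentially the Knative Service schema, which shows how thin that layer can be. A minimal sketch (the service name is a placeholder; the image is Google's public hello-world sample), deployable with `gcloud run services replace service.yaml`:

```yaml
# Knative-style Cloud Run service: "run this image, expose it to the internet".
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello                          # placeholder service name
spec:
  template:
    spec:
      containers:
      - image: gcr.io/cloudrun/hello   # Google's public sample image
```

Scaling, TLS, and routing are all handled by the platform; the user never touches a Deployment, Service, or Ingress directly.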
It is complicated. This article doesn't explain things conceptually... it's more copy-and-paste, and very GCP-specific. It also doesn't explain how to use any of the stuff you set up, like cert-manager or the ingress. How about a sample app deployment?
Also, you don't "need" an ingress if you just want to expose a single service. A service can use a load balancer directly.
You don't need an "external" load balancer: you can use MetalLB.
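For example, a Service of `type: LoadBalancer` exposes a single service directly; on bare metal, MetalLB can satisfy the same request. The address range below is an assumption, and the ConfigMap shown is MetalLB's older configuration format (newer releases use CRDs):

```yaml
# Expose one service directly, no Ingress involved.
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  type: LoadBalancer    # cloud provider or MetalLB assigns an external IP
  selector:
    app: web
  ports:
  - port: 443
    targetPort: 8443
---
# MetalLB layer-2 address pool (older ConfigMap-based configuration).
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.1.240-192.168.1.250   # example range on the local LAN
```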
> The notion that you need an 'ingress' pod AND an external load-balancer, just to be able to respond to internet traffic - and not even all internet traffic (only HTTPS) - is truly staggering.
If you are on a cloud provider, the load balancer abstraction is handled for you by an AWS or Google Compute load balancer, so it's really a non-factor.
I own and run a DevOps consulting company[1], and clients of course want to deploy their software on-prem, which becomes significantly harder and more time-consuming. Personally, I use MetalLB[2] by the folks at Google, which creates a LoadBalancer abstraction, and then use Traefik[3] in front of all deployments (containers). Traefik handles creating the TLS certificates using Let's Encrypt, via either an HTTP or a DNS challenge. The benefit here is that only a single LoadBalancer is created, pointing to Traefik; Traefik then terminates HTTPS and routes to all the deployments (containers).
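The Let's Encrypt part of that setup is a few lines of Traefik static configuration. This is a v2-style sketch; the resolver name, email, and storage path are placeholders, not the commenter's actual config:

```yaml
# Traefik v2 static configuration (file provider); values are illustrative.
entryPoints:
  web:
    address: ":80"
  websecure:
    address: ":443"
certificatesResolvers:
  letsencrypt:
    acme:
      email: ops@example.com       # placeholder contact address
      storage: /data/acme.json     # where issued certificates are persisted
      httpChallenge:
        entryPoint: web            # HTTP-01 challenge; a dnsChallenge is also supported
```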
Thanks for the suggestion. I'll let the author of the post know. I think David was looking to update some documentation before doing any more distribution.
Question for those that use GCP/AWS and cert-manager: why not use the managed certificate services that both now offer? I found cert-manager to be the biggest pain point in my cluster (albeit this was 18 months ago).
I've added some information on why we made the choices we did in the Discussion section. Briefly, the GCP managed certs are only available with the native GKE load balancer solution. For cost reasons, we use nginx-ingress rather than the native solution.
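For reference, pairing cert-manager with nginx-ingress mostly comes down to an ACME issuer whose HTTP-01 solver routes challenges through the nginx ingress class. The email is a placeholder, and the `apiVersion` may differ on older cert-manager releases:

```yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: admin@example.com     # placeholder contact address
    privateKeySecretRef:
      name: letsencrypt-prod     # Secret holding the ACME account key
    solvers:
    - http01:
        ingress:
          class: nginx           # serve challenges via nginx-ingress
```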
rpercy|5 years ago
But the abstractions are still relatively low level. The Ingress API in particular is a known issue and currently being simplified.
[1] https://elasticbyte.net [2] https://metallb.universe.tf/ [3] https://containo.us/traefik/
cridenour|5 years ago
Or at least document how to tear them down if someone is just following along to experiment.