
“Let’s use Kubernetes.” Now you have eight problems

719 points | signa11 | 6 years ago | pythonspeed.com

469 comments

[+] atonse|6 years ago|reply
The odd thing about having 20 years of experience (while simultaneously being wide-eyed about new tech), is that I now have enough confidence to read interesting posts (like any post on k8s) and not think "I HAVE to be doing this" – and rather think "good to know when I do need it."

Even for the highest scale app I've worked on (which was something like 20 requests per second, not silicon valley insane but more than average), we got by perfectly fine with 3 web servers behind a load balancer, hooked up to a hot-failover RDS instance. And we had 100% uptime in 3 years.

I feel things like Packer (allowing for deterministic construction of your server) and Terraform are a lot more necessary at any scale for generally good hygiene and disaster recovery.

[+] hinkley|6 years ago|reply
I have, at various times in my career, tried to convince others that there is an awful, awful lot of stuff you can get done with a few copies of nginx.

The first “service mesh” I ever did was just nginx as a forward proxy on dev boxes, so we could reroute a few endpoints to new code for debugging purposes. And the first time I ever heard of Consul was in the context of automatically updating nginx upstreams for servers coming and going.

There is someone at work trying to finish up a large raft of work, and if I hadn’t had my wires crossed about a certain feature set being in nginx versus nginx Plus, I probably would have stopped the whole thing and suggested we just use nginx for it.

I think I have said this at work a few times but might have here as well: if nginx or haproxy could natively talk to Consul for upstream data, I’m not sure how much of this other stuff would have ever been necessary. And I kind of feel like Hashicorp missed a big opportunity there. Their DNS solution, while interesting, doesn’t compose well with other things, like putting a cache between your web server and the services.
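(For what it's worth, Hashicorp's consul-template gets close to this: it watches Consul and re-renders an nginx config as instances come and go, then reloads nginx. A rough sketch, with a hypothetical service name "web":)

```nginx
# consul-template source file: rendered to nginx.conf and followed by
# `nginx -s reload` whenever membership of the "web" service changes.
upstream web_backend {
{{- range service "web" }}
  server {{ .Address }}:{{ .Port }};
{{- end }}
}

server {
  listen 80;
  location / {
    proxy_pass http://web_backend;
  }
}
```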

I think we tried to use that DNS solution a while back and found that the DNS lookups were adding a few milliseconds to each call. Which might not sound like much except we have some endpoints that average 10ms. And with fanout, those milliseconds start to pile up.
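(The arithmetic is easy to sketch. A toy model of how a fixed per-call lookup cost compounds with sequential fanout; the numbers are illustrative, not the parent's measurements:)

```python
def request_latency_ms(base_ms, dns_ms, sequential_calls):
    """Latency of an endpoint that makes `sequential_calls` downstream
    calls in series, each paying a per-call DNS lookup on top of its
    base service time."""
    return sequential_calls * (base_ms + dns_ms)

# A 10 ms service behind a chain of 4 sequential calls:
print(request_latency_ms(10, 0, 4))  # 40 -- no DNS overhead
print(request_latency_ms(10, 3, 4))  # 52 -- 3 ms of DNS per hop adds 30%
```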

[+] lazyier|6 years ago|reply
Kubernetes has complexity for a reason. It's trying to solve complex problems in a standardized and mature manner.

If you don't need those problems solved then it's not going to benefit you a whole lot.

Of course, if you are using docker already and are following best practices with containers, then converting to Kubernetes really isn't that hard. So if you do end up needing more problems solved than you are willing to tackle on your own, then switching over is going to be on the table.

The way I think about it is: if you are struggling to deploy and manage the life cycle of your applications... failovers, rolling updates... and you think you need some sort of session management like supervisord or something like it to manage a cloud of processes, and you find yourself trying to install and manage applications and services developed by third parties...

Then probably looking at Kubernetes is a good idea. Let K8s be your session manager, etc.

[+] sciyoshi|6 years ago|reply
There's always more than one way to do things, and it's good to be aware of the trade-offs that different solutions provide. I've worked with systems like you describe in the past, and in my experience you always end up needing more complexity than you might think. First you need to learn Packer, or Terraform, or Salt, or Ansible - how do you pick one? How do you track changes to server configurations and manage server access? How do you do a rolling deploy of a new version - custom SSH scripts, or Fabric/Capistrano, or...? What about rolling back, or doing canary deployments, or...? How do you ensure that dev and CI environments are similar to production so that you don't run into errors from missing/incompatible C dependencies when you deploy? And so on.

K8s for us provides a nice, well-documented abstraction over these problems. For sure, there was definitely a learning curve and non-trivial setup time. Could we have done everything without it? Perhaps. But it has had its benefits - for example, being able to spin up new isolated testing environments within a few minutes with just a few lines of code.
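(One common shape for those isolated testing environments is a throwaway namespace; a sketch, where the namespace and manifest directory names are hypothetical:)

```shell
# Create a disposable environment for a branch, deploy the same
# manifests into it, and delete the namespace to tear it all down.
kubectl create namespace preview-my-branch
kubectl apply --namespace preview-my-branch -f k8s/
# ...run tests against it...
kubectl delete namespace preview-my-branch
```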

[+] closeparen|6 years ago|reply
Yes. Three web servers and a load balancer is fine. Three web servers and a load balancer, repeated 1,000 times across the enterprise in idiosyncratic ways and from scratch each time, is less fine. That’s where Kubernetes-shaped solutions (like Mesos that came before it) become appropriate.

You can get a lot done with a sailboat. For certain kinds of problems you might genuinely need an aircraft carrier. But then you’d better have a navy. Don’t just wander onto the bridge and start pressing buttons.

[+] moksly|6 years ago|reply
I think anyone considering these wild setups should read about how Stack Overflow is hosted on a couple of IIS servers. It’s a sobering reminder that you often don’t need the new cool.
[+] FreeHugs|6 years ago|reply
A typical PHP application that does a bit of database updating per request, gets some new data from the DB, and templates it should handle 20 requests per second on a single $20/month VM. And in my experience from the last few years, VMs have uptime >99.99% these days.

What made you settle on a multi-machine setup instead? Was it to reach higher uptime or were you processing very heavy computations per request?

[+] fxtentacle|6 years ago|reply
The odd thing about having 10 years of experience as a consultant is that you know when to write "Kubernetes" into a project proposal, even though everyone agrees that it'll be a sub-optimal solution.

But both you and their tech lead want to be able to write "used Kubernetes" on your CV in the future, plus future-oriented initiatives inside your contact's company tend to get more budget allocated to them. So it's a sound decision for everyone and for the health of the project to just go with whatever tech is fancy enough, but won't get in the way too badly.

Enter Kubernetes, the fashionable Docker upgrade that you won't regret too badly ;)

[+] bitL|6 years ago|reply
I worked on a transacted 20-60k messages/s system and am not sure K8S wouldn't be a hindrance there... Imagine writing Kafka using K8S and microservices.
[+] cmhnn|6 years ago|reply
I don't know about "lot more necessary". The images are one part of the equation, especially to meet various regulations. There is a ton to running a large-scale service, especially if you run the service that the people posting about how wicked smart they are at k8s have their own services running on. Google found that out yesterday when they said "oh hey, people expect support, maybe we should charge". That is not new for grown-ups.

The cloud existed before k8 and k8's creator has a far less mature cloud than AWS or Azure.

But this thread has convinced me of one thing. It's time to re-cloak and never post again because even though the community is a cut above some others at the end of the day it's still a bunch of marks and if you know the inside it is hard to bite your lip.

[+] JMTQp8lwXL|6 years ago|reply
Who is giving you 100% uptime? All major providers (AWS, GCP, Azure, etc.) have had outages in the past 3 years. And that level of infrastructural failure doesn't care whether or not you're using k8s.
[+] echelon|6 years ago|reply
> Even for the highest scale app I've worked on (which was something like 20 requests per second,

Kubernetes is not for you. 5k QPS times a hundred or more services, and Kubernetes fits the bill.

> And we had 100% uptime in 3 years.

Not a single request failed in that time serving at 20 QPS? I'm a little suspicious.

Regardless, if you were handling 10 or 100 times this volume to a single service, you'd want additional systems in place to assure hitless deploys.

[+] wpietri|6 years ago|reply
Same. I like trying out new things, so I have a feel for what they're good for. I tried setting up Kubernetes for my home services and pretty quickly got to "nope!" As the article says, it surely makes sense at Google's scale. But it has a large cognitive and operational burden. Too large, I'd say, for most one-team projects.
[+] Damogran6|6 years ago|reply
I'm in a similar boat, only my eyes are wide, glazed over, and I'm lost in the word salad...which only seems to be getting worse.
[+] jorams|6 years ago|reply
These kinds of posts always focus on the complexity of running k8s, the large amount of concepts it has, the lack of a need to scale, and that there is a "wide variety of tools" that can replace it, but the advice never seems to become more concrete.

We are running a relatively small system on k8s. The cluster contains just a few nodes, a couple of which are serving web traffic and a variable number of others that are running background workers. The number of background workers is scaled up based on the amount of work to be done, then scaled down once no longer necessary. Some cronjobs trigger every once in a while.

It runs on GKE.

All of this could run on anything that runs containers, and the scaling could probably be replaced by a single beefy server. In fact, we can run all of this on a single developer machine if there is no load.

The following k8s concepts are currently visible to us developers: Pod, Deployment, Job, CronJob, Service, Ingress, ConfigMap, Secret. The hardest one to understand is Ingress, because it is mapped to a GCE load balancer. All the rest is predictable and easy to grasp. I know k8s is a monster to run, but none of us have to deal with that part at all.
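(For readers who haven't met these objects, a minimal Deployment-plus-Service pair looks roughly like this; the names and image are hypothetical:)

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 2
  selector:
    matchLabels: {app: web}
  template:
    metadata:
      labels: {app: web}
    spec:
      containers:
        - name: web
          image: example/web:1.0   # hypothetical image
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector: {app: web}
  ports:
    - port: 80
      targetPort: 8080
```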

Running on GKE gives us the following things, in addition to just running it all, without any effort on our part: centralized logging, centralized monitoring with alerts, rolling deployments with easy rollbacks, automatic VM scaling, automatic VM upgrades.

How would we replace GKE in this equation? What would we have to give up? What new tools and concepts would we need to learn? How many of those would be vendor-specific?

If anyone has a solution that is actually simpler and just as easy to set up, I'm very much interested.

[+] sho|6 years ago|reply
This, and the other articles like it, should be required reading on any "how to startup" list. I personally know startups for whom I believe drinking the k8s/golang/microservices kool-aid has cost them 6-12 months of launch delay and hundreds of thousands of dollars in wasted engineering/devops time. For request loads one hundredth of what I was handling effortlessly with a monolithic Rails server in 2013.

It is the job of the CTO to steer excitable juniors away from the new hotness, and what might look best on their resumes, towards what is tried, true, and ultimately best for the business. k8s on day one at a startup is like a mom and pop grocery store buying SAP. It wouldn't be acceptable in any other industry, and can be a death sentence.

[+] rossdavidh|6 years ago|reply
Having been at a company that was starting to move things to Kubernetes, when it had absolutely no reason to, I can say that it was being done because: 1) the developers wanted to be able to say they knew how to use Kubernetes when they applied for their next job (perhaps at a company big enough to need it); 2) the managers didn't really understand much about what it was, so they couldn't evaluate whether it was necessary; but 3) some of the managers wanted to say they had managed teams that used Kubernetes, for the same reason as the developers.

Which is not to say that it should never be used. But we have a recurring pattern of really, really large companies (like FAANG) developing technologies that make sense for them, and then it gets used at lots of other companies that will never, ever be big enough to have it pay off. On the other hand, they now need 2-3x the developers they used to, because they have too many things going on, mostly related to solving scale problems they'll never have.

Don't use a semi-tractor trailer to get your groceries. Admit it when you're not a shipping company. For most of us, the compact car is a better idea.

[+] flowerlad|6 years ago|reply
I am a solo developer (full stack, but primarily frontend), and Kubernetes has been a game changer for me. I could never run a scalable service on the cloud without Kubernetes. The alternative to Kubernetes is learning proprietary technologies like "Elastic Beanstalk" and "Azure App Service" and so on. No thank you. Kubernetes is very well designed, a pleasure to learn and a breeze to use. This article seems to be about setting up your own Kubernetes cluster. That may be hard; I don't know; I use Google Kubernetes Engine.

For others considering Kubernetes: go for it. Sometimes you learn a technology because your job requires it, sometimes you learn a technology because it is so well designed and awesome. Kubernetes was the latter for me, although it may also be the former for many people.

The first step is to learn Docker. Docker is useful in and of itself, whether you use Kubernetes or not. Once you learn Docker you can take advantage of things like deploying an app as a Docker image to Azure, on-demand Azure Container Instances and so on. Once you know Docker you will realize that all other ways of deploying applications are outmoded.

Once you know Docker it is but a small step to learn Kubernetes. If you have microservices then you need a way for services to discover each other. Kubernetes lets you use DNS to find other services. Learn about Kubernetes' Pods (one or more Containers that must reside on the same machine to work), ReplicaSets (run multiple copies of a Pod), Services (exposes a microservice internally using DNS), Deployments (lets you reliably roll out new software versions without downtime, and restarts pods if they die) and Ingress (HTTP load balancing). You may also need to learn PersistentVolumes and StatefulSets.

The awesome parts of Kubernetes include the kubectl exec command, which lets you log into any container with almost no setup or password; kubectl logs to view stdout from your process; kubectl cp to copy files in and out; kubectl port-forward to make remote services appear to be running on your dev box; and so on.
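(In practice those commands look like this; the pod and resource names are hypothetical, and all of them assume a running cluster and a configured kubeconfig:)

```shell
kubectl exec -it web-7d4b9-abcde -- /bin/sh      # shell into a container
kubectl logs -f deployment/web                   # tail stdout of a deployment
kubectl cp web-7d4b9-abcde:/tmp/profile.out .    # copy a file out of a pod
kubectl port-forward service/web 8080:80         # expose a remote service locally
```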

[+] mschaef|6 years ago|reply
> Once you know Docker you will realize that all other ways of deploying applications are outmoded.

This is a strong and absolute statement to be making in a field as broad and diverse as software engineering. My experience from being on both sides of these statements is that they're often wrong, or at least short-sighted.

In this case, while I get the packaging benefits of Docker, there are other ways to package applications that don't require as much extra software/virtualization/training. So the question isn't as much about whether Docker/K8S/etc. provides useful benefits as whether or not those benefits are worth the associated costs. Nothing is free, after all, and particularly for small to moderate sized systems, the answer is often that the costs are too high. (And with hardware as good as it is these days, small-to-moderate is an awful lot of capacity.)

I've personally gotten a lot of value out of packaging things up into an uber jar, setting up a standard install process/script, and then using the usual unix tooling (and init.d) to manage and run the thing. I guess that sounds super old fashioned, but the approach has been around a long time, is widely understood, and known to work in many, many, many worthwhile circumstances.
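(As a sketch of that style of deployment: the comment mentions init.d, but on a systemd distro the same approach is a small unit file, roughly like the following; the paths and names are hypothetical:)

```ini
[Unit]
Description=myapp (uber jar)
After=network.target

[Service]
User=myapp
ExecStart=/usr/bin/java -jar /opt/myapp/myapp.jar
Restart=on-failure

[Install]
WantedBy=multi-user.target
```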

[+] l33tman|6 years ago|reply
Everybody feels confident in the stack they have spent time using. You like Kubernetes because you took the time to learn it; someone else will find Elastic Beanstalk or AWS ECS equally easy to set up and scale. It's not that Docker is the only way to deploy an application either; there are virtues to learning the serverless deployment modes on the various clouds as well. For many of the "proprietary lock-ins" you run into, you often get something back.

I do agree on the point that Kubernetes and Docker are nice, of course :)

[+] stickfigure|6 years ago|reply
...or you could deploy your app on Google App Engine or Heroku and spend all your time developing features your customers care about.
[+] chrismarlow9|6 years ago|reply
I'm skeptical that the service is any more scalable than it would be with regular instances and multi-AZ, mainly because in my experience scalability has way more to do with network topology and the architecture of how requests flow than with the tech used for implementation.
[+] shusson|6 years ago|reply
> I could never run a scalable service on the cloud without Kubernetes

Can you give us an indication of the scale of your app? E.g. rpm (requests per minute).

[+] adamcharnock|6 years ago|reply
This is my experience too. I've used smaller-scale tools (such as docker-compose, Dokku, Heroku etc) but I've found them to be a mixture of unreliable or unsuitable in the case of fairly modest complexity.

Eventually I turned to Kubernetes to see how it compared. I spent a day-ish reading through the 'core concepts' in the docs, which was plenty to get me started on GKE. It took me a week or two to migrate our workloads over, and once everything stabilised it has been pretty much fire-and-forget.

I have about twenty pieces of software deployed for my current client and I feel that I can trust Kubernetes to just get on with making everything run.

I've since deployed clusters manually (i.e. bare metal), but I certainly wouldn't recommend it for anyone starting out. Personally I'm keeping a close eye on k3s.

I think my main learning during this process – at least for my situation – was to run any critical stateful services outside of Kubernetes (Postgres, message queues, etc). I think this applies less now than it did when I started out (v1.4), but nonetheless it is a choice that is still serving me well.

[+] pdr2020|6 years ago|reply
"I could never run a scalable service on the cloud without Kubernetes."

But also

"The alternative to Kubernetes is learning proprietary technologies like "Elastic Beanstalk" and "Azure App Service" and so on. No thank you"

So can we clarify that you truly meant: "I decided not to run a scalable service in the cloud using any of the existing cloud tools that do and have supported that scenario for years. And decided to use k8s instead" :)

[+] Florin_Andrei|6 years ago|reply
> I could never run a scalable service on the cloud without Kubernetes.

I find this statement quite bizarre.

[+] rudolph9|6 years ago|reply
I’m in a similar situation, and Kubernetes is honestly pretty easy to use once you get it. If your team is small, use a managed Kubernetes like GKE or EKS.

It’s worth noting that Kubernetes uses containers, which can be created via Docker but are not dependent on Docker.

[+] FreeHugs|6 years ago|reply

    If you have microservices then you need
    a way for services to discover each other
Why not run them in docker containers with fixed IPs?
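(That is possible on a user-defined bridge network, though Docker's built-in DNS mostly makes it unnecessary; a sketch, with hypothetical image names:)

```shell
docker network create --subnet 10.0.0.0/24 appnet
docker run -d --name users  --network appnet --ip 10.0.0.10 example/users
docker run -d --name orders --network appnet --ip 10.0.0.11 example/orders
# On a user-defined network, containers can also reach each other by
# name (e.g. "http://orders/"), so the fixed IPs are optional.
```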
[+] MattSayar|6 years ago|reply
Do you have any resources you'd recommend to learn Docker?
[+] doctorbuttes|6 years ago|reply
I'm also enjoying Kubernetes. I started a hobby project on GKE just to learn, but now the project has 8,000 MAU or so and will be scaling up more in the near future. K8s is totally overkill, but I've had a good time and it's worked well so far.
[+] jblake|6 years ago|reply
I run a SaaS business solo, for eight years now, netting six figures, and I've been on Heroku the entire time for just under $1,000 a month. Monolithic Rails app on a single database, 300 tables.

Sometimes I feel teased by 'moving to EC2' or another hot topic to save a few bucks, but the reality is I've spent at most 2 hours a month doing `heroku pg:upgrade` for maintenance once a year, and `git push production master` for deploys and I'd like to keep it that way. I just hope Heroku doesn't get complacent as they are showing signs of aging. They need a dyno refresh, http/2, and wildcard SSL out of the box. I honestly have no idea what the equivalent EC2/RDS costs are and I'm not sure I want to know.

[+] BerislavLopac|6 years ago|reply
Software engineering is the perfect example of the "blind men and the elephant" problem. It is a very complex field, with a number of related but distinct disciplines and activities required to make it work; it's impossible to be an expert in everything, so we tend to specialise: we have back-end engineers, front-end engineers, data engineers, SRE experts, devops specialists, database experts, data scientists and so on. Additionally, the software we are building varies wildly in terms of complexity, dependencies, external requirements etc; and finally, the scale of that software and the teams building it can vary from one person to literally thousands.

Articles like this one, and even more comments on HN and similar sites, generally suffer from a perspective bias, with people overestimating the frequency of their own particular circumstances and declaring something outside of their needs as "niche" and generally misguided and "overhyped".

The reality is that various technologies and patterns -- microservices, monoliths, Kubernetes, Heroku, AWS, whatever -- are tools that enable us to solve certain problems in software development. And different teams have different problems and need different solutions, and each needs to carefully weigh their options and adopt the solutions that work the best for them. Yes, choosing the wrong solutions can be expensive and might take a long time to fix, but that can happen to everyone and actually shows how important it is to understand what is actually needed. And it's completely pointless to berate someone for their choices unless you have a very detailed insight into their particular needs.

[+] SirensOfTitan|6 years ago|reply
I disagree with the HN consensus here: I think managed kubernetes is really useful for startups and small teams. I also commonly hear folks recommending that I use docker-compose or nomad or something: I don't want to manage a cluster, I want my cloud to do that.

We run a fairly simple monolith-y app inside kubernetes: no databases, no cache, no state: 2 deployments (db-based async jobs and webserver), an ingress (nginx), a load balancer, and several cron jobs. Every line of infrastructure is checked into our repo and code reviewed.

With k8s we get a lot for free: 0 downtime deployments, easy real time logging, easy integration with active-directory for RBAC, easy rollbacks.

[+] _xnmw|6 years ago|reply
Overengineering is a real problem out there. I’ve seen k8s deployed for internal back office apps that have literally 5 users - a raspberry pi could’ve hosted it. Keeping things simple and reliable is often a harder skill to learn than $BIGCO_TECH, and often confounded by political incentives.
[+] WnZ39p0Dgydaz1|6 years ago|reply
I'm getting tired of these "you don't need k8 posts". Sure, if you have a simple web application with a REST API, don't use k8, unless it's for learning purposes. But nobody does that anyway.

If you have something more complex with many moving parts that are separate services, k8 is a great option. I've been using it in production for close to 2 years now - not a single service downtime, great fault-tolerance, and absolutely zero management effort. Deploying complex applications, databases, and monitoring systems is easier than ever before. I don't think using k8 is overly complex. Yes, you need to invest some time to learn it, but that's the case for every new technology.

[+] sethammons|6 years ago|reply
We used to manually ssh to deploy to our dozens of nodes, with just a handful of developers: git pull, restart service.

Then we got to hundreds of nodes. Chef, chef, and more chef. Deploys were typically run with a chef-client run via chef ssh (well, a wrapper around that for retries). With dozens of services and many dozens of engineers, this worked well enough.

Then we got to thousands of nodes. And hundreds of developers working on a multitude of services.

We've adopted k8s. It has been a lot of work, but the deploy story is wonderful. We make a PR and between BuildKite and ArgoCD we can manage canary nodes, full roll outs, roll backs, etc. We can make config changes or code changes easily, monitor the roll out easily, and revert anytime. I still don't _like_ k8s mind you - I don't think programming with templates and yaml is a good thing. But I've come to terms with that being the best we will have for now.

[+] pnathan|6 years ago|reply
Kubernetes solves very real problems in a way that handles a full suite of them.

This is very complex because the problem set is complex.

If you're running a substantially smaller system, k8s makes less sense.

That said, if you're familiar with running and monitoring k8s, a GKE deploy will solve a lot of the pain a traditional LB + EC2 ASG setup will incur out of the gate. Let me explain:

Notionally, we need 4 basic services operationally for a single typical service deployment. 1 of FooService, 1 load balancer, 1 database, 1 monitoring/logging system. All of these should tolerate node death; this means roughly 3 pieces of hardware for this notional system. This is complexity that k8s covers, at a high cost of knowledge. If you're bought into AWS, the Beanstalk system will do this decently well, last I checked.

I think there is room for a k8s-like tool that is good for teams with < 10 services, and less than 10 engineers. Even k3s (https://rancher.com/docs/k3s/latest/en/) has substantial complexity at the networking layer that, I think, can be stripped for the "Small Team".

So I agree with the author in theory that k8s is overkill. But also other infra types can start getting difficult to deal with in time, and "just deploy onto a single big box" doesn't cover the operational needs.

[+] partiallypro|6 years ago|reply
Probably unpopular, but I am generally opposed to using Docker/Kubernetes for ~75%+ of projects. I've been in arguments over this, but containers going unmaintained and the complexity of Kubernetes can cause major issues. It's over-engineering for smaller projects. That's just my opinion. I think a flat VM is more appropriate most of the time. But there is no denying the advantages of Docker when it's done right and used right.

A developer told me just a few weeks ago that you should "always" use Docker, which I just found to be so ridiculous.

[+] thiago_fm|6 years ago|reply
It's not that hard to use Kubernetes, and it makes the developer's life easy. It's very easy to deploy Helm charts, and even though there are many gotchas and complex things, if you want to deploy something simple, it is easy and completely doable even solo.

(rant)

After over 10 years in development I've done and used literally all the things people complain about here a lot: virtual machines, single-page apps, Docker, microservices, FP, and the list goes on. Even though I've struggled, I feel very lucky to have been able to try all those things, it's been a joy to use them, and I've shipped shitloads of great code that is making a lot of money for a lot of people and improving businesses in general.

I don't mean you need to use K8S or even like it, but there are definitely developers who know their shit very well, and can also make great single-page apps using more than 3 different JS frameworks, also write good backend code, and so on. And they enjoy all of this and make companies genuinely successful. It sickens me a bit how so many posts of this kind get a lot of attention when they could be replaced by "yes, software, like everything in life, is complex!!!11". I think the article itself is too shallow to actually touch the difficulties there are with using Kubernetes and is mostly useless information. There are at least 10 posts with better and more structured criticism, but because it's cool to complain about new things, it automatically gets traction on HN (which used to be a place where people like new things...).

So... yes, you shouldn't use K8S everywhere (this also applies to everything...), but it is the new thing (well, not really new...). Should we just talk about Apache mod_php? It's natural that people want to try new stuff and actually enjoy working with software. Not everybody sees everything as problems. "Now you have eight problems, hehehehehe!!11".

Am I the only one that found this post completely useless and at some degree, toxic?

(/rant)

[+] supermatt|6 years ago|reply
I see the author is a proponent of docker-compose, which I use myself for small projects. I have a docker-compose configuration in all my repos, and a `docker-compose up` brings the app up on my laptop. I could use minikube in almost exactly the same way. i.e. there is effectively no difference from a development perspective.

If you are managing kubernetes yourself, on your own hardware, the moving parts can indeed be a burden for a small team - but all of these pain points go away with a managed kubernetes, as offered by most IaaS providers. i.e. if you are using an IaaS provider, there is (usually) no difference from a production perspective.

There are fewer moving parts in docker-compose, and it's easier to run on a single VM - but it doesn't offer any of the dynamic features of Kubernetes that you would want at scale. The same containers can run on both.

If you need to dynamically scale your application, or grow beyond a single machine (I disagree with the vertical scaling proposed by the author - that's for a very specific use-case IMHO), then docker-compose is simply no good. Then you need to use docker-swarm. At this point, you either need to manage a docker-swarm cluster or a kubernetes one. Kubernetes is the obvious choice here. Fortunately, there is a trivial migration path from docker-compose to kubernetes.
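(To make the comparison concrete, a minimal docker-compose file might look like this, with hypothetical service and image names; the same container images run unchanged under Kubernetes, and a tool like Kompose can translate the file into manifests as a starting point:)

```yaml
version: "3.8"
services:
  web:
    image: example/web:1.0
    ports:
      - "8080:8080"
    depends_on:
      - redis
  redis:
    image: redis:6-alpine
```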

[+] Hippocrates|6 years ago|reply
There’s a lot of configuration to understand with k8s and even GKE. Badly configured probes, resource budgets, pod disruption budgets, node affinities etc. can have disastrous effects. I’m pushing my teams more towards serverless since it takes out nearly all ops/scaling/rollout complexities. Right now we’re seeing our serverless apps on GCF, GAE and cloud run outperform our GKE apps easily in scaling, reliability, and simplicity (configuration and time spent getting it deployed in a satisfactory manner)
[+] hypewatch|6 years ago|reply
It’s interesting that this critique of kubernetes is on a blog called “python speed” because my most recent project with kubernetes was deploying a large dask cluster. For this use case k8s was really valuable. It made the devops part so much easier than it otherwise would have been, so we could put most of our time into application logic. In other words, when we wanted to achieve substantial “python speed” kubernetes was very helpful. For data engineering projects, even with a small number of data engineers, it can be a big productivity booster.

Personally, I like kubernetes and find it easier to use than other devops tool sets, so it’s become my go-to tool. Probably wouldn’t recommend it to someone who doesn’t know it and has a simple app architecture.

[+] FlyingSnake|6 years ago|reply
I've taken over a project containing 6 DB entities. Instead of building a monolith (or normal REST API), the Architects used 7 µServices based on k8s and NoSQL DB. Now simple development tasks take extra time, and anything that affects multiple µServices needs n times the development efforts. I wish they had started with a simple monolith, and refactored to µServices if needed.
[+] michaldudek|6 years ago|reply
I’m a very happy user of Rancher 1.6 for years. Simple, nice GUI, got everything I need, works fast, can deploy as many apps /services as you wish, no new concepts to learn (if you know Docker that is).

Used it in my previous agency to manage clients' websites, and use it now in my startup to manage multiple envs with a few apps (API, front end, workers) and nice and easy deployments via GitLab CI.

[+] pm90|6 years ago|reply
Heh, it’s quite amusing to see the posts here arguing that “you can do the same thing with multi az deployments on aws with VMs, packer and ebs. Kubernetes needs you to learn so much shit” ... do you even read what you write?

Kubernetes is not gospel. It’s an opinionated, incomplete framework for orchestrating container workloads. There are other ways to do the same thing which are fine too. It works well for the most part but has disgusting failure scenarios. So do other techs.

People who use and like kubernetes are comfortable with its trade offs and portability. You may not be. It’s fine.

Shitting on kubernetes just because you’re comfortable with another technology just because you can: that’s not fine.