
Kubernetes is a red flag signalling premature optimisation

542 points | tenfourty | 3 years ago | jeremybrown.tech

550 comments

[+] pritambarhate|3 years ago|reply
Doesn't look like the author knows what he is talking about. His point that an early-stage startup should not use K8s is fine. But the next piece of advice, about not using a different language for frontend and backend, is wrong. I think the most appropriate advice is to choose a stack the founding team is most familiar with. If that means RoR, then RoR is fine. If it means PHP, then PHP is fine too. Another option is to use the technology best suited for the product you are trying to build. For example, if you are building a managed cloud service, then building on top of K8s, Firecracker, or Nomad can be a good choice. But then it means you need to learn the tech being used inside out.

Also, he talks about all of this and then gives the example of WhatsApp at the end. WhatsApp chose Erlang for the backend, and their frontends were written in Java and Objective-C. They could have chosen Java for the backend to keep the frontend language the same, but they didn't. They used Erlang because they based their architecture on ejabberd, which was open source and built with Erlang. WhatsApp also managed all their servers themselves and didn't even move to managed cloud services when those became available. They were self-hosting until FB acquired them and moved them to FB data centres later on (Source: http://highscalability.com/blog/2014/2/26/the-whatsapp-archi...).

[+] okamiueru|3 years ago|reply
I don't think their advice about not using it in a startup is correct either. You just need to somewhat know what you're doing.

I know of such a case, where a single engineer could leverage the open-source Helm chart community and set up a scalable infrastructure: Prometheus, Grafana, worker nodes that scale independently of the web service, a CI/CD pipeline that can spin up complete stacks with TLS automated through nginx and cert-manager, full integration tests, etc.

I found that to be quite impressive for one person in one year, and it would probably have been completely impossible if it weren't for k8s.
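To give a flavour, that kind of stack typically bootstraps from the public Helm repositories along these lines (chart names are the commonly published ones; release names and namespaces here are just illustrative):

```shell
# Register the upstream chart repositories
helm repo add jetstack https://charts.jetstack.io
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update

# cert-manager automates TLS certificate issuance and renewal
helm install cert-manager jetstack/cert-manager \
  --namespace cert-manager --create-namespace --set installCRDs=true

# nginx ingress controller terminates TLS in front of the services
helm install ingress-nginx ingress-nginx/ingress-nginx \
  --namespace ingress-nginx --create-namespace

# Prometheus + Grafana monitoring in a single chart
helm install monitoring prometheus-community/kube-prometheus-stack \
  --namespace monitoring --create-namespace
```

Each piece is then configured through chart values rather than bespoke scripts, which is what makes the setup reproducible for a single engineer.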

[+] lucideer|3 years ago|reply
Agree the author is wrong on that specific point, though thankfully the bulk of the article content deals with the headline, and is mostly fine wrt k8s.

Rather than the author "not knowing" what they're talking about, I suspect they're taking narrow experience and generalising it to the entire industry. Their background is selling k8s as a solution to small/medium enterprises: it strikes me that there may be a strong correlation between startups interested in that offering and those deploying failed overengineered multilang micro-architectures. Suspect the author has seen their fair share of bad multilang stacks and not a lot of counter examples.

[+] ClumsyPilot|3 years ago|reply
The whole advice of using the same language is especially silly - iOS is stuck with Swift, and the web is stuck with JS, and maybe you need an application that scales using actors across multiple machines with Golang or Java, or maybe you need to plug into Windows tightly and need C#.

Kubernetes is not 'harder' if all you need is to host a webapp. Where it falls on the hardness spectrum depends on what you are trying to do, and what is the alternative. I am very fluent with Kubernetes but have no skills in managing traditional virtual machines.

[+] choeger|3 years ago|reply
Choosing more than one language as a startup can become really expensive quickly. As long as your tribes are small, chances are high that you one day run out of, e.g., Python developers while you still have a lot of Java guys (or vice versa). This introduces unnecessary pain. (And obviously, you should have used Rust or Haskell from the get-go for everything.)

The sole exception I would make to this rule is JavaScript, which is more or less required for frontend stuff and should be avoided like the plague for any other development. As soon as you can get your frontend done in Rust, though, you should also switch.

[+] RapperWhoMadeIt|3 years ago|reply
I also thought WhatsApp was a bad example. Not only did they host themselves, but they used solely FreeBSD (as far as I know) on their servers (which, don't get me wrong, I find great as a FreeBSD sysadmin myself).
[+] jakupovic|3 years ago|reply
>Doesn't look like the author knows what he is talking about.

This was my first thought, and I was about to comment so, but saw you already did. The only reason we see this comment is because HN has an irrational hatred of K8s; for those of us who do run things in production at scale, k8s is the best option. The rest is either wrapped in licenses or lacks basic functionality.

[+] kitd|3 years ago|reply
I suspect a lot of the gripes and grousing about Kubernetes comes from SMEs trying to run it themselves. That will often result in pain and cost.

Kubernetes is a perfectly good platform for any size of operation, but until you are a large org, just use a managed service from Google/Amazon/DigitalOcean/whoever. Kubernetes, the data plane, is really no more complex than, e.g., Docker Compose, and with managed services, the control plane won't bother you.

K8s allows composability of apps/services/authentication/monitoring/logging/etc in a standardised way, much more so than any roll-your-own or 3rd-party alternative IMO, the OSS ecosystem around it is large and getting larger, and the "StackOverflowability" is strong too (ie you can look up the answers to most questions easily).

So, TLDR, just use a managed K8s until you properly need your own cluster.
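To make the Compose comparison concrete: the everyday unit of the data plane is a Deployment plus a Service, and for a simple web app it is about as much text as a compose file. A minimal sketch (the image name and ports are placeholders):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: registry.example.com/web:1.0.0  # placeholder image
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    app: web
  ports:
    - port: 80
      targetPort: 8080
```

`kubectl apply -f web.yaml` deploys it; on a managed cluster, the control-plane side is entirely the provider's problem.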

[+] DrewADesign|3 years ago|reply
Yep. In fact, the front/back language bit is the most egregious premature optimization I can think of.
[+] psychoslave|3 years ago|reply
> I think the most appropriate advice is to choose a stack which the founding team is most familiar with. If that means RoR then RoR is fine. If it means PHP then PHP is fine too.

Taking human-resource information into consideration sounds very wise. Although learning a new language is generally not that huge a barrier, changing your whole stack once the minimum-viable-product cap is passed can be very expensive. And if you need to scale the team, the available developer pool is not the same depending on which technology you have to stick with.

It doesn’t invalidate your point, but maybe it brings some relevant nuances.

[+] ryanbrunner|3 years ago|reply
> But the next advice about not using a different language for frontend and backend is wrong.

Being charitable, what I think they are getting at is maybe more about having fully separated frontend and backend applications (since the front-end examples he gives are not languages but frameworks/libraries). Otherwise it seems really backwards - I'm definitely an advocate of not always needing SPA-type libraries, but using literally zero JavaScript unless your backend is also JS seems like it goes too far in the other direction.

[+] Cthulhu_|3 years ago|reply
Re: single language, there's a grain of truth to it - see http://boringtechnology.club/ - but that one mainly says there is a cost to adding more and more languages. When it comes to back- and frontend though, I would argue there is a cost to forcing the use of a single language. e.g. NodeJS is suboptimal, and web based mobile apps are always kinda bleh.
[+] wizofaus|3 years ago|reply
"I think the most appropriate advice is to choose a stack which the founding team is most familiar with." I'd think that's exactly what typically happens most of the time. But the degree of stack lock-in that occurs at startups still surprises me, even when it's clear a better choice might have been made - mostly due to management not being prepared to grant the necessary rewrite time.
[+] JohnHaugeland|3 years ago|reply
> But the next advice about not using a different language for frontend and backend is wrong.

Er.

I read this as him saying "one of the things I've seen as a bad reason to use Kubernetes is that there are multiple languages in use."

I've seen people do this. Frontend in one container, backend in another, Kube to manage it.

If that's what author meant, author is right, that's a profoundly stupid (and common) reason to involve Kube.

[+] CapsAdmin|3 years ago|reply
sounds like it just boils down to: try to choose the technology your team is familiar with, not what other teams are successful with

Of course there's some balance needed. If your team is familiar with some niche language then long term that might not be a good strategy if you intend to bring more devs on board later.

One side of this which I don't think is discussed often is the fun of choosing new technology. How do you balance having fun and being realistic at the same time?

Fun meaning trying new technology, learning as you go, setting up systems that make you feel proud, etc. It can lead to failure, but I think having fun is important too.

[+] movedx|3 years ago|reply
I agree entirely.

I like to call what the author is referring to "What-If Engineering". It's the science of thinking you'll be Google next week, so you build for that level of scale today. It involves picking extremely complicated, expensive (in both compute and skilled labour) technologies to deploy a Rails app that has two features. And it all boils down to "but what if..." pre-optimising.

It happens at all levels.

At the individual unit level: "I'll make these four lines of code a function in case I need to call it more than once later on - you know, what if that's needed?"

It also happens at the database level: "What if we need to change the schema later on? Do we really want to be restricted to a hard schema in MySQL? Let's use MongoDB".

What's even worse is that Helm and the like make it possible to spin up these kinds of solutions in a heartbeat. And, as witnessed and evidenced by several comments below, developers think that's that... all done. It's a perfect solution. It won't fail because K8s will manage it. Oh boy.

Start with a monolith on two VMs and a load balancer. Chips and networks are cheaper than labour, and right now, anyone with K8s experience is demanding $150k + 10% Superannuation here in Australia... minimum!

https://martinfowler.com/bliki/MonolithFirst.html

[+] danielvaughn|3 years ago|reply
I've told this story before on HN, but a recent client of mine was on Kubernetes. He had hired an engineer like 5 years ago to build out his backend, and the guy set up about 60 different services to run a 2-3 page note taking web app. Absolute madness.

I couldn't help but rewrite the entire thing, and now it's a single 8K SLOC server in App Engine, down from about 70K SLOC.

[+] preommr|3 years ago|reply
> It's the science of thinking you'll be Google next week,

There are other reasons to use K8s than just planning for massive scale.

Setting up environments becomes a massive PITA when working directly with VMs. The end result is either custom scripts (a messier version of Terraform) or Terraform itself, and both end up messier than just writing a couple of manifest files for a managed k8s.

> anyone with K8s experience is demanding $150k + 10% Superannuation here in Australia... minimum!

sheds a tear for CAD salaries and poor career decisions

[+] osigurdson|3 years ago|reply
>> Helm and the likes make it possible to spin up these kinds of solutions in a heart beat

Genuine question, why is this bad? Is it because k8s can spin it up but becomes unreliable later? I think the industry wants something like k8s - define a deployment in a file and have that work across cloud providers and even on premise. Why can't we have that? It's just machines on a network after all. Maybe k8s itself is just buggy and unreliable but I'm hopeful that something like it becomes ubiquitous eventually.

[+] mountainriver|3 years ago|reply
Sorry, but managed k8s is really simple and a wildly better pattern than just running VMs. You don't need Google scale for it to help you, and spinning things up without understanding the maintenance cost is just bad engineering.
[+] d23|3 years ago|reply
The truth is at scale the last thing you want is a nest of unmanaged complexity, so it’s also the wrong instinct there. It’s usually contact with the real world that dictates what needs the extra engineering effort, and trying to do it ahead of time just means you’ll sink time up front and in maintenance on things that didn’t turn out to be your problem.
[+] iso1631|3 years ago|reply
You're forgetting that many people will want to use K8s for a project because they want it on their CV to get the high paying jobs. I saw the term on HN a couple of weeks ago -- CVOps
[+] morelish|3 years ago|reply
> I'll make these four lines of code a function in case I need to call it more than once later on - you know, what if that's needed?

The code example is not always right.

Beware: if you know it will be needed, you might as well make it a function now. Likewise, if you think it will probably be needed, why not make it a function now?

It's not a good review comment or rejection to say "yeah, but I don't want to do that because it's not yet needed". Sure, but what if you are just being lazy and don't appreciate what it should look like long-term?

The "I don't want to write a function that's not yet needed" stance is not a clear-cut example.

[+] roflyear|3 years ago|reply
At least in the sense of code, you aren't doing any real harm I can think of, and there are other benefits like testing and organization.
[+] Jistern|3 years ago|reply
>> I agree entirely.

I agree entirely too.

>> Start with a monolith on two VMs and a load balancer. Chips and networks are cheaper than labour,

Kudos to you! You are a dangerous man for you opine the truth.

My advice is generally, "Build something. Then see if you can sell it." or "Sell something and then go build it." Either way, it all starts soooo small that the infrastructure is hardly a problem.

If you "get lucky" and things really take off - sure, yeah, then get a DevOps superstar to build what you need. In reality, your business will very probably fail.

[+] CipherThrowaway|3 years ago|reply
I lean conservative in my tech choices but I just don't see the big issue with Kubernetes. If you use a managed service like GKE it is really a breeze. I have seen teams with no prior experience set up simple deployments in a day or two and operate them without issues. Sure, it is often better to avoid the "inner platform" of K8s and run your application using a container service + managed SQL offering. But the difference isn't huge and the IaC ends up being about as complex as the K8s YAML. For setting up things like background jobs, cron jobs, managed certificates and so on I haven't found K8s less convenient than using whatever infrastructure alternatives are provided by cloud vendors.

The main issue I have seen in startups is premature architecture complexity. Lambda soup, multiple databases, self-managed message brokers, unnecessary caching, microservices etc. Whether you use K8s or not, architectural complexity will bite your head off at small scales. K8s is an enabler for overly complicated architectures but it is not problematic with simple ones.

>Did users ask for this?

Not an argument. Users don't ask for implementation details. They don't ask us to use Git or build automation or React. But if you always opt for less sophisticated workflows and technologies in the name of "just getting stuff done right now" you will end up bogged down really quickly. As in, weeks or months. I've worked with teams who wanted to email source archives around because Git was "too complicated." At some point you have to make the call of what is and isn't worth it. And that depends on the product, the team, projected future decisions and so on.

[+] tr33house|3 years ago|reply
As a startup founder that's not VC-funded, I would totally recommend you look into building with Kubernetes from the get-go. The biggest learning curve is for the person setting up the initial deployments, services, ingress, etc. Most other team members may just need to change the image name and kubectl apply to roll things out. Knowing that rollouts won't bring down prod and that they can be tested in different environments consistently is really valuable.

I started out with IaaS, namely Google App Engine, and we suffered a ton with huge bills, especially from our managed DB instance. Once the costs were too high, we moved to VMs. Doing deployments was fine but complicated enough that only seasoned team members could do it safely. We needed to build a lot of checks, monitoring, etc. to do this safely. A bunch of random scripts existed to set things up, and migrating the base operating system etc. required a ton of time. Moving to Kubernetes was a breath of fresh air, and I wish we'd done it earlier. We now have an easy, repeatable process. Infra is easier to understand. Rollouts are safer and, honestly, the system is safer too. We know exactly what ports can allow ingress, what service boundaries exist, what cronjobs are configured, their state, etc., with simple kubectl commands.

Using Kubernetes forces you to write configurable code and is very similar to testing: it sounds like it'll slow you down and shouldn't be invested in until the codebase is at a certain size, but we've all learned from experience how it actually speeds everything up, makes larger changes faster, makes customer support cheaper, and saves you from explaining why a certain feature has been broken for 10 without anyone's knowledge.
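For example, the inspection I mentioned is just standard kubectl list commands (namespaces and resource names are whatever your cluster uses):

```shell
# What can receive external traffic, and on which hosts/paths
kubectl get ingress --all-namespaces

# Service boundaries: every service, its type and ports
kubectl get services --all-namespaces

# Scheduled jobs and when they last ran
kubectl get cronjobs --all-namespaces
```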

[+] hsn915|3 years ago|reply
> The biggest learning curve is for the person setting up the initial deployments, services, ingress etc Most other team members may just need to maybe change the image name and kubectl apply to roll things out.

This is a huge red flag.

It's basically admitting that you expect most later employees not to understand k8s or how it's being used. You may think they don't need to because it works, but you have to think about what happens when it doesn't work.

The shops I've been to all had the same mindset: the docker/k8s infra was set up by one guy (or two), and no one else on the team understands what's going on, let alone has the ability to debug the configuration or fix problems with it.

Another thing that might happen is some team members understand just barely enough to be able to "add" things to the config files. Over time the config files accumulate so much cruft, no one knows what configuration is used for what anymore.

[+] likortera|3 years ago|reply
> I started out with Iaas namely Google App Engine and we suffered a ton with huge bills especially from our managed db instance

Are you factoring in the salaries of the people setting up Kubernetes? And the cost of those people/salaries not working on the actual product? And the cost of those people leaving the company and leaving a ton of custom infrastructure code behind that the team can't quickly get up to speed with?

> ton with huge bills especially from our managed db instance

This doesn't have much to do with App Engine, right? Last time I used it, we were using a PostgreSQL instance on AWS and had no problems with that.

> Doing deployments was fine but complicated enough that only seasoned team members could do it safely

I just plain don't believe this. I bet you were doing something wrong. How is it possible that the team finds an App Engine deployment too difficult, but they're able to set up a full Kubernetes cluster with all the stuff surrounding it? It's like saying I'm using React because JavaScript is too difficult.

> Using kubernetes forces you to write configurable code and is very similar to testing: it sounds like it'll slow you down and shouldn't be invested in until the codebase is at a certain size but we've all learned from experience how it actually speeds everything up, makes larger changes faster, cheaper customer support and saves you from explaining why a certain feature has been broken for 10 without anyone's knowledge

This is far, far, far from my own experience.

Some other questions:

How did you implement canary deployment?

How much time are you investing in upgrading Kubernetes, and the underlying nodes' operating systems?

How did kubernetes solve the large database bills issue? How are you doing backups and restoration of the database now?

If I were to found a company, especially one not VC-funded, dealing with Kubernetes would definitely be far down my list of priorities. But that's just me.

[+] moomoo11|3 years ago|reply
Please don’t do this. I’m dealing with the mess caused by following this line of thinking.

One guy (okay it was two guys) set up all the infrastructure and as soon as it was working bounced to new jobs with their newfound experience. The result is that dozens of engineers have no idea what the heck is going on and are lost navigating the numerous repos that hold various information related to deploying your feature.

In my opinion (and I'm sure my opinion has flaws!), unless you have global customers and your actual users are pushing 1k requests per second of load on your application servers/services, there is no reason to have these levels of abstraction. However, once this becomes reality, I think everyone working on that responsibility needs to learn k8s and whatever else. Otherwise you are screwed once the dude who set it all up leaves for another job.

And honestly... I've built software using Node for the application services and managed Postgres/cache instances with basic replication to handle heavy traffic (10-20k rps) within 100ms. It requires heavy use of YAGNI and a bit of creativity though, which engineers seem to hate because they may not get to use the latest and shiniest tech. Totally understand, but if you want the money printer to go brrr, you need to use the right tool for the job at the right time.

[+] lbriner|3 years ago|reply
Totally agree!

Kubernetes isn't just about the global scale that most people will never need, which would agree with the article. It is about deploying new apps to an existing production system really quickly and easily. We can deploy a new app alongside an old app and proxy between them. Setting up a new application on IIS or a new web server to scale is a mare; doing the same on AKS (managed!) is a breeze. It is also really good value for money, because we can scale relatively quickly compared to dedicated servers.

It is also harder to break something existing with a new deployment, because of the container isolation. We might not need 1000 email services now, but we could very quickly need that kind of scale, and I don't want to be spinning up 100s of VMs at short notice as the business starts taking off when I can simply scale out the K8s deployment and add a few nodes. There is relatively little extra work (a Dockerfile?) compared to hosting the same services on a web server.

[+] ezekiel11|3 years ago|reply
The only rationale to do what you described is if, and only if, you have outside capital. If you are spending your hard-earned bootstrapped cash on this, I'm sorry, but it's a poor business decision that won't really net you any technical dividends.

Again, I really see this as the result of VC money chasing large valuations, and business decisions influencing technical architecture - a sign of our times, of exuberance and senselessness.

Engineering has to raise the cost of engineering to match it (a 280-character-limit CRUD app on AWS Lambda with 2 full-stack developers vs 2000 devs in an expensive office).

[+] ale42|3 years ago|reply
Why should using different languages for front end and back end be a problem? I rather think it is better to use languages that are appropriate for the given problem. It is not premature optimization to have parts of a back end implemented in C/C++/Go/whatever else if high performance is needed. It would rather be a waste of resources, money, and energy not to use a high-performance language for high-performance applications. Of course, using the same language for the front end might make no sense at all.
[+] Noughmad|3 years ago|reply
I really don't understand all these complaints about how Kubernetes is so complex, how it's an investment, etc. I am a single developer who uses Kubernetes for two separate projects, and in both cases it has been a breeze. Each service gets a YAML file (with the Deployment and Service together); then add an Ingress and a ConfigMap. That's all. It's good practice to still have a managed DB, so the choice of Kubernetes vs something running on EC2 doesn't change anything here.

Setting up a managed kubernetes cluster for a single containerized application is currently no more complicated than setting up AWS Lambda.

What you get out of it for free is amazing, though. The main one for me is simplicity - each deployment is a single command, which can be (but doesn't have to be) triggered by CI. I can compare this to the previous situation of running "docker-compose up" on multiple hosts. Then, if what you're deploying is broken, Kubernetes will tell you and will not route traffic to the new pods. Nothing else comes close to this. Zero-downtime deployments are a nice bonus. Simple scaling: just add or remove a node, and you're set.

Oh, and finally, you can take your setup to a different provider, and only need some tweaks on the Ingress.
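(For the curious, the "won't route traffic to broken pods" part comes from readiness probes combined with the rolling-update strategy. A rough sketch of the relevant Deployment fragment, with the health-check path, port, and image as placeholders:)

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0        # keep old pods serving until new ones are Ready
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: registry.example.com/web:1.0.1  # placeholder
          readinessProbe:      # traffic is only routed to pods passing this
            httpGet:
              path: /healthz   # illustrative health endpoint
              port: 8080
            periodSeconds: 5
          livenessProbe:       # restart the container if it stops responding
            httpGet:
              path: /healthz
              port: 8080
            initialDelaySeconds: 10
```

`kubectl rollout status deployment/web` then reports whether the new revision converged or stalled on failing probes.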

[+] SassyGrapefruit|3 years ago|reply
I agree; I consider Kubernetes to be a simplification. I have two apps running at my company. The first is a forest of PHP files and crontabs strewn about a handful of servers. There are weird name-resolution rules, shared libs, etc. Despite my best efforts, it's defied organization and simplification for 2.5 years.

The second is a nice, clean EKS app. Developers build containers, and I bind them with configuration and drop 'em like they're hot, right where they belong. The builds are simple. The deployments are simple. Most importantly, there are clear expectations for both operations/scaling and development. This makes both groups move quickly and with little need for coordination.

[+] mattbillenstein|3 years ago|reply
In the limit, there are some startups that could run production on a single Linux host - I recently helped one get off Heroku and their spend went from ~$1k/mo to ~$50/mo and it made debugging and figuring out performance issues so much easier than what they were doing previously...
[+] mixedCase|3 years ago|reply
I'm getting tired of the "You don't actually need Kubernetes while starting out!" crowd, despite being part of it. Of course you don't. Of course if you don't know Kubernetes, learning it as you try to get a company going on a minimum headcount is not the most efficient approach.

But for Pete's sake, man, if you have used K8s before, know what you're doing, and are running in the cloud, just throw a couple of off-the-shelf Terraform modules at the problem and there are your execution-environment needs solved, with easy and reliable automation available for handling certs, external access, load balancing, etc., all of it reasonably easy to get going once you have done it before.

Stop pretending Kubernetes is this humongous infrastructure investment that mandates a full time job to keep up at low scale. Of course if you have done this multiple times before you don't need to be told this, but people new to it shouldn't be fed a barrage of exaggerations.

[+] tenfourty|3 years ago|reply
[OP here] It feels bizarre saying this, having spent so much of my life advocating for and selling a distribution of Kubernetes and consulting services to help folks get the most out of it, but here goes: YOU probably shouldn't use Kubernetes and a bunch of other "cool" things for your product.

Most folks building software at startups and scale-ups should avoid Kubernetes and other premature optimisations. If your company uses Kubernetes, you are likely expending energy on something that doesn't take you towards your mission. You have probably fallen into the trap of premature optimisation.

Please don't take this post to be only aimed against Kubernetes. It is not. I am directing this post at every possible bit of premature optimisation engineers make in the course of building software.

[+] shele|3 years ago|reply
> Imagine spending a lot of time and money picking out the best possible gear for a hobby before actually starting the hobby.

Haha, this is exactly what “hobby” means to a lot of people. Less judgmental: thinking and dreaming about the right tools in disproportion to the need is something people do a lot, presumably because it is a source of joy.

[+] halotrope|3 years ago|reply
I don't know. I default to GKE for all new deployments. It really reduces the mental overhead of infrastructure for me.

I can add services, cronjobs, and whatnot as I see fit in a standardized manner and don't worry about it. A couple of YAMLs get you started and you can scale as you see fit. Anything you would like to deploy can share 1-2 vCPUs. The whole thing is also relatively portable across clouds, or even on premise if you want it. Since everything is in containers, I have an escape hatch to just deploy on fly.io, Vercel, or whatnot.

I get the criticism, that people over complicate projects that maybe don't even have traction yet. Bashing k8s is the wrong conclusion here IMHO. It feels a bit like people bashing React and SPAs just because what they have built so far never needed it. Just stick to your guns and stop evangelizing your way of doing things.

[+] sweaver|3 years ago|reply
For sure, there is power at the cost of agility!

I've seen cases where we started off as simply as possible with no k8s. We built the initial product really quickly using a ton of managed services. Whilst it was great to get us going, once we hit "growth" things just didn't scale. (1) The cost of cloud was getting astronomical for us (and growing with each new deployment), and (2) it was totally inflexible (whether that be wanting to deploy in a different cloud, or to extend the platform's feature set), because we didn't own the underlying service.

We ended up porting everything to k8s. That was a long and arduous process, but it gave us total flexibility at significant cost savings. The benefits were great, but not everyone has access to the engineers/skill set needed to be successful with k8s.

That's why we built Plural.sh – it takes the hard work out of k8s deployments. I've seen people go from zero to a full production deployment of a data stack on k8s in just 2 weeks. It deploys in your cloud, and you own the underlying infra and config, so you have total control of it. And because we believe in being open, you can eject your stack out of Plural if you don't like it and keep everything running.

Great post, and hope all is well with you!

[+] discordianfish|3 years ago|reply
I feel like most of these rants come from people who never built the alternative to Kubernetes to support a modern workflow (CI/CD with branch deploys, monitoring, access control, etc.). I love Kubernetes because I don't need to build bespoke platforms at every company I join. I probably would have switched careers by now if I still had to deal with site-specific tooling that all essentially implements a worse version of what Kubernetes has to offer.
[+] wiredone|3 years ago|reply
I'll be honest - I've worked places where Kubernetes was a thing, and places where it wasn't. Both within the last 5 years.

Kubernetes is a layer of complexity that just isn't warranted for most companies. Hell even Amazon still sticks with VMs. Autoscaling and firecracker solve most things.

It's nice to have cluster (pod?) management as a first-class concept, but you get that with tagging of instances just as well for most use cases.

In short - I think the author has a point.

[+] supermatt|3 years ago|reply
I don't get these complaints AT ALL. I don't use Kubernetes, simply because I am running apps in managed environments, and have been using docker-compose with VS Code Remote to emulate those environments. But being able to define your resources and how they are linked via a schema makes sense even from a dev perspective. Isn't that all that Kubernetes is doing at its most basic? Sounds like that saves time to me over manually setting everything up for every project you work on.
[+] jhoelzel|3 years ago|reply
Guys, Kubernetes is a container platform for multiple nodes. I know it seems hard to understand from the outside, but it's really not. You would naturally come up with ALL the same components if you were to take your container strategy onto multiple computers.

What if you don't need multiple servers? Well, take the single-node approach and have a flexible, tried-and-tested way to spin up containers, which can and should be able to crash whenever they have to.
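A minimal sketch of what "containers that can and should crash" means in practice — a hypothetical Deployment (names and image made up) whose containers the kubelet restarts automatically on failure, on one node or many:

```yaml
# Hypothetical Deployment: if a container crashes or its liveness
# probe fails, Kubernetes restarts it; if a pod dies, the Deployment
# controller replaces it to keep `replicas` satisfied.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.25
          livenessProbe:          # restart the container if this fails
            httpGet:
              path: /
              port: 80
```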

Is using containers premature optimization too? Maybe I should get my typewriter.

[+] avereveard|3 years ago|reply
Having repeatable infrastructure from day 1 is great, and Kubernetes is the simplest way to have that. It's not the only way, but it's provider agnostic, has a lot of well-maintained and well-understood tooling around it, and cleanly splits artifacts from deployment (i.e. no scripts configuring stuff after startup; everything is packaged and frozen when you build and upload the container image).
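A hypothetical sketch of the "packaged and frozen" point — every dependency is resolved when the image is built, and nothing is configured after startup (file names assumed):

```dockerfile
# All dependencies are pinned and installed at build time; the
# resulting image is immutable, so every deployment of this tag
# runs exactly the same bits.
FROM python:3.12-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
CMD ["python", "app.py"]
```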

> Solve problems as they arise, not in advance.

While this does make sense for supporting varying requirements the lean way, it fails to address the increased costs that rearchitecting a solution mid-lifecycle incurs.

> Do more with less.

Goddamn slogan-driven blogging. What is the proposed solution? It doesn't say. Are we supposed to log in to every prod/test/dev machine and maintain the deps by hand with yum/apt? Write our own Chef/Puppet scripts? How is that better than Docker images running on Kubernetes? The comparison between solutions is the interesting part.

The OP never says. Guess "works on his PC" is enough for him; we can only assume he envisions a battery of Mac laptops serving pages from a basement, with devs cautiously tiptoeing around network cables to deliver patches via external USB drives.

[+] spoiler|3 years ago|reply
I disagree.

There are a lot of valid reasons why a start-up might prefer Kubernetes to other solutions.

A simple and good-enough reason is the need for variable compute:

You might need more compute sporadically (say, for example, you're doing software in the sports industry; your demand changes with the number and size of events, as well as on weekends).
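That sporadic demand maps directly onto a HorizontalPodAutoscaler. A minimal sketch, assuming an existing Deployment named `web` (the name and thresholds are hypothetical):

```yaml
# Scale the `web` Deployment between 2 and 20 replicas, targeting
# 70% average CPU utilization across pods.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web
  minReplicas: 2
  maxReplicas: 20
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```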

Another reason might be a start-up that allows its customers to execute some type of arbitrary code (e.g. a no-code product). This can vary from customer to customer, and it can also vary within a customer's use cases.

Imagine having to manage all that storage, networking and compute manually... Or going full circle and managing it with a bunch of Puppet/Ansible/shell scripts. Now you slap some automated scale triggers on top that fire off those scripts.

Congratulations! We've built something that looks like Kubernetes, if we squint. There's some smoke coming out of it. Documentation? Lol, we don't have time for that. We are a company that gets shit done; we don't toy around with Shcubernetes and documentation! Error handling/monitoring/reporting? Eh, just read the code if something fails. Need to add cert issuance? Yeah, let's implement our own ACME integration. Network ingress? Let's just have a 500-line HAProxy config in front of a 2000-line nginx config; no big deal. DNS? Just append to named; who cares?
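For contrast, the cert-issuance piece that the ad-hoc stack reimplements is a short declarative manifest once cert-manager is installed. A sketch, with hypothetical hostname, issuer name, and service name:

```yaml
# cert-manager watches the annotation, runs the ACME dance, and keeps
# the certificate in the `web-tls` Secret renewed automatically.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod  # hypothetical issuer
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - app.example.com
      secretName: web-tls
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web
                port:
                  number: 80
```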

The name for the "we get shit done" crowd should probably be "we don't give a shit about anything and have others solve the problems we created through our lack of thinking and foresight", but it doesn't sound quite as memorable.

It's just people who are comfortable cutting corners at other people's expense. When they have to own the shit they made, they start blaming others and leave the company.

Sorry for the rant.

[+] bane|3 years ago|reply
I remember working with a client in the last 5 years that demanded a Kubernetes cluster to run custom analytics on very fast incoming data streams (several GB per hour). By "custom analytics" I mean Python scripts that loaded a day's worth of data, computed something, wrote the results to disk, and quit.
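For concreteness, the jobs were roughly this shape — a hypothetical sketch (data format, names, and metric all made up), the point being that it's a plain batch script with no cluster dependency:

```python
# Hypothetical daily batch job: read a day's CSV records, aggregate a
# metric per key, report the result. Everything here is stdlib.
import csv
import io
from collections import defaultdict


def summarize(rows):
    """Sum a value per key; rows are (key, value) pairs."""
    totals = defaultdict(float)
    for key, value in rows:
        totals[key] += float(value)
    return dict(totals)


def run_day(day_csv: str) -> dict:
    """Process one day's worth of records from a CSV string."""
    return summarize(csv.reader(io.StringIO(day_csv)))


if __name__ == "__main__":
    sample = "sensor_a,1.5\nsensor_b,2.0\nsensor_a,0.5\n"
    print(run_day(sample))  # {'sensor_a': 2.0, 'sensor_b': 2.0}
```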

During development of the scripts, the developers/data scientists wrote and tested everything outside of the cluster, since they were simple scripts. They had no problem grinding through a day's worth of data in their testing. But going into prod, we had to shove it all into the cluster. So now we had to maintain the scripts AND the fscking cluster.

Why?

"What if the data volume increases or we need to run lots of analytics at once?"

"You'll still be dominated by I/O overhead and your expected rate of growth in data volume is <3% per year. You can just move to faster disks to more than keep up. Also there's some indexing techniques that would help..."

Nope, had to have the cluster. So we had the cluster. At the expense of 10x the hardware, another rack of equipment, a networking guy, and a dedicated cluster admin (with associated service contracts from a support vendor). It literally all ran fine on a single system with lots of RAM and SSDs -- which we proved by replicating all of the tasks the cluster was doing.

Argh...

[+] parkingrift|3 years ago|reply
I just don’t understand this type of article. If you understand how to use and deploy Kubernetes and you have confidence in it… you should use it. What you spend in extra infra costs is trivial.

…and if you don’t understand how to use and deploy Kubernetes, just what in the fresh hell are you doing? Stick with technology you know, and then move to Kubernetes later if or when you need it.

We’re a relatively small shop, and we’re using k8s. All of our senior staff know and understand it. The pipeline is fully automated. If we’re prematurely optimizing anything, it’s developer output over saving money.