shockinglytrue | 5 years ago
If you want to run VPS-sized pods full of untracked mutable state, with their own ghetto logging / monitoring / deployment / configuration management baked in, nothing is stopping you. But things start to look a little different by the time we're "maintaining production systems": that crummy little $5 VPS has been touched by 7 people over 2 years, nobody knows what is on it any more, it turns out the default logrotate configuration was shredding audit logs, and nobody has a clue how to reinstall it without literally starting over.
- "I wish I had a list of all the jobs that were supposed to be on this VPS"
- "I wish I didn't have to set up random log archiving cronjobs for all these apps"
- "I wish I knew what directories I need to back up"
- "I wish I could split this one job off the VPS without having to rewrite and retest its config"
- "I wish I could run a separate deployment environment on the VPS without having to set up a whole new VPS"
- "I wish I could rotate this API key but I haven't a clue where it is stored or what is referencing it"
- "I wish I could give the intern access to deploy the web site without letting him also take a copy of the HR database"
- "I wish I could deploy this new job version and roll it back if I fuck things up"
- "I wish I could run this ancient proprietary tool that needs 32-bit libs from Debian 0.4 without installing them globally and potentially bricking the whole VPS"
- "I wish I didn't have to go install and configure a bunch of monitoring plugins every time I deploy a new app"
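Most of the wishes above translate fairly directly into fields of a single declarative object. As a minimal, purely illustrative sketch (all names, images, and values here are hypothetical, not taken from the thread), a Kubernetes Deployment might look like:

```yaml
# Hypothetical Deployment illustrating how the wish list becomes declarative config.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web                  # "a list of all the jobs" -> kubectl get deployments
  namespace: staging         # a separate deployment environment without a new VPS
  labels:
    app: web
spec:
  replicas: 2
  revisionHistoryLimit: 5    # "deploy this new job version and roll it back"
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          # Pinned version; rolling back means reverting to a previous revision.
          image: registry.example.com/web:1.4.2
          env:
            - name: API_KEY  # "where is it stored" -> one named Secret object
              valueFrom:
                secretKeyRef:
                  name: web-credentials
                  key: api-key
          volumeMounts:
            - name: data     # "what directories I need to back up"
              mountPath: /var/lib/web
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: web-data
```

A bad release is then reverted with `kubectl rollout undo deployment/web`, and the intern wish is handled by RBAC rules scoping who may touch which namespaces; apps log to stdout instead of each needing its own logrotate setup.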
You can set all that up by hand, of course; after all, it does represent years of fruitful low-intellect busywork for idle hands. Or you can come up with your own self-assembled collection of third party tools to do it for you, investing the research, installation, and configuration time that entails, and accepting the cost of having created an ad-hoc system only you understand. Or you could simply pick one big framework that does it all at once and try to make it work.
The nice thing with the one-big-framework approach is that you also get a common language and operational model understood by thousands of people you can hire on the open market. They understand how your configs are laid out, they know how to discover mutable state and query the health of all your existing jobs. And by the time they leave the company, you can understand all that about the work they've done.
This is only addressing Kubernetes as an operational methodology. Containerization as an architectural style is an entirely different topic.
0xFACEFEED | 5 years ago
> I wish...
This right here is a big part of the problem. "I wish". "What if" is another one.
In the real world you don't have all of these problems initially, and often you never have them at all. They trickle in as you grow. And in our business an absolutely essential property of good tooling is that it grows with your needs.
This talk of "years of fruitful low-intellect busywork for idle hands" for "self-assembled collection" of tools is the antithesis of the Unix philosophy. I like the Unix philosophy. It's worked out very well for us so far.
> a common language and operational model understood by thousands of people you can hire on the open market
We've been doing just fine with sysadmins? The community standardizes on a set of tools and people learn them. Do you really believe that pre-k8s it was some kinda combinatorial wild west without any shared knowledge? Come on. Come. On.
k8s environments aren't all that standardized either btw. There's just as much duct tape and glue as everything else. CI/CD configuration, service meshes, multi-cloud provider configurations, etc. All of these interfaces bring their own quirks and pitfalls. And your k8s nodes still need to be optimized just like before. The control plane is only one part of the picture.
All I'm saying is this... an experienced sysadmin with a mature config management system and AWS/GCP/Azure will get you a perfectly maintainable infrastructure for most use cases.
There are big organizations where k8s makes sense b/c you'd end up building a worse home grown version of basically the same thing. I'll give you that. But I'm seeing k8s pitched as the default infra solution for SMBs and the like. That's where the angst is coming from.
GrumpyNl | 5 years ago