top | item 26502900

Unironically using Kubernetes for my personal blog

266 points | tate | 5 years ago | mbuffett.com

227 comments

[+] vinceguidry|5 years ago|reply
Kubernetes is... fun. It's fun because things that used to require hundreds of hours to figure out now require only dozens. This can give superpowers to the right kind of engineer. I spent a bunch of time hand-rolling my own cluster with its own WireGuard network, on personal hardware, with one node in the cloud because I thought I needed one.

What's neat about it is, you generally solve one problem, and once it's solved, it stays solved. Your solution is in a yaml file somewhere or in a command line option that you've persisted to a script or whatever. And everything accumulates! If you manage to get a storage cluster installed, boom, all of a sudden you have a ton of options available to you.
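
The sort of yaml file I mean is tiny. A sketch of one (all names and the image are placeholders, not from any real setup):

```yaml
# blog.yaml -- a minimal Deployment; image and names are made up
apiVersion: apps/v1
kind: Deployment
metadata:
  name: blog
spec:
  replicas: 1
  selector:
    matchLabels: {app: blog}
  template:
    metadata:
      labels: {app: blog}
    spec:
      containers:
        - name: blog
          image: registry.example.com/blog:latest  # placeholder image
          ports:
            - containerPort: 80
```

Once that's in git, the problem stays solved: `kubectl apply -f blog.yaml` recreates it on any cluster.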

Once the cluster is stable and you are comfortable bringing it up and tearing it down so that everything's working right, you can start pulling parts of the k8s infrastructure in. Need a DNS server? Run it on your cluster! K8s wants a docker container registry, you really don't want to run that on your cluster in the beginning, but once your cluster's secure, why not! K8s starts eating everything in your life like the effing Borg and it's great!

If you're the kinda cat that will spend 80 hours on a Factorio world, then dive into the mods and make the tweaks you really wish the authors would make, and you have enough admin experience to do general troubleshooting of complex systems, I can't recommend Kubernetes enough for the sheer fun factor.

The downside if you want to do anything serious with it is actually the same as it always was for running your own infrastructure: network connectivity. Hardware is cheap. Last-mile network connectivity isn't. The solution to that is, of course, colocation.

[+] majormajor|5 years ago|reply
> What's neat about it is, you generally solve one problem, and once it's solved, it stays solved. Your solution is in a yaml file somewhere or in a command line option that you've persisted to a script or whatever. And everything accumulates! If you manage to get a storage cluster installed, boom, all of a sudden you have a ton of options available to you.

Can you expand on how this is different from traditional networking? I used to run a FreeBSD router with a ZFS storage setup and various other stuff on my home network, and it wasn't like I had to keep tinkering with it. Once I had network storage, I could use it from lots of different things, etc. But eventually you need software patches, updates, etc, and that was where the ongoing pain would sometimes crop up. Is that so different here? From some coworkers who are closer to the cluster, keeping up with K8s version changes doesn't seem like a small effort.

[+] smoe|5 years ago|reply
I think one big aspect of “use the right tool for the job” that is often overlooked is how good you already are at wielding said tool.

I use Terraform to manage the few personal cloud resources I have. It is overkill, but I already learned it on the job so it ended up being quicker than setting stuff up by hand. If not, I wouldn’t have bothered learning it. At least in this context.

Same when building an MVP, I’m not going to pick some shiny tools I have never used before, but tools I know I can get the job done with.

Sure, I do like learning new things and tinkering around with them, but it is not always the right time to do so, and the list of things I’d like to get good at, engineering-related or not, is already long enough for a lifetime, so one needs to prioritize.

[+] johnsoft|5 years ago|reply
> Once the cluster is stable and you are comfortable bringing it up and tearing it down

> K8s wants a docker container registry, you really don't want to run that on your cluster in the beginning, but once your cluster's secure, why not!

I'm betraying my ignorance here, but how does this work? If you're running the registry in your cluster, and you tear down your cluster (and the registry with it), how do you rebuild the cluster without being able to pull images?

[+] emodendroket|5 years ago|reply
You can always run it in the cloud and at least theoretically be able to switch providers and take your cluster with you.
[+] turtlebits|5 years ago|reply
I really like K3s. Comes with an ingress controller out of the box which makes it easier to get up and running. Add Let's Encrypt, and its pretty trivial to stand up new services on their own subdomains.
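
A sketch of what a per-service subdomain looks like with K3s's bundled Traefik ingress controller (hostname and the cert-manager issuer name are assumptions, not K3s defaults):

```yaml
# Routes blog.example.com to an existing Service named "blog".
# Assumes cert-manager is installed with a ClusterIssuer named "letsencrypt".
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: blog
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt
spec:
  rules:
    - host: blog.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: blog
                port: {number: 80}
  tls:
    - hosts: [blog.example.com]
      secretName: blog-tls
```

Standing up the next service is mostly copying this file and changing the names.
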
[+] uberduper|5 years ago|reply
It really doesn't take much experience with kubernetes before it becomes fairly easy. And then using it for trivial things like this isn't a big deal.

My home "server" runs plex, sab, mongo, unifi, and various other things in a single-node k8s with a local zfs volume provisioner. In the previous revision of this server I'd switched to using docker for everything and was annoyed with the upgrade process with just docker alone. With k8s, I just use :latest for most images and upgrades happen every time I restart a pod or reboot the machine.

I've been working with k8s since PetSets, so this is all NBD to me. Linux had a learning curve too, and y'all got over it.

[+] tail_exchange|5 years ago|reply
Kubernetes is incredibly daunting for someone who has only used VMs, but after spending some time with it, it's not really that hard. I also unironically run my personal stuff on K8s just because it's familiar to me, and it is also a good exercise for a very useful skill. People complain about the price, but you can easily get a 3-node cluster on GCP for less than $15/month.
[+] imiric|5 years ago|reply
Did you use Docker Swarm in the previous revision?

I've been quite happy with it for a few years now on a single-node cluster and a handful of services. I could easily add another node, but haven't had the need to. The setup was quite simple and it requires practically no maintenance. If I have to reboot everything starts up automatically, image upgrades are a breeze, and I really have no issues with it.

Sure it doesn't have all the bells and whistles of a k8s cluster, but it's perfectly fine for personal use.

I'm still partly annoyed that Swarm is mostly dead in this space and k8s has undoubtedly "won". It's only a matter of time before Docker Inc. fully abandons it. Such a shame.

[+] nonameiguess|5 years ago|reply
You can use Flux and it will just poll your source control and image registries to see if any updates to the images or configs are available and reconcile automatically. No need to even restart pods. Totally commodity continuous deployment. It's wonderful.
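
Roughly, a minimal Flux setup is two custom resources, something like this sketch (repo URL and paths are invented; check the Flux docs for current apiVersions):

```yaml
# Flux polls this repo on an interval...
apiVersion: source.toolkit.fluxcd.io/v1
kind: GitRepository
metadata:
  name: homelab
  namespace: flux-system
spec:
  url: https://github.com/example/homelab   # placeholder repo
  ref: {branch: main}
  interval: 1m
---
# ...and applies whatever manifests it finds there, pruning
# anything you delete from the repo.
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: homelab
  namespace: flux-system
spec:
  sourceRef: {kind: GitRepository, name: homelab}
  path: ./manifests
  prune: true
  interval: 10m
```

After that, a git push is a deploy.
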
[+] tenacious_tuna|5 years ago|reply
A friend and I have recently stood up homelabs for funsies. Mine's a cluster of x64 boxes, his is a small pile of raspi4s. He went for a k8s cluster off the bat--having worked with cloud stuff for a while, he had the familiarity to hit the ground running.

I was immediately overcome by all the abstractions. I had no idea where to look to figure things out, and was mostly relying on advice from my friend. I didn't know what an ingress controller was, much less how to configure one--but I knew I wanted each 'service' to have its own IP I could route to from my network.

Overall it felt like I had SO MUCH TO LEARN at *every point* it was difficult to get even close to my actual goals (CI/CD for a personal site + some hosted game servers).

I eventually went with the philosophy of "build something now, and move towards perfection later" / "don't let best be the enemy of good" and began spooling up LXCs and VMs to do the work I needed, planning to move things into k8s later when I better understood the actual things I wanted to move.

(Plus then I got some satisfaction out of actually accomplishing the goals I wanted, instead of just banging my head on k8s documentation and learning all the abstractions.)

As an example, I've not used docker in any meaningful capacity. For anything. No idea how to make a docker image. To k8s the CI for my site, I needed to know how to:

1. Install the dependencies, which requires compiling a plugin for pandoc, which requires installing Haskell and cabal. This is expensive, so I'd prefer to get the pre-reqs set up once... but that doesn't seem to be how docker works? Do I need an image repository? Can I use DockerHub? I've seen HN talk about how docker is trying to monetize, should I run my own repository? Can I do that on my cluster? I'll need an ingress controller to route to it... I don't even know what that is.

2. I need some way to pass the built website files to the container I actually want to host them in. I think that means I need an NFS share of some kind to store the files, so one container can load them and another can read them. Do I host that on my NAS? I could put an NFS share in the cluster, maybe. No idea how to get Docker to mount one, or k8s to host one. All the examples I seem to find deal more with connecting to remote services on a host than mounting local storage. Is it even local storage?

3. Everyone says infrastructure as code is good, so I guess I'll follow this flux tutorial--only to find out the one I followed is out of date, and I should follow their NEW one. But they still assume I know way more about k8s than I actually do. Still, I'll spend the few hours to get this operational, so then in theory everything else I deploy can be IaC'd, which is just good practice.

At this point I'm so many layers of abstraction deep, I have no idea what I'm actually doing or how concepts relate to each other, and I'm no closer to actually having my goal.

So last night I spent 2 hours spinning up VMs on my cluster and installing dependencies, configuring an nginx proxy, and now I actually have my personal blog self-hosted and updatable. Way more "progress" than the 10ish hours I've sunk into building a k8s cluster already.

There's something to be said for limiting the number of abstractions you're dealing with.

[+] systemvoltage|5 years ago|reply
Can someone who is familiar with Kubernetes please explain in plain words why we use Kubernetes? I’ve read a lot about it but I always read some bullshit non-answer such as “It’s an orchestration platform for containers”. What does that even mean?

Even this blogpost doesn’t explain what and why’s of Kubernetes.

I have a docker container. I deploy them on vm. I use load balancer to split traffic. Could you please walk me through what problem Kubernetes would solve here?

[+] pravus|5 years ago|reply
The problem Kubernetes is trying to solve is to eliminate the burden and potential mistakes in the workflow you mentioned by automating your container deployments and actively monitoring their state. This allows the cluster to balance workloads, heal failing components, (re-)distribute work to nodes with the appropriate resources, and migrate between different versions of container images without downtime or manual intervention.

It does all of this by allowing you to specify a service architecture in configuration files and then actively ensures that this configuration is maintained even as the underlying state of containers change. You can specify things such as the minimum number of backend containers needed to provide a service, scaling parameters to add more backend containers as load increases, and you can tag nodes with different attributes so that containers are distributed and maintained with the appropriate amount of resources.

Kubernetes also automates various aspects of networking, such as provisioning and configuring load balancers for service ingress as needed. It provides an internal DNS service which automatically registers names for deployments so that linked services can just refer to each other by name without any additional configuration. It can also manage things like SSL certificates, which can be shared across multiple services.

Lastly, it provides you with a single place to store the secrets and configuration values that these services require, and again, you wire all of this up with configuration files which can be stored in git (or another VCS).
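
Concretely, "specify a service architecture in configuration files" looks something like this sketch (the app name, image, and secret are invented for illustration):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api
spec:
  replicas: 3                 # minimum number of backend containers
  selector:
    matchLabels: {app: api}
  template:
    metadata:
      labels: {app: api}
    spec:
      containers:
        - name: api
          image: example/api:1.4.2            # placeholder image
          envFrom:
            - secretRef: {name: api-secrets}  # secrets live in the cluster, not the image
---
# Scaling parameters: add backends as load increases.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: api
spec:
  scaleTargetRef: {apiVersion: apps/v1, kind: Deployment, name: api}
  minReplicas: 3
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target: {type: Utilization, averageUtilization: 70}
```

If a pod dies or a node drops out, the controllers notice that live state no longer matches these files and fix it.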

[+] qbasic_forever|5 years ago|reply
Read up on the philosophy of "pets vs. cattle". Kubernetes is meant to totally abstract the hardware and operating system. For a use case of one app on one machine it's not really doing anything.

But what if I told you that you needed to run that docker container on 5,000 machines with 2,500 load balancers in regions across the world? Are you going to SSH into thousands of boxes manually and run docker commands? Are you going to try setting up some monster ansible inventory to do the same? In practice the best minds in distributed computing have found those kind of practices break down at large scale--you just cannot reason or deal with individual machines when there are thousands of them.

This is where kubernetes comes in--it's an abstraction that lets you declare "here's the state I want, X machines running Y containers, all linked through Z services" and kubernetes will make it happen, period. It will take care of contacting thousands of machines, controlling the running containers, ensuring they stay running, handling failures, monitoring, load balancing, etc. You no longer think about problems in terms of low-level machines and instances, you think about the higher level objective like deploying code.

The beautiful thing is that it scales down nicely. A simple 50-line YAML file that declares running your docker container and load-balancing it with a service can easily deploy just to your local machine, or be scaled up to run on 5,000 machines by just changing a variable in the deployment scale. The same simple one-liner kubectl command kicks off either deployment and helps you monitor its progress. If you've ever worked in distributed systems it is really incredible to see this in action at scale.

[+] moufestaphio|5 years ago|reply
> I have a docker container. I deploy them on vm. I use load balancer to split traffic. Could you please walk me through what problem Kubernetes would solve here?

What happens when your VM dies? Kubernetes would automatically bring your containers back up on a healthy node. It has health checks, and knows when containers die/crash.

What happens when you need another docker container due to traffic? Again, Kubernetes fixes situations like this. Kubernetes has a lot of built in support around scaling etc.

Also, what if your docker container doesn't need a whole VM? Say you've got 5 different docker containers (which all scale independently), and let's say 3 VMs. Kubernetes will distribute them across those VMs based on their resource needs.
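
The "resource needs" part is just a few lines per container. A sketch (the numbers are arbitrary examples):

```yaml
# Per-container resource declaration: the scheduler uses "requests"
# to pick a VM (node) with room, and "limits" to cap the container.
resources:
  requests:
    cpu: 250m       # a quarter of a core
    memory: 256Mi
  limits:
    cpu: "1"
    memory: 512Mi
```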

There's a lot more, but that's kind of what I think of when you say 'Orchestration of containers'.

[+] pickledish|5 years ago|reply
After using k8s for about a year at work now, the way I understand it is that there’s a really large gap in abstraction between “a bare metal machine or VM with a shell prompt” and “a service that handles requests with some Python code”, and kubernetes helps fill that gap with some sane defaults and useful abstractions for people who run services.

If you run your docker app on kubernetes, you get a lot of things for free with the platform (rolling no-downtime deployments, service discovery, auto scaling) that you’d have to set up manually if you were running your service on (say) EC2 instances instead.

It can be a headache to learn sometimes, but ultimately saves a lot of effort if your use case fits!

[+] mrkeen|5 years ago|reply
Not a huge fan, but for me the exact point where Docker ends and Kubernetes starts is binding a container to a network port. If you write what port to use inside a Dockerfile, it does nothing; it's only advisory.

For better or for worse, you need something outside the Dockerfile that can run it. That can be you, if you want to type out 'docker run -p 8080:80' etc. You could probably script it, but does your script do restarts, failover, etc?
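
In Kubernetes, that "something outside the Dockerfile" is the pod spec plus a Service. A sketch (names are placeholders):

```yaml
# The pod spec makes the port binding the Dockerfile only hinted at...
apiVersion: v1
kind: Pod
metadata:
  name: web
  labels: {app: web}
spec:
  containers:
    - name: web
      image: nginx:alpine
      ports:
        - containerPort: 80   # the enforceable equivalent of EXPOSE
---
# ...and the Service plays the role of `docker run -p 8080:80`,
# with restarts and failover handled by the controllers behind it.
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector: {app: web}
  ports:
    - port: 8080
      targetPort: 80
```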

[+] Townley|5 years ago|reply
Downsides (see: tremendous complexity) being generally understood, I'll list some of the upsides I personally feel compared to your described setup:

- If you wanted to scale up or down the number of container processes running on your VM, you'd need to write some code that looks at system utilization. Autoscaling k8s clusters do that for you. They can even provision additional VMs ("nodes") for you during times of heavy traffic, or scale down to save money.

- Updating your app requires either logging into each VM, or writing an Ansible playbook to do that for you. By the time you've written a zero-downtime, health-check-honoring, contextually-aware Ansible playbook, you've made your own container orchestration solution.

- If you run multiple containers that need to talk to each other, you'd need to handle their networking. K8s gives you tools for handling networking between containers in the same namespace that allows them to communicate without exposing them to the wider internet.

- The ecosystem of utilities is as good as (and sometimes better than) what you'd experience in your VMs setup. cert-manager makes certificate management almost as easy as Let's Encrypt does on a single machine. Prometheus and Grafana are excellent logging and monitoring solutions (and, IMO, much easier to set up on K8s than ELK is within a distributed VM setup). Cilium provides extremely powerful and useful networking and security policies that leverage eBPF.

- Changes you make to the configuration of your server won't carry over if you ever need to switch hosting providers, or (more often the case for me) just want to start fresh.

It's absolutely a huge learning curve, but eventually the complexity (mostly) goes away, and you're left with a reproducible method for deploying apps. So in the same way a rails/django developer might use an overpowered solution for their blog API, or a React developer may build a custom frontend when wordpress would also do... someone who's taken the time to familiarize themselves with K8s might find the familiarity and consistency of the interface enjoyable, even if it is clearly killing a fly with a sledgehammer.
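
For instance, the zero-downtime, health-check-honoring deploy that would take a serious Ansible playbook is a few declarative lines here. A fragment of a Deployment spec (image and probe path are examples):

```yaml
# Roll pods out one at a time, and only count a new pod as
# ready once its health check passes.
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0
      maxSurge: 1
  template:
    spec:
      containers:
        - name: app
          image: example/app:2.0    # placeholder image
          readinessProbe:
            httpGet: {path: /healthz, port: 8080}
            periodSeconds: 5
```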

[+] ceterumnet|5 years ago|reply
I've used Kubernetes for a few years now, and there was a description that really resonated with me: You can think of k8s as an operating system where we can deploy applications, especially those that run more than a handful of services.

Said another way, if Linux (or whatever) is the OS for your server / VM / host level / network device, k8s is the OS for your cloud application.

And, when k8s is implemented properly, it takes a lot of headaches that can come from dealing with the myriad problems that arise when your architecture goes beyond a basic handful of "tiers."

[+] __blockcipher__|5 years ago|reply
Kubernetes does a lot of different things but one of the more important is resource management. You spin up a fleet of Kubernetes nodes which can have totally heterogeneous (differing) amounts of compute/memory/disk, and have the scheduler make intelligent decisions about where to stick containers based on your (optionally specified) list of resource requirements.

So it solves the bin-packing problem automatically rather than you having to manually map out an efficient way to use your infra. It doesn’t reduce the complexity per se - the interactions can get complicated if you’re using stuff like node taints/tolerations instead of the computationally simpler “App X needs this much RAM but beyond that I don’t care where it lives”.

If I find a bunch of Raspberry Pis in my basement and want to have them join my fleet, I just do that, and even if my fleet varies from 256-core CPU boxes with huge raid arrays all the way down to tiny Raspberry Pis, the scheduling just works. Note here the broader pattern of abstracting away the physical hardware; it’s a really important concept to grok.
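
Keeping a heavy workload off the Pis (or pinned to the big boxes) is one taint or nodeSelector away. A sketch with invented names:

```yaml
# First taint the big boxes so only workloads that explicitly
# tolerate the taint land there (taint key/value are made up):
#   kubectl taint nodes bigbox-1 class=heavy:NoSchedule
apiVersion: v1
kind: Pod
metadata:
  name: raid-scrubber
spec:
  nodeSelector:
    kubernetes.io/arch: amd64   # keep off the arm64 Pis
  tolerations:
    - key: class
      operator: Equal
      value: heavy
      effect: NoSchedule
  containers:
    - name: scrubber
      image: example/scrubber   # placeholder image
      resources:
        requests: {cpu: "4", memory: 8Gi}  # no Pi can satisfy this anyway
```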

[+] matwood|5 years ago|reply
> “It’s an orchestration platform for containers”

But that is what it means. K8s, ECS, even docker swarm are ways to orchestrate containers to do something useful.

Take your lb example. What happens when one of the containers you deployed or VMs you deployed to dies? How does it get restarted? Where does the lb send traffic?

[+] dvcrn|5 years ago|reply
I tell k8s: take this container, set up one or more pods with it, add some env variables, put a router on top, and restart the pods if they crash.

If I add more physical hosts or scale the pods, k8s does everything for me like moving them around across the available resources

[+] cbushko|5 years ago|reply
For me, using Kubernetes is all about the patterns that it provides and how it removes a certain class of problems for you.

Deployments, scaling, logging, etc are some of the patterns it provides and the consistency matters. How many of us have worked at companies where deploying two services have been completely different? One team runs the jenkins pipeline while another team FTPS the files over. Now multiply that by several services and several tasks (logging, scaling, etc).

The benefit is in the patterns and standards it provides.

[+] hertzrat|5 years ago|reply
From the other replies here, it sounds like what kubernetes does for most people is what people used to do with bash scripts in the past: automatic configuration and deployment of sets of phoenix servers (except in containers instead of VMs these days) with infrastructure as code. The main difference sounds like you write declaratively and you get to use yaml. It seems to throw in monitoring and control too, which seems like a timesaver. Is that about right?
[+] StreamBright|5 years ago|reply
None. K8s is the best way to add an insane amount of complexity and yaml-based programming where you do not need either of these. At my day job we spend hours on debugging k8s and much of the time it is impossible to find out why something times out or fails. People who like k8s tell me they use it because of monitoring and deployments. I am not sure what they mean by it. It is very hard to monitor applications running on k8s, and deployment is usually solved with tools made for deployments.
[+] m463|5 years ago|reply
My head kind of spins here too. Obviously all these folks have gone through the tech tree implementing the various parts and then ceded control to kubernetes when they found it could handle it.

Seems to me like it's helpful to do an oil change yourself before you take it to the dealer, then understand what they do before you take it to jiffy lube the next time (or vice versa). You keep abstracting it away until you just give your credit card. :)

[+] tyingq|5 years ago|reply
It's a single control plane for the things you're currently managing separately. You mentioned a load balancer, several VMs, and maybe you have some scripts you use to deploy or to add VMs when needed. And, as you suspect, K8S is overkill for many situations.
[+] Fiahil|5 years ago|reply
> Can someone who is familiar with Kubernetes please explain in plain words why we use Kubernetes?

I use Kubernetes because it makes my application, its configuration, and its operations portable.

That's it.

[+] spondyl|5 years ago|reply
Most of the replies are tackling Kubernetes from a technical perspective so for a people perspective, I can try to explain what might be a logical progression that I've seen.

A startup that grows to have hundreds of developers might transition from running managed VMs to "the cloud". One team sets up the network (virtual networks and some subnets).

As new employees join, they have no reason to interact with those teams who are effectively "hidden" so they deploy their stuff and perhaps wrangle with subnets and what not. Someone tells you that you need to attach subnet-a20w88vhuh4fuih to your resource and it will magically be accessible in the office.

Nothing is in charge of VM sizing so you've got people blowing hundreds or thousands on massive VMs when they're only using 10% of it and vice versa, teams whose application is choking but they don't really have a good mental model of say general purpose VMs vs memory optimised so they just bump up the SKU instead of being more efficient. This is happening everywhere as the company accelerates more and more.

It gets worse when you have a shared cluster, say, for an entire team that is globally distributed, and the new intern application is doing some weird O(n^6) computations and absolutely blowing the side out of every other resource you've got.

Now at this point, it's effectively a communication/culture problem but Kubernetes can "fix" some of these issues in a sense.

Network for the most part becomes abstracted away and what you're left with is defining security (what ports and protocols should I expect) on an application level, rather than on a security group level. It's kinda neat because these rules are localised to your application whereas they might have been configured manually in a cloud portal or via some terraform config owned by some team in the shadows.

Each of your deployed applications becomes its own isolated unit called a pod. A pod could be one or more containers, but it's effectively a standalone slice of an application (ie the web frontend, while a redis instance might be another pod). There are bigger abstractions to group application pods together but that's beside the point.

These pods get deployed to a cluster (a bunch of VMs) and cough "orchestrated" but the value here is that your containers might be running right next to some containers for the business team or the machine learning team and you would never know. You don't need to know either. The value, as foreshadowed above, is that if you're being a noisy neighbour, your container will either get rebalanced somewhere else or just shut down for exceeding memory usage.

I'm a bit flaky on this point, but since each node in a cluster is a massive VM, there's no need to worry about over- or underspending based on your computational use as well. You define the amount of memory you want to allow and you get matched to a relevant node based on how much capacity is available. As you gain more users, you just add more nodes. Before that, you might have been "reserving" say X thousand compute hours of certain VM SKUs or whatever. You might still do that, but you could feasibly just pre-purchase whatever your node sizes are, making capacity planning pretty straightforward.

Generally, there'll be some team whose purpose is to manage said cluster so in a funny way, it somewhat revives the whole dev/ops split in that your compute team generally know the nitty gritty of networking and what not while your developers just deploy an application and it "lives on Kubes".

I may have missed a bunch of stuff, but hopefully this outlines some of the more "people" issues a bit? It's half-and-half: genuinely useful, but it can also be used as a technical fix to a social issue.

[+] brown9-2|5 years ago|reply
Now imagine you have 10,000 containers you want to run across 5,000 VMs. Kubernetes solves that for you so you aren’t deploying each of those individually.
[+] chomp|5 years ago|reply
I mean... I'm happy he's happy, but for instance, I sign up with a budget webhost, run rsync and my blog is deployed. Super low mental burden. There's no arguing that K8s for something like a personal blog is a little overkill.

Again, totally not hating on this, we all have our hobbies and I love that the author is super into this.

[+] taeric|5 years ago|reply
It is mind bending to see how much more complicated than "rsync" deploying a simple static site often gets.
[+] dexterchief|5 years ago|reply
I'm happy you're happy, but do you really need all that fancy file syncing/diff/resolution stuff? Wouldn't FTP have been enough? Vim over SSH?

Seems like overkill. :P

[+] oauea|5 years ago|reply
I am unironically using Kubernetes for my personal home server. Thanks to https://k3s.io/ this is really easy to do, great fun and extremely useful.

I have a git repo containing all my helm charts & docker files, testing & deploying changes is absolutely trivial now. And it's great to have everything version controlled.

Previously I used Ansible, but you quickly run into issues which make you want containerization: conflicting library/tool versions, packages that pollute too much of the system, port conflicts, the hassle of keeping the playbook idempotent, etc.

So while docker-compose would also do fine, having kubernetes manage the ingress' routing system is rather practical. And the same goes for the other bits and bobs of infrastructure it offers you if you're already using it. It's just very convenient.

I've been doing this for a few years now, and am now up to 14 different apps running on my single home machine in Kubernetes, ranging from Home Assistant to PostgreSQL to Plex.

Also it's just good experience. I also use Kubernetes for work, and this has made me noticeably more proficient.

[+] samcat116|5 years ago|reply
This is something I see a lot of people ignoring about Kubernetes. There are two completely different sides to it: the developer experience, and the sysadmin/devops/sre/what-have-you experience. This post completely ignores that second part, which is arguably much harder. You can ignore large swaths of those topics by using managed K8s platforms, but the pricing for those means that running a blog like this might run you $50-$80/mo.
[+] chmod775|5 years ago|reply
The author seems to think the only real alternatives are docker or whatever.

So here's a way to host your personal blog if you don't want to over engineer it:

1. Have a git repo with your nginx/whatever config files.

2. Have a VPS running debian.

3.

    apt-get install nginx git ...
    git clone ...
    ln ... # create symbolic links to your nginx/whatever config in your git repo
    systemctl restart nginx ...
4. You're done. Create a cron job to automatically pull the latest changes from your git repo if you want.

The above steps should take most people around 10 minutes.

If you need to actually pivot into something that scales easier from there, I recommend following these steps/levels as your scale increases:

1. Create an automatic install script.

2. Use that script to create a .deb package instead of installing directly (optionally create a repository for this).

3. If you want to move to docker or what have you, it's trivial to install a debian package in a container.

But let's be real, you never even will have to do any of the above because it's a personal blog, and it'll probably scale to the world population on a $5 VPS, especially if you slap cloudflare in front.

[+] nrmitchi|5 years ago|reply
Kubernetes for running a blog (or multiple) is fantastic... if you are already using Kubernetes for other things.

If it's for just the blog, and the only goal is to run the blog, then it is (almost definitely) overkill.

If you already have a cluster up, or have a bunch of other projects already running on the cluster, then Kubernetes is likely the easiest way to run a blog.

[+] lyschoening|5 years ago|reply
I mean it's one Kubernetes cluster. What could it cost? 300 dollars?
[+] koeng|5 years ago|reply
I also unironically use kubernetes (in particular, k3s) to run my personal blog. I run it on a colocated server I built myself, with the hope of having a couple of colocated servers in the future that I could network together. Right now, it has proxmox running a few VMs, 3 of which are for k3s, one for HDD backups, and one for a postgres server (which, since I'm lazy, I run with Dokku. Makes backups easy). The drives all run ZFS mirrors (2x 2TB NVMe, 2x 8TB HDD).

Honestly, it's a super comfy setup, and very little maintenance. The one thing I have is a master update script that does the docker image upgrading and rollouts. I make updates all the time and pretty much don't think about it. More activation energy than a PaaS like Dokku, but worth it in the long run, I think.

[+] spion|5 years ago|reply
YAML is the biggest flaw of Kubernetes, so I'm quite excited that cdk8s is progressing very nicely: https://github.com/cdk8s-team/cdk8s

There are other solutions that are potentially better (e.g. Dhall), but cdk8s seems to have momentum and a sense for the practical stuff: it integrates easily with CDK, ships a library of simplified constructs (cdk8s-plus), makes it easy to import and convert existing manifests, etc.

[+] simo7|5 years ago|reply
Well, I wish he had expanded a bit more on the “Just do X, that’s so much simpler” section.

He's on AWS; he could have gotten all those benefits with Elastic Beanstalk or even ECS. Plus no YAML files, but actual IaC (Terraform etc.) and much better integrations with the other AWS services.

I'd personally still use EC2 for a blog, but if you're looking for a convenience/batteries-included type of thing... yes, I'd argue you're better off just doing X instead.

[+] erichmond|5 years ago|reply
Another thing worth calling out, there's nothing wrong with over-engineering solutions in pursuit of knowledge.

Especially when it's just a personal project.

[+] callamdelaney|5 years ago|reply
Okay, but when Kubernetes breaks, it's usually non-trivial to understand why. I've spent more time than I'd like diagnosing Kubernetes networking and orchestration issues, time which could have been spent building features.
[+] unknown2374|5 years ago|reply
50 lines of configuration for a blog is not what I'd personally call "simple". Granted, GUIs (e.g. the AWS console, as mentioned in the blog) are not as easy as a config file, but the comparison should be against other blog deployment programs, like Hugo/Jekyll/Ghost, not... the AWS console.
[+] revskill|5 years ago|reply
What clicked for me about Kubernetes is not orchestration or zero downtime or scaling... I can just use Kubernetes on one node for all of my services.

What matters is the declarative approach of Kubernetes. It's like the first time I learned about React: declarative rendering/deployment based on state!
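That declarative flavor, as a minimal Deployment sketch (the names here are illustrative; only the image is borrowed from elsewhere in the thread). You state the desired state, and the controllers reconcile the cluster toward it:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: blog
spec:
  replicas: 1              # desired state, not an imperative command
  selector:
    matchLabels:
      app: blog
  template:
    metadata:
      labels:
        app: blog
    spec:
      containers:
        - name: blog
          image: marcusbuffett/blog:latest
```

If the pod dies, the Deployment controller notices that the observed state (0 replicas) no longer matches the declared state (1 replica) and recreates it, much like React re-rendering to match component state.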

[+] proxysna|5 years ago|reply
My 3 droplets run my Rust, Quake, Gitea, registry, WireGuard & Pi-hole combo, Traefik, and Grafana + Prometheus, with HashiCorp Nomad & Consul. So yeah, running an orchestrator for trivial tasks is actually fun; it's a good exercise and just in general a fun thing to do.
[+] pnathan|5 years ago|reply
I run GKE for some small apps. I also use AWS S3 hosting for my personal blog. The cost difference is... non-trivial, to the point of a bad joke, if we were comparing the ability to reliably ship plaintext over the wire. But I'm not. I host a database and webapps on the k8s cluster, without adding extra EC2 nodes, RDS costs, or wrestling with AWS Lambda limitations.

I can also confidently say that having something approximating a stable web app demands a lot of serious thinking, and "a single server running Apache on Digital Ocean" does not cover that case sufficiently. You need to tolerate failure, failover, load balancing, bin-packing, etc. I used to run a small autoscaling group on EC2 for my own systems; the dang thing would fail to come up on one node very frequently, and so a number of the queries would fail. I eventually burnt it to the ground and redid it. I've never had that hassle in k8s. It's designed to succeed, in a way the "box of parts" approach isn't.

Boxes of parts are useful. For a complexity-sensitive and thoughtful infrastructure engineer, something like the old Synapse/Nerve[1] system, with your apps distributed across some 5-20 machines and a monitor lease to spawn new ones on failure, would probably approximate Kubernetes for a few years, until you have to do something fancypants. You've still reimplemented part of Kubernetes, though... The other angle is that boxes of parts can go in wildly weird directions... if you need that.

Looking at some infrastructure these days professionally, the question is: when do we move to Kubernetes? It's not interesting or useful to the company to be maintaining our own thing or our own strange path. The only questions are around the path: how much rework needs to happen, and how much building in k8s needs to happen to get there.

GKE is a very good starting point for k8s. Strong recommend.

n.b. With respect to the cost: I consider this a professional investment / professional development expense. Spending $100-$200/month of a software engineer's salary is a reasonable price for being able to readily say I have experience in a current topic. Also, I can run my own apps. :)

[1] https://github.com/airbnb/nerve

[+] bostonvaulter2|5 years ago|reply
> For example, the entire deployment configuration for this blog is contained in this yml file

Isn't that not quite correct?

If I'm reading this correctly (which I may very well not be), isn't this a reference to a Dockerfile:

    image: marcusbuffett/blog:latest
That image might then have lots of other complexity contained within it. I wouldn't call the yml self-contained; I think that's overselling it.
[+] hsson|5 years ago|reply
I wholeheartedly agree with everything said in this blog post. The real downside to using k8s for a personal blog or hobby project is that it's so damn expensive. I did try rolling my own k8s using k3s and some Raspberry Pis, but that quickly became annoying to maintain and to get up and running in the first place.