item 17415284

Kubernetes 1.11 includes In-Cluster Service Load Balancing and CoreDNS

131 points| rbanffy | 7 years ago |kubernetes.io | reply

49 comments

[+] ponyous|7 years ago|reply
Is K8S ever suitable for a small business? K8S is only supported on expensive hosts (AWS, GCE, Azure); is it possible to spin it up on Hetzner/OVH or another provider that is cheaper than AWS, possibly even on my own hardware? How hard would it be to build the simplest reliable prod environment on bare metal with K8S?

I'm currently running a one-(tech-)man SaaS with minimal profits. Here are my goals (1, 2, and 3 are essential):

1. Be cheap - It's a side project, I don't need to sacrifice money for speed as I enjoy what I'm doing.

2. Automatic failover (This should cover majority of downtime imho)

3. Backups

4. Automatic scalability (CPU over 95% for last 2 minutes or smth like that? add new containers pls)

5. Load balancing (I'm using Erlang, so this can be handled at the application level if nodes can see each other)

6. If gitlab autodevops can work out of the box that would be fantastic

Unrelated to the above:

- How are K8S backups handled? I spoke with a friend (a K8S fanboy) and he said you have several containers spawned of the DB and they are all interconnected; if one of them fails, another one has the data. This seems stupid to me: what if the whole zone fails and all the DB containers are shut off?

- The same friend asked me: "What restarts your services if they fail?" "init.d." "And what if the machine fails?" "Well, I'd have to restart it manually." "You see, you should use K8S orchestrator." "What restarts K8S orchestrator if it fails?" <Silence>

I have a feeling a big part of the community is like that: "super cool feature of K8s, you should use it", but with no connection to the rest of the context.

P.S. The conversation was in another language, so I'm not sure I used the right terms, e.g. "K8S orchestrator".

[+] Birch-san|7 years ago|reply
I recommend Rancher for the small business that wants to roll their own Kubernetes for free. Rancher 2 is a bit underdocumented right now, but we had great success with Rancher 1.

Your bare metal hosts can be provisioned as Rancher hosts, which are dumb slaves managed by a Rancher server.

From the Rancher server, you can trivially express "I want this Docker container deployed across n Rancher hosts" or "I want 1 instance of this Docker container on every Rancher host tagged 'has public IP'".

You can group your Rancher hosts into Sandbox and Prod Rancher environments. You can easily install Rancher's load balancer service on them, or mount network storage, or register secrets upon them (like private Docker registry keys).

It also gives you health checks, host monitoring, and zero downtime redeploys. Super easy to use from the UI or CLI. Easy to install, too.

[+] inscrutable|7 years ago|reply
If you're going the kube route, I would go with GKE with pre-emptible nodes. You can have a single micro base node ($25/month), and then an auto-scaling, pre-emptible pool for a discount of 80%/month vs spot prices on AWS/GCP. There's not a lot for you to do once you set it up. With pre-emptible the nodes will on average be unavailable 5-10% of the month, but typically not at the same time.

There are cheaper baseload alternatives appearing, but it'll probably be 12 months until they're stable.

https://labs.ovh.com/kubernetes-k8s

https://github.com/hetznercloud/hcloud-cloud-controller-mana...

[+] meta_AU|7 years ago|reply
Running K8S in other environments isn't a difficult job, but you do need a certain level of experience running systems. You need to set up etcd; in prod you run 3 or 5 of them, and there are a couple of topologies for etcd clustering and peer discovery (you will need to RTFM). etcd holds all your K8S state, so you need to set up backups for it.

Each node you are going to include in the cluster needs Docker and the kubelet running. You will need to start the kubelet with your init system (systemd, etc.).

Then all you need is to bootstrap a control plane. The control plane has 3 components. The API server talks to etcd; everything else talks only to the API server. The API server is stateless and can (should) have multiple instances running. The other two parts, the Controller Manager and the Scheduler, are also stateless. The tool Bootkube[1] can generate all the configuration and perform the bootstrapping for you.

To answer your dot points: all but 4 are easy enough using the Kubernetes docs and some systems/networking knowledge. Point 4, assuming you mean starting new machines based on load, will require you to check which systems are supported by the autoscaling system and potentially add an integration for your environment. There is the horizontal pod autoscaler, which will run more instances of your service when needed, and the cluster autoscaler[2], which will start more machines when it can't fit any more instances of your service on the current number of machines.
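Point 4, for instance, is only a few lines of YAML once the autoscalers are running. A minimal HorizontalPodAutoscaler sketch (the Deployment name `my-service` and the thresholds are made-up placeholders):

```yaml
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: my-service
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-service                    # hypothetical Deployment to scale
  minReplicas: 2
  maxReplicas: 10
  targetCPUUtilizationPercentage: 95    # add pods when average CPU exceeds 95%
```

The cluster autoscaler then adds machines when the new pods no longer fit on the existing nodes.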

Edit: There are also tools like Patroni, Stolon, and postgres operators that will assist with your DB (postgres in this case) management. Scaling, HA, backups, etc.

1 - https://github.com/kubernetes-incubator/bootkube

2 - https://github.com/kubernetes/autoscaler/blob/master/cluster...

[+] bryanlarsen|7 years ago|reply
- It's certainly possible to run Kubernetes on bare metal. We do it. I recommend kubespray to set things up; there are a bunch of other ways.

The two big things you get on the cloud that you need plugins for on bare metal are networking and storage. Networking plugins are basically fire-and-forget these days. Storage is definitely not at that stage yet. You can use a plugin that provides redundancy and failover, like Rook/CephFS or Heketi/Gluster, or you can use local storage and handle redundancy/failover much as you would without Kubernetes.

- Kubernetes is like any other abstraction. It complicates things and adds another layer to manage and understand, but it lets you manage disparate server apps similarly. So it really only makes sense if you have a good number of disparate server apps to manage.

- If you're looking for 5 nines or something like that: as a small operator you're more likely to screw things up through PEBKAC or misconfiguration than via a hardware failure. So you probably want two clusters with manual failover between them; that way you can switch to one cluster while you operate on the other. The second cluster could be a scaled-down GKE cluster, but that's still a bunch of complexity and some expense. You're talking about zonal failures, so you'd need two clusters to protect against that anyway, so...

- You probably need several layers of backups: an etcd backup, Ark, and your persistent storage backups. Ark handles persistent storage backups on the cloud, but if you're on bare metal, you need to do that yourself the old-fashioned way.
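For the etcd layer, one sketch is a CronJob that runs `etcdctl snapshot save` on a schedule (everything here is an assumption to adapt: the kubeadm-style cert paths, the etcd image version, and the hostPath destination):

```yaml
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: etcd-backup
  namespace: kube-system
spec:
  schedule: "0 */6 * * *"              # snapshot every six hours
  jobTemplate:
    spec:
      template:
        spec:
          hostNetwork: true            # reach etcd on the master's loopback
          nodeSelector:
            node-role.kubernetes.io/master: ""
          tolerations:
          - key: node-role.kubernetes.io/master
            effect: NoSchedule
          restartPolicy: OnFailure
          containers:
          - name: snapshot
            image: quay.io/coreos/etcd:v3.2.18    # match your etcd version
            command:
            - /bin/sh
            - -c
            - ETCDCTL_API=3 etcdctl --endpoints=https://127.0.0.1:2379
              --cacert=/etc/kubernetes/pki/etcd/ca.crt
              --cert=/etc/kubernetes/pki/etcd/healthcheck-client.crt
              --key=/etc/kubernetes/pki/etcd/healthcheck-client.key
              snapshot save /backup/etcd-$(date +%s).db
            volumeMounts:
            - {name: pki, mountPath: /etc/kubernetes/pki/etcd, readOnly: true}
            - {name: backup, mountPath: /backup}
          volumes:
          - name: pki
            hostPath: {path: /etc/kubernetes/pki/etcd}
          - name: backup
            hostPath: {path: /var/backups/etcd}
```

You'd still want to ship the snapshot files off the node; a hostPath backup that dies with the machine isn't much of a backup.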

[+] dboreham|7 years ago|reply
>what if the whole zone fails and all DB containers are shut off?

I agree that the container clustering world tends to focus on stateless scenarios, conveniently ignoring the "hard stuff". However, that doesn't mean it can't be done. k8s can be told to maintain persistent volumes that can be claimed by containers. This mechanism can be (has been) used to implement "real" DB HA. e.g. see this talk : https://www.youtube.com/watch?v=Zn1vd7sQ_bc

[+] Thomaschaaf|7 years ago|reply
Getting all of the above requires a real cloud provider that offers more than just servers, so you need to choose among the cloud providers. DigitalOcean is another provider offering a managed k8s. GCE gives you USD 600 in credit, which is valuable if you want to learn new tech.

I actually switched all our servers from Hetzner to AWS because for our needs it was cheaper and I could spend my time on things I care about. For me, K8s only handles non-persistent data; all other data is stored outside of it (a managed DB like RDS, or S3). You only need to back up the masters, and if you use something like GKE that is managed for you & free.

[+] jacques_chester|7 years ago|reply
> K8S is only supported on expensive hosts (AWS, GCE, Azure), is it possible to spin it up on hetzner/ovh or another provider that is cheaper than AWS, possibly even on my own hardware?

Of course. There are literally dozens of distributions and repackaged versions now[0]. The advantage of GKE et al. is that they keep them up to date -- doing that yourself is a bit harder than it looks.

> Be cheap

It very much depends on what "cheap" means and what your needs are. GKE with a cluster autoscaler is fairly inexpensive for what you get. But if your time is free and you like to tinker, self-managing on a VPS might be for you.

Be prepared to invest a fair amount of time. Kubernetes is simple in each part, but there are lots of parts to learn.

> Automatic failover (This should cover majority of downtime imho)

This is more of an app concern. Kubernetes itself won't really do this for you. Something like Istio will probably also be necessary to help with traffic management.

> Backups

Very much up to the distribution / hosting environment.

> Automatic scalability (CPU over 95% for last 2 minutes or smth like that? add new containers pls)

Horizontal Pod Autoscaler; a Vertical Pod Autoscaler is under development. Other autoscalers are being developed for more specialised uses (I've worked on one) and I expect there will be some concept of pluggable autoscaling soon.

> Load balancing

Numerous solutions in this area. Typically left to the IaaS. Istio can do this.

> How are K8S backups handled? ... several containers spawned of the DB

Regarding DB backups: I would again use a hosted service if at all possible. Several containers can share a volume within a Pod or across Pods. But long-running stateful services are not yet quite the Kubernetes sweet spot.
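To illustrate the in-Pod case (a made-up example; `emptyDir` is Pod-scoped scratch space, not durable storage):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: shared-volume-demo
spec:
  containers:
  - name: writer
    image: busybox
    command: ["sh", "-c", "while true; do date >> /data/log; sleep 5; done"]
    volumeMounts:
    - name: shared
      mountPath: /data
  - name: reader
    image: busybox
    command: ["sh", "-c", "tail -F /data/log"]
    volumeMounts:
    - name: shared
      mountPath: /data
  volumes:
  - name: shared
    emptyDir: {}     # lives as long as the Pod; gone when the Pod is deleted
```

For anything that has to survive the Pod, you'd swap the `emptyDir` for a PersistentVolumeClaim.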

> What restarts K8S orchestrator if it fails?

Again, numerous solutions. I work for Pivotal, our solution in PKS is to run everything with BOSH. Something like GKE uses Google's own monitoring and recovery systems.

I should add here that I don't think GKE is necessarily the "best" of the public hosted options. Just the only one I've used in anger so far. I actually prefer to do development work on minikube whenever I can. If you go down that route, shop around!

[0] https://kubernetes.io/partners/

[+] numbsafari|7 years ago|reply
It depends on how you define small business. You can have a SaaS app with a lot of users and revenue, and a small team and a couple servers and maybe not benefit from k8s.

Or you can have a startup with a small team and small number of users, but the legal and market requirements may dictate things like availability requirements for which k8s can add a lot of value that would normally cost you hiring multiple ops folks and writing a lot of boilerplate code.

For the latter kind of business, k8s is probably a good investment that is worth the added costs of running on a more expensive cloud provider. You’ll probably have to stick to them for security/compliance reasons anyway.

For the former type of company... k8s is probably only worth chasing if you are already seeing scale, or if you see yourself becoming more like the latter. In that case, I would first test my app on something like GKE to see what it means from that perspective. Then I would look at deploying on my preferred hosting provider.

If you can’t get your app to run well/how you like on GKE, it’s probably not worth planning a cluster deployment just to find out the same.

[+] rubenbe|7 years ago|reply
1) If you're willing to do some custom setup, it's quite easy to spin up Kubernetes on bare metal or an OVH VPS using kubespray or kubeadm.

3) IMHO it's worth it even for small setups, because you buy into a pseudo-standard ecosystem. Backups, for example, can be handled by stash[0] instead of having to (partially) write your own backup solution.

4) I haven't tried this on OVH, but as it's an OpenStack-based environment, certain items are well integrated, e.g. persistent volume handling [1]

[0] https://github.com/appscode/stash [1] https://kubernetes.io/docs/concepts/storage/storage-classes/...

[+] rbanffy|7 years ago|reply
I'm a big fan of the Google App Engine standard runtime. It's not that cheap if you have lots of traffic, and it's very opinionated, but a lot can be squeezed out of the free tier, and for a proof of concept it's great.

My wife's company website ran on it for years.

[+] Jhsto|7 years ago|reply
Shameless plug: https://www.juusohaavisto.com/microservices.html

To answer your question briefly: Kubernetes is hard to keep up to date, but it aims to improve on that later. Kubernetes was largely created to solve problems scaling physical hardware; I would not set up Kubernetes unless you run into a problem where you are growing out of dedicated host offerings and thus need to update your apps across multiple such devices (easier options exist; look at LXC clustering). You also need to define "reliable": if it is just your app or hardware crashing, you can probably get away with two dedicated machines in different colocations.

[+] nevalau|7 years ago|reply
Kontena Pharos (https://github.com/kontena/pharos-cluster) aims to make installation of Kubernetes as easy as possible. Works on any cloud or on-premises, also ARM64 supported.

It handles HA setup, etcd member replacements, and hardened configuration for enhanced security (it follows the NIST SP 800-190 recommendations). You can easily extend the installation with addons of your own or from any Kubernetes ecosystem project.

(Disclaimer: I'm one of the contributors)

[+] playworker|7 years ago|reply
FWIW I've found MAAS and Juju the easiest way to get a Kubernetes cluster up and running on bare metal - I've not tried every other method though so YMMV :)
[+] merb|7 years ago|reply
Yes it is.

I do it with kubeadm and MetalLB (for my public IP). I'm not sure how well IP failover will work on Hetzner/OVH, but on your own hardware in your own network it is not a problem.

(K8s itself does not need backups; only your etcd and configs should be backed up, and it is unlikely that you lose your whole control plane in one step.)

[+] rmetzler|7 years ago|reply
You could run Kubernetes on a Raspberry Pi cluster using the Hypriot distribution.
[+] sandGorgon|7 years ago|reply
Docker Swarm - you get 80% of the power of Kubernetes and it's 10x simpler.
[+] meta_AU|7 years ago|reply
I've always liked how K8S release notes come with a series of blog posts doing a deep dive into each feature. But when I think back on it, because they come out after the release announcement I end up forgetting about them until the next release.

I wonder if releasing the posts in the period leading up to the release would be better, or if that would just lead to artificial delays on the actual release.

[+] doctoboggan|7 years ago|reply
Tangentially related, but hopefully someone on HN can help me out.

I want to try using Kubernetes with my current project. I've built it using Docker and docker-compose. My compose file currently has four services (web, db, dbadmin, and reverse proxy). I am using a flask app for the web container, postgres for db, pgadmin for the dbadmin, and traefik for the reverse proxy.

I am currently running the above stack on a single host using `docker-compose up`. I like how docker-compose creates a network for me where I can access other containers by their container name.

For my own education, I would like to try deploying this with Kubernetes. Specifically I am interested in learning how to spin up multiple web containers that all talk to the same postgres container.

Can someone let me know if I am thinking about Kubernetes correctly, and if so, point me to some good resources to learn how to deploy my stack with it?
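From my reading so far, that compose file maps roughly onto a Deployment for the web tier plus a Service that gives postgres a stable in-cluster DNS name, much like compose's container-name networking (a sketch; the images, labels, and ports are placeholders):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3                   # several web containers...
  selector:
    matchLabels: {app: web}
  template:
    metadata:
      labels: {app: web}
    spec:
      containers:
      - name: flask
        image: myregistry/flask-app:latest   # hypothetical image
        env:
        - name: DATABASE_HOST
          value: db             # ...all resolving the Service below
---
apiVersion: v1
kind: Service
metadata:
  name: db                      # in-cluster DNS name, like the compose container name
spec:
  selector: {app: postgres}
  ports:
  - port: 5432
```

Corrections welcome if I'm holding it wrong.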

[+] nateguchi|7 years ago|reply
Asked this on the other posting too, but does anyone know when Kubernetes will stabilise its IPv6 support? Is anyone using or planning to use it in production?
[+] bogomipz|7 years ago|reply
Does the "IPVS-based in-cluster service load balancing" replace the kube-proxy iptables load balancing, then?
[+] sisk|7 years ago|reply
It's an alternative, selectable by changing the proxy-mode flag on kube-proxy. If the iptables implementation is working for you, I wouldn't necessarily jump to it. Note, though, that iptables takes a big performance hit once you get into the hundreds to thousands of overlay IPs, and you'll notice it if you manage a mid-to-large cluster. Certainly worth playing with (throw it on a node or two for now), or worth making the default for a new cluster, but as with any recently stabilised option: if it ain't broke...
[+] hanikesn|7 years ago|reply
Yes, it does. It's still kube-proxy, but it will use IPVS instead of iptables.
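Concretely, the switch is `--proxy-mode=ipvs` on kube-proxy, or the equivalent line in its config file (a sketch; the nodes also need the IPVS kernel modules loaded):

```yaml
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs        # leave empty ("") to fall back to iptables
```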