I think it'll be quite interesting to see how the smaller players organize themselves around the multitude of cluster resource management tools emerging as a natural reaction to Kubernetes, which grew out of Google's work on Borg.
I am curious to see how long a shake-out period there will be before either a de facto stack of "compute resource" tooling emerges, or whether there will always be a highly fragmented and diverse set of ways to accomplish your goals. Just off the top of my head (and there are way more), I'm thinking of Tectonic [1], Mesosphere [2], Rocket [3], and Kismatic [4] as a few examples.
As a technologist and a planner, it's been challenging to see far enough into the future to decide which tools to devote myself to learning at this point. I do think we're certainly in a "post-public cloud" timeline, where we're getting good enough (or will be in 6-12 months) at abstracting virtualization right up to a millimeter or two below the application layer of our stacks. How we choose to do so currently seems to be up in the air.
In my mind, this opens up the possibility of compute as a resource much wider than had previously been possible. We'll be less reliant upon Azure's, AWS's, and GCP's mixture of PaaS and IaaS and much more interested in compute as a resource, likely from bare-metal or private cloud providers.
I'm looking forward to the increased efficiency (in both compute power and cost) and security available in moving from application-level virtualization to operating-system-level virtualization.
Disclosure: I work at Google and was a co-founder of the Kubernetes project.
I think your observations are interesting. From my (somewhat biased) viewpoint, I don't think we will enter a 'post-cloud' world. There are very real efficiency gains from running at public-cloud-provider scale, and the economics you see right now are not what I would consider 'steady state'. Beyond that, the systems we are introducing with Kubernetes are focused on offering high levels of dynamism. They will ultimately fit your workload precisely to the amount of compute infrastructure you need, hopefully saving you quite a lot of money vs. provisioning for peak. It will make a lot of sense to lease the amount of 'logical infrastructure' you need vs. provisioning static physical infrastructure.
There are, however, legitimate advantages to our customers in being able to pick their providers and change providers as their needs change. We see the move to high levels of portability as a great way to keep ourselves and other providers honest.
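The "fit your workload precisely" idea can be sketched with a toy autoscaler in Python. All numbers and names here are hypothetical, purely to illustrate the economics of dynamic sizing versus provisioning statically for peak:

```python
import math

def replicas_needed(requests_per_sec, capacity_per_replica, headroom=1.2):
    """Size the fleet to observed load plus a safety margin, rather than
    statically provisioning for the expected peak."""
    return max(1, math.ceil(requests_per_sec * headroom / capacity_per_replica))

# Hypothetical hourly traffic for one day (requests/sec); the peak is 900.
traffic = [120, 80, 60, 50, 70, 150, 300, 500, 700, 900, 850, 800,
           750, 700, 650, 600, 550, 500, 450, 400, 350, 300, 250, 180]

capacity = 100  # requests/sec a single replica can serve (assumed)

# Resize every hour vs. hold the peak-sized fleet all day.
dynamic = [replicas_needed(t, capacity) for t in traffic]
static_peak = replicas_needed(max(traffic), capacity) * len(traffic)

print(sum(dynamic), "replica-hours dynamic vs", static_peak, "sized for peak")
```

Under these made-up numbers the dynamically sized fleet uses half the replica-hours of peak provisioning, which is the "saving you quite a lot of money vs provisioning for peak" claim in miniature.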
Yeah, I think that, sadly, there is going to be a bit of an inevitable equivalent of the Unix wars of the '80s. The sooner we can reach a standard place, the better it's going to be for the container community and developers more generally.
One of the reasons I pushed hard to get Kubernetes open-sourced is the hope that we could get out in front of this and allow the developer community to rally around Kubernetes as an open standard, independent of any provider or corporate agenda.
I'm also very curious which direction things will move. I think I'm less convinced than you are that it'll be away from AWS and the like, though; they're innovating at least as fast as the open-source container-cluster tools (at least it seems that way to me).
I can imagine a future where it gets easier and more common to build an arbitrarily complex backend by just hooking together AWS services, using Lambda (or something that evolves from it) to write all your custom business logic without ever thinking about a server, VM, or container. I'm working on a greenfield app now and very seriously considered this route, but we ended up deciding the uncertainty wasn't quite worth it versus doing it the way we know. It feels very close to the tipping point to me, though.
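That Lambda-style model amounts to writing small stateless handlers that the platform invokes per event. A minimal sketch, loosely modeled on the shape of AWS Lambda's Python handlers (the event fields and the business logic are invented for illustration):

```python
import json

def handler(event, context=None):
    """A Lambda-style function: stateless business logic the platform
    invokes per event, with no server, VM, or container to manage."""
    order = json.loads(event["body"])
    total = sum(item["qty"] * item["unit_price"] for item in order["items"])
    return {"statusCode": 200, "body": json.dumps({"total": total})}

# The platform would call this on, e.g., an HTTP request or a queue message:
event = {"body": json.dumps({"items": [
    {"qty": 2, "unit_price": 4.50},
    {"qty": 1, "unit_price": 10.00},
]})}
response = handler(event)
```

Everything outside the function body (routing, scaling, retries) would be the platform's problem, which is the appeal of the approach.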
I was at Google from 2006-2012 and like most Googlers, used Borg extensively. Since leaving, I've been generally impressed by the AWS ecosystem, but have been sorely missing Borg-like functionality. It's felt like the rest of the industry is about 10 years behind where Google is.
I think the crucial question for us is going to be adoption and support within the AWS ecosystem. It checks out (to me, at least) as the technically superior option, but Amazon clearly wants to compete in this space as well, and they have the home-turf advantage.
Like @brendandburns, I just want the best technology to win and become the standard. It would be a shame if the Amazon/Google rivalry got in the way of something that important.
Can someone from Amazon chime in on this? Is there anything the Google team could do that would make Kubernetes a neutral project that Amazon would support? I feel that there's a ton of raw knowledge that Google engineers have accumulated on cluster management, and Kubernetes is an opportunity for that not to go to waste.
Kubernetes is going to become the standard API for container orchestration only because there are no other tools out there trying to do as much. There was a vacuum around container orchestration tooling, and Google got there first. Kubernetes components can be swapped out for other community-driven efforts; take Mesos as an example, which can be used to replace the default k8s scheduler. With k8s you can avoid lock-in with different cloud providers. I think Google is hoping that we end up on their cloud platform, but it's nice to see that it is being built from the ground up to be used with other cloud platforms.
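The swappable-scheduler point can be illustrated with a toy placement loop in which the scheduling policy is just a function you hand in. This is a drastic simplification of how an alternative scheduler (Mesos, say) slots in for the default one; all names and numbers are hypothetical:

```python
def spread(pod_cpu, nodes):
    """Default-style policy: pick the node with the most free CPU."""
    fits = {n: free for n, free in nodes.items() if free >= pod_cpu}
    return max(fits, key=fits.get) if fits else None

def binpack(pod_cpu, nodes):
    """Alternative policy: pick the tightest node that still fits."""
    fits = {n: free for n, free in nodes.items() if free >= pod_cpu}
    return min(fits, key=fits.get) if fits else None

def schedule(pods, nodes, policy):
    """The surrounding machinery stays the same; only the policy swaps."""
    placement = {}
    for pod, cpu in pods:
        node = policy(cpu, nodes)
        if node is not None:
            nodes[node] -= cpu   # reserve the capacity on the chosen node
            placement[pod] = node
    return placement

nodes = {"node-a": 4.0, "node-b": 2.0}           # free CPU per node
pods = [("web", 1.0), ("db", 2.0), ("cache", 1.0)]  # CPU each pod requests
placement = schedule(pods, dict(nodes), policy=spread)
```

Swapping `policy=spread` for `policy=binpack` changes where pods land without touching `schedule` at all, which is the kind of decoupling the pluggable-scheduler design is after.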
I'm really not sold on this yet. We've done a number of projects testing and using Kubernetes, CoreOS, Mesos and, most recently, Docker's Swarm. It's been interesting to see how and why this technology space is evolving, but a few general thoughts:
1) The concept of container as primitive, especially the thorough implementation that Docker put together, is extraordinarily powerful.
2) The Swarm approach, which provides an API matching the Docker API, is a really neat idea, even if it lacks the HA and scheduling functions to really make things work well.
3) I think the next evolution is really to iron out the network stack here. Kubernetes needs flannel in most circumstances, and the process is not seamless or as simple as Docker's.
I'd also love to know the split between this, Omega, and Kubernetes at Google.
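The Swarm point above, an endpoint that speaks the same API as a single Docker engine, is essentially an interface-compatibility argument: client code written against one daemon keeps working against a cluster. A minimal sketch with invented classes, not the real Docker API:

```python
class DockerEngine:
    """One daemon: runs the container locally."""
    def __init__(self, name):
        self.name = name

    def run(self, image):
        return f"{image} on {self.name}"

class Swarm:
    """Exposes the same run() surface as a single engine, but fans work
    out across members, so existing client code needs no changes."""
    def __init__(self, engines):
        self.engines = list(engines)
        self._next = 0

    def run(self, image):
        engine = self.engines[self._next % len(self.engines)]
        self._next += 1  # naive round-robin "scheduling"
        return engine.run(image)

def deploy(endpoint, image):
    # Client code written against a single Docker engine...
    return endpoint.run(image)

# ...works unchanged when the endpoint is a swarm of engines.
single = deploy(DockerEngine("host-1"), "nginx")
clustered = deploy(Swarm([DockerEngine("host-1"), DockerEngine("host-2")]), "nginx")
```

The real value (and the real difficulty, per points 2 and 3 above) is in what sits behind that unchanged interface: HA, scheduling, and networking.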
[1] https://coreos.com/blog/announcing-tectonic/
[2] https://github.com/mesosphere
[3] https://github.com/coreos/rkt
[4] https://github.com/kismatic
jganetsk:
Think of a cluster of VMs running CoreOS + Tectonic as an alternative to Google Container Engine.
Kismatic apparently calls itself "the Kubernetes Company."
Disclaimer: I work on Google Cloud but not Kubernetes or GKE.
dcosson: Either way, it's definitely an exciting time.