cmcluck's comments

cmcluck | 9 years ago | on: Ask HN: Product Managers, how did you get there and what's your background?

Context: Was product guy at Google (built a few cloud products, did some work in the open source ecosystem), now CEO of a startup.

Background: I don't think I picked product management, it sort of picked me. When I was a really junior engineer I worked in a small team environment with much more senior engineers. We didn't have product management support, so someone needed to talk to the customer and figure out what they needed, and then later have the hard conversation when we were slipping our date. That ended up being me. Someone needed to document what we were doing, that was me. At the end of the day when we had little management support, someone had to represent the needs of the team and hold the team together during an aggressive corporate downsizing. The team looked to me to do that. I sort of drifted into this role without ever being asked to do it. I loved coding, but turns out I liked solving business problems just as much. My path to proper product management went through program management at Microsoft which was a bit of a half-way house. Good customer passion but more focused on execution than on the health of the business.

This doesn't directly answer your question, but I hope it is helpful. What are the attributes I have seen of successful PMs?

* Have good technical instincts. You don't necessarily have to code well, but you need to smell credible to engineers and not have them flip the bozo bit on you. I watched a product guy argue that we should figure out how to reduce latency between global data centers, and then someone kindly pointed out that the speed of light was the problem at hand, and we really couldn't do much about it. Don't be that guy; you will never come back from that point.

* Champion the customer. Product managers have to really 'get' their product deeply: understand it, use it, live with it. They need to be able to see it the way a customer sees it and represent the heisen-customer to the team. The primary work product of the PM is the PRD (product requirements document), and the customer should shine through.

* Own your business. It isn't enough to build neat technology that people love; if no one knows about it or you can't sell it, you are wasting your time. Know your sales people, know your marketing strategy, understand the pricing model. Make sure they all get what the product does and is good for.

* Be the janitor before you try to be the CEO. There are a million things a team needs to do, and the product manager needs to fill the gaps. Win by doing the things the engineers can't or don't want to do, but don't 'wallpaper' over problems with the team structure. Remember, however, that doing a gap job well indefinitely gets in the way of creating high functioning teams. You need to work your way out of a gap filling job.

* Knowledge is currency. To lead, you have to know and see things the engineers don't. Understand your competition, use their products, speak to a lot of customers, bring that knowledge back to the team, and they will start to trust you.

* Stay out of execution: you are not a project manager. The eng function should not be babied; they need to hire their own project managers to run their scrums, organize execution, etc. If things go well for your product you are going to be talking to customers and negotiating partnerships just as the team starts to hit an inflection point in execution; you can't afford to be trapped in the office running their processes.

Hope that helps.

cmcluck | 9 years ago | on: Kubernetes Founders Launched Startup Heptio to Bring Containers to Enterprise

A fair point. One thing worth remembering is that this was a point-in-time thing. I have seen a lot of movement and some very positive signals around convergence in OpenStack, and a real focus on the end user community. When I was doing the digging things felt different, and there is a decent chance that, were OpenStack then where it is now, I would have taken a different position.

The mission of CNCF is the promotion of 'cloud native technologies' -- specifically container packaged, dynamically scheduled, micro-services oriented workloads. It isn't about picking winners; it is about establishing a safe space for innovation and bringing the collective communities to bear. We have legitimately taken some time in getting the identity of the foundation established, but I feel like Dan Kohn (our new ED) is doing super work in creating a collaborative space for new projects.

cmcluck | 9 years ago | on: Kubernetes Founders Launched Startup Heptio to Bring Containers to Enterprise

[disclosure: Craig -- CEO] To name a few:

1. Support and services. This is a really important factor for most enterprises; they want to know they have expert staff on call who have a decent shot at getting a change they need upstreamed.

2. Consolidation and operations at scale. Turns out Kubernetes is being deployed as a 'devops tool' today and people create lots of teensy clusters. We built it to be super flexible and work either for smaller clusters or at scale (using namespaces, etc). There are advantages to running larger expert-operated clusters (Borg style) and we want to help enterprises get there with consolidation and operations tools.

3. Integration tech. There are oceans of 'legacy' systems that basically run enterprise today and need to be integrated with.

4. Help for non-Google environments. Despite the fact that there are tons of interested commercial parties in K8s, except for a few awesome community folks, companies aren't putting a ton of resource into AWS, OpenStack, etc, or doing the unglamorous testing work. Would love to help get the cloud provider model sustainable, and put effort into things like testing and better deployment tech.
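The consolidation point above leans on namespaces, which are simple enough to sketch. This is a purely illustrative toy in Python (not Kubernetes code): namespaces partition one large shared cluster into virtual sub-clusters, so object names only have to be unique within a namespace and many teams can share one big cluster without colliding.

```python
# Toy illustration of namespace-scoped naming: objects live in one shared
# cluster, but each (namespace, name) pair is an independent identity, so
# two teams can both have an object called "web" without colliding.

class Cluster:
    def __init__(self):
        self.objects = {}  # (namespace, name) -> object

    def create(self, namespace, name, obj):
        key = (namespace, name)
        if key in self.objects:
            raise ValueError(f"{name!r} already exists in namespace {namespace!r}")
        self.objects[key] = obj

    def get(self, namespace, name):
        return self.objects[(namespace, name)]

cluster = Cluster()
cluster.create("team-a", "web", {"replicas": 3})
cluster.create("team-b", "web", {"replicas": 5})  # no collision: different namespace
print(cluster.get("team-a", "web"))  # {'replicas': 3}
```

This is the mechanism that makes a few large, expert-operated clusters viable as an alternative to lots of teensy per-team clusters.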

cmcluck | 9 years ago | on: Kubernetes Founders Launched Startup Heptio to Bring Containers to Enterprise

Story goes like this: Joe and I did Google Compute Engine together. Once that was on rails we started looking at the gap between GAE and GCE. Joe found Docker way back before it was a household name, and we started thinking hard about the 'compute continuum': what, beyond the container format, was needed. Brendan, in the meantime, was working on something that looked like cloud formation that we were also playing with. The three of us had instantly good chemistry, and he started looking at Docker too.

To raise awareness of Docker inside Google, I asked Brendan to pull together a demo for our all hands. In a nutshell, what he produced was the bones of Kubernetes. I remember looking at it and having a moment: he had built a mini Borg cell on VMs, basically making Borg a devops-accessible tool, not just a monolithic clustering tool (like Mesos was). When I saw that, the product ramifications were obvious; I called Joe over to look and the rest is history.

cmcluck | 9 years ago | on: Kubernetes Founders Launched Startup Heptio to Bring Containers to Enterprise

(This is Craig -- CEO) Ouch :). I was never much good at naming. Take 'Kubernetes' as an example. I still get razzed about it.

I would describe this as a case of 'domain based naming' (i.e. we could get the domain name, it was uncontested space).

I hope to create something positive out of it, if we do it up right hopefully the company character will dominate the name.

cmcluck | 9 years ago | on: Kubernetes Founders Launched Startup Heptio to Bring Containers to Enterprise

[This is Craig -- CEO of this new venture, co-founder of K8s (with Joe and Brendan), and person who started CNCF (with Jim Zemlin, and a bunch of community wonks from big tech)]

It is funny you say this. I spent a lot of time looking around the community at what existed before starting CNCF, and agonized over this. We needed to take K8s to a foundation so that it wouldn't be a 'Google project'. Google was actually the best steward of the tech you could imagine, because the plan was always to make k8s ubiquitous and just win on quality of infrastructure, but the community had no way to know that.

I looked at OpenStack hard, and liked the energy and enthusiasm, but really worried about (1) the balkanization that was emerging with no 'true north' -- it just didn't have technical taste, (2) the tragedy of the commons -- most vendors were focused on their own interests and neglected the end users, and (3) a lack of coherence.

When designing CNCF I tried hard to work through this by creating a better foundation structure:

(1) The business board has very limited authority over projects, hopefully making sure we avoid it being a pay-for-play affair.

(2) We made provisions for little companies to get top-level seats based on community contributions (ditto).

(3) We created an empowered end user group with equal authority to any other body, to make sure real users' interests are promoted.

(4) We added a TOC (technical oversight committee), community elected and the most empowered group, to establish true north -- the idea is they need to champion the projects and establish technical 'taste' (e.g. Brian Grant from Google, the guy who drives consistency, sat on this group -- not me, the guy who had access to the purse strings and who was focused on the business).

(side note: i picked this structure because i was geeking out on government structures at the time, and figured that separation of powers yields more sustainable administration)

cmcluck | 9 years ago | on: Talk of a Split from Docker

Disclosure: I am one of the Google people who founded the k8s project. Product guy, not engineer though.

We were really concerned about 'writing letters from the future' to the development community: trying to sell people on a set of solutions that were designed for internet scale problems and that don't necessarily apply to the real problems that engineers have.

I spent a lot of time early on trying to figure out whether we wanted to compete with Docker, or embrace and support Docker. It was a pretty easy decision to make. We knew that Docker solved problems that we didn't have in the development domain, and Docker provided a neat set of experiences. We had 'better' container technology in the market (anyone remember LMCTFY?) but the magic was in how it was exposed to engineers, and Docker caught lightning in a bottle. Creating a composable file system and delivering a really strong tool chain are the two obvious big things, but there were others. I remember saying 'we will end up with a Betamax to Docker's VHS' and I believe that remains true.

Having said that, there are a number of things that weren't obvious to people who weren't running containers in production and at scale. It is not obvious how judiciously scheduled containers can drive your resource utilization substantially north of 50% for mixed workloads. It isn't obvious how much trouble you can get into with port mapping solutions at scale if you make the wrong decisions around network design. It isn't obvious that label-based solutions are the only practical way to add semantic meaning to running systems. It isn't obvious that you need to decentralize the control model (a la replication managers) to accomplish better scale and robustness. It isn't obvious that containers are most powerful when you add co-deployment with sidecar modules and use them as resource isolation frameworks (pods vs discrete containers), etc, etc, etc. The community would have got there eventually; we just figured we could help them get there quicker given our work, having been burned by missing this the first time around with Borg.
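The decentralized control model mentioned above can be sketched as a level-triggered reconciliation loop, the pattern replication managers embody: each controller independently compares desired state against observed state and converges toward it, rather than a central brain issuing one-shot imperative commands. A minimal hypothetical sketch in Python (the names and structures are illustrative, not the Kubernetes API):

```python
# Sketch of a replication-manager-style reconciliation step: compare the
# desired replica count against the pods actually observed, and emit the
# actions needed to converge. Run repeatedly, this self-heals: a dead pod
# simply disappears from the observed list and gets replaced next pass.

def reconcile(desired_replicas, observed_pods):
    """Return the actions needed to converge observed state to desired state."""
    actions = []
    diff = desired_replicas - len(observed_pods)
    if diff > 0:
        # scale up: create replacements (names here are purely illustrative)
        actions += [("create", f"pod-{i}") for i in range(diff)]
    elif diff < 0:
        # scale down: delete the surplus pods
        actions += [("delete", pod) for pod in observed_pods[diff:]]
    return actions

# One pod died out of three desired: the loop asks for one replacement.
print(reconcile(3, ["pod-a", "pod-b"]))  # [('create', 'pod-0')]
# Desired scaled down to one: the loop deletes the surplus.
print(reconcile(1, ["pod-a", "pod-b"]))  # [('delete', 'pod-b')]
```

Because each controller only ever compares state to state, many of them can run independently without coordinating with each other, which is where the scale and robustness win comes from.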

Our ambition was to find a way to connect a decade of experience to the community, and to work in the open with the community to build something that solved our problems as much as outside developers' problems. Omega (Borg's successor) captured some great ideas, but it certainly wasn't going the way we all hoped. Kind of a classic second system problem.

Please consider K8s a legitimate attempt to find a better way to build both internal Google systems and the next wave of cloud products in the open with the community. We are aware that we don't know everything and learned a lot by working with people like Clayton Coleman from Red Hat (and hundreds of other engineers) by building something in the open. I think k8s is far better than any system we could have built by ourselves. And in the end we only wrote a little over 50% of the system. Google has contributed, but I just don't see it as a Google system at this point.

cmcluck | 9 years ago | on: What I found wrong in Docker 1.12

Disclaimer: I work at Google and was a founder of the Kubernetes project.

In a nutshell yes. We recognized pretty early on that fear of lockin was a major influencing factor in cloud buying decisions. We saw it mostly as holding us back in cloud: customers were reluctant to bet on GCE (our first product here at Google) in the early days because they were worried about betting on a proprietary system that wasn't easily portable. This was compounded by the fact that people were worried about our commitment to cloud (we are all in for the record, in case people are still wondering :) ). On the positive side we also saw lots of other people who were worried about how locked in they were getting to Amazon, and many at very least wanted to have two providers so they could play one off against the other for pricing.

Our hypothesis was pretty simple: create a 'logical computing' platform that works everywhere, and maybe, if customers liked what we had built, they would try our version. And if they didn't, they could go somewhere else without significant effort. We figured at the end of the day we would be able to provide a high quality service without doing weird things in the community since our infrastructure is legitimately good, and we are good at operations. We also didn't have to agonize about extracting lots of money out of the orchestration system since we could just rely on monetization of the basic infrastructure. This has actually worked out pretty well. GKE (Google Container Engine) has grown far faster than GCE (actually faster than any product I have seen) and the message around zero lock-in plays well with customers.

cmcluck | 10 years ago | on: Google Open-Sourced Kubernetes to Boost Its Cloud

disclaimer: i am a founder of the Kubernetes project and did the article with Cade at Wired. i also was product lead for compute engine back in the day fwiw :).

I am not sure which projects you have looked at from Google in terms of open source, but in the case of Kubernetes we have worked pretty hard to engage a community outside of Google and work with that community to make sure that Kubernetes is solid. One of the things that I like about it is that many of the top contributors don't work at Google. People at Red Hat have worked very closely with us to make sure that (1) Kubernetes works well on traditional infrastructure, (2) it is a comprehensive system that meets enterprise needs, and (3) the usability is solid. People at Mirantis are working to integrate Kubernetes into the OpenStack ecosystem. The project started as a Google thing, but is bigger than a single company now.

Another thing worth noting: building a hosted commercial product (Google Container Engine) in the open by relying exclusively on the Kubernetes code base has helped us ensure that what we have built is explicitly not locked into Google's infrastructure, that the experience is good (since our community has built much of the experience), and that the product solves a genuinely broad set of problems.

Also consider that many of our early production users don't run on Google. Many do, but many also run on AWS or on private clouds.

-- craig

cmcluck | 10 years ago | on: Kubernetes: The Future of Deployment

(disclaimer: i work at Google and was one of the founders of the project)

when we were looking at building k8s our mission was to help the world move forward to a more cloud native approach to development. by cloud native i mean container packaged, dynamically scheduled, micro-services oriented. we figured that in the end our data centers are going to be well suited to run cloud native apps, since they were designed from the ground up for this approach to management, and will offer performance and efficiency advantages over the alternatives. we also recognized, however, that no matter how cheap, fast and reliable the hosting offering is, most folks don't want to be locked into a single provider, and Google in particular. we needed to do what we were doing in the open, and the thing that we built needed to be pattern compatible with our approach to management and, quite frankly, address some of the mistakes we had made in previous frameworks (mostly Borg, as a first system).

we looked really closely at Apache Mesos and liked a lot of what we saw, but there were a couple of things that stopped us just jumping on it. (1) it was written in C++ and the containers world was moving to Go -- we knew we planned to make a sustained and considerable investment in this and knew first hand that Go was more productive. (2) we wanted something incredibly simple to showcase the critical constructs (pods, labels, label selectors, replication controllers, etc) and to build it directly with the community's support, and mesos was pretty large and somewhat monolithic. (3) we needed what Joe Beda dubbed 'over-modularity' because we wanted a whole ecosystem to emerge. (4) we wanted the 'cluster environment' to be lightweight and something you could easily turn up or turn down, kinda like a VM; the systems integrators i knew who worked with mesos felt that it was powerful but heavy and hard to set up (though i will note our friends at Mesosphere are helping to change this).
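Of the critical constructs named above, labels and label selectors are small enough to sketch directly. A hypothetical illustration in Python, assuming equality-based selectors only (this is a toy, not the Kubernetes API): a selector matches an object when every selector key/value pair appears in the object's labels, which is how loosely coupled pieces (services, replication controllers) target sets of pods without hard-coded references.

```python
# Toy equality-based label selector: a selector matches an object when
# every key/value pair in the selector is present in the object's labels.

def matches(selector, labels):
    return all(labels.get(k) == v for k, v in selector.items())

pods = [
    {"name": "web-1", "labels": {"app": "web", "tier": "frontend"}},
    {"name": "web-2", "labels": {"app": "web", "tier": "frontend"}},
    {"name": "db-1",  "labels": {"app": "db",  "tier": "backend"}},
]

# Select every pod carrying the label app=web, regardless of other labels.
selector = {"app": "web"}
selected = [p["name"] for p in pods if matches(selector, p["labels"])]
print(selected)  # ['web-1', 'web-2']
```

The point of the construct is that membership is computed from metadata at query time, so the same pod can belong to many overlapping sets and sets change just by relabeling.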

so we figured we would do something simple to create a first class cluster environment for native app management, 'but this time done right' as Tim Hockin likes to say every day.

now we really like the guys at Mesosphere and we respect the fact that Mesos runs the vast majority of existing data processing frameworks. by adding k8s on mesos you get the next-generation cloud native scheduler and the ability to run existing workloads. by running k8s by itself you get a lightweight cluster environment for running next gen cloud native apps.

-- craig

cmcluck | 10 years ago | on: Kubernetes: The Future of Deployment

(disclosure: i work at Google and picked the name)

comments above are right -- we wanted to stick to the nautical theme that was emerging in containers, and 'kubernetes' (greek for 'helmsman') seemed about right. the fact that the word has strong roots in modern control theory was nice also.

fun fact: we actually wanted to call it 'seven' after seven-of-nine (a more attractive borg) but for obvious reasons that didn't work out. :)

cmcluck | 11 years ago | on: Kubernetes is going to support rkt

Disclosure: I work at Google.

I think your point is fair: all technology should be reviewed strictly on its merits. We certainly don't have a magic formula that gets it right 100% of the time. I can however say we are quite committed to this area and this project in particular.

Here are the things I think about most days:

In the case of Kubernetes, what we are doing is a little different to anything we have done before. We have a commercial offering (Google Container Engine) based on this project that we ultimately hope the world will consider a great way to run containers for money (yes, we hope to create a business around this). The nice thing is we are a service provider, so we just need to convince someone to use our compute services for us to benefit from an open source project we sponsor. We aren't really hiding our intentions here; we do containers well inside Google and have some whizz-bang container hosting infrastructure and services. The problem is that people are not likely to use it unless there is a 100% open, fully compatible alternative available for off-Google use. By way of background, I was the product lead for Google Compute Engine (our infrastructure-as-a-service product) for several years. The product is going really well now, but I learned the hard way that big customers like choices and no one likes to be locked in. This isn't just us being Googley (though I have a personal predilection to openness); this is a simple fact of the marketplace today. We could not convince someone to bet on a disruptive technology if an open and credible alternative did not exist.

The second thing is that we see tremendous power in the open source community that we want to tap into, both for our cloud product line, and for our own internal use (it is no secret we use a lot of OSS internally). Kubernetes has achieved an almost ridiculous velocity (check out our PR acceptance rates) because the community is supporting it. It is a better product because we benefit not only from having a bunch of Google engineers working on it, but because we also get engineers from Red Hat, CoreOS and others who bring different perspective and capabilities. We aren't enterprise guys for example, but the Red Hat folks are -- the combined team is way better than the sum of the parts. Our service (Google Container Engine) is stronger for it, and in the limit we expect to use Kubernetes to run a lot of our internal services too.

Cheers, -- craig

cmcluck | 11 years ago | on: Kubernetes is going to support rkt

Disclosure: I work at Google and am a co-founder of the Kubernetes project.

Correct. We already support Docker and plan to indefinitely. This is us extending support to rkt/appc also.

cmcluck | 11 years ago | on: Kubernetes is going to support rkt

Disclosure: I work at Google and am a co-founder of the Kubernetes project.

Actually no. Google Ventures runs quite autonomously from the product teams; I speak to them from time-to-time but they make their own decisions and don't influence ours.

We supported the rkt/appc PR because (1) we try hard to be an open community and to not play favorites, and (2) because we think the project has good promise as an open standard and as a lightweight modular runtime.

Please note that we are trying hard not to play favorites. Docker support will continue indefinitely and we will continue to make investments in the Docker community.

cmcluck | 11 years ago | on: Borg: The Predecessor to Kubernetes

Disclosure: I work at Google and was a co-founder of the Kubernetes project.

I think your observations are interesting. From my (somewhat biased) viewpoint I don't think we will enter into a 'post cloud' world. There are very real efficiency gains from running at public cloud provider scale, and the economics you see right now are not what I would consider 'steady state'. Beyond that the systems we are introducing with Kubernetes are focused on offering high levels of dynamism. They will ultimately fit your workload precisely to the amount of compute infrastructure you need, hopefully saving you quite a lot of money vs provisioning for peak. It will make a lot of sense to lease the amount of 'logical infrastructure' you need vs provisioning static physical infrastructure.
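The 'fit your workload precisely to the infrastructure you need' idea reduces to simple proportional scaling arithmetic, sketched below in Python. This is illustrative only (real autoscalers add damping, bounds, and measurement windows; the function and parameter names here are made up): size the fleet to current demand instead of provisioning statically for peak.

```python
import math

def desired_replicas(current_replicas, avg_utilization, target_utilization):
    """Proportionally size the fleet so each replica sits near its target load."""
    total_load = current_replicas * avg_utilization
    return max(1, math.ceil(total_load / target_utilization))

# 4 replicas each running at 90% against a 60% target: scale out to 6,
# so the same total load spreads to ~60% per replica.
print(desired_replicas(4, 0.9, 0.6))  # 6
# Demand collapses: shrink back, but never below one replica.
print(desired_replicas(4, 0.1, 0.6))  # 1
```

Leasing 'logical infrastructure' sized by a loop like this, rather than static physical capacity, is where the savings versus provisioning-for-peak come from.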

There are however legitimate advantages to our customers in being able to pick their providers and change providers as their needs change. We see the move to high levels of portability as a great way to keep ourselves and other providers honest.

-- craig
