top | item 8323989

Docker closes $40M Series C led by Sequoia

135 points | yla92 | 11 years ago | blog.docker.com | reply

99 comments

[+] sillysaurus3|11 years ago|reply
This means Sequoia expects Docker to either go public or to be acquired for at least (10 * $40M / sequoia_ownership) in order to be considered a "win," right? (A "win" in the sense of being worth the VC's investment, not in the sense of being valuable to the world.)

The reason I say this is because a VC who merely breaks even on investments will eventually go out of business, so it would be a mistake to invest unless the expectation is that Docker might be a win for them.

Assuming Sequoia owns, say, 35%, that comes to an expected acquisition price of about $1.14B for Sequoia to earn 10x their money back.
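That back-of-the-envelope math works out as follows (a sketch of the commenter's arithmetic; the 35% stake is their hypothetical, not Sequoia's real position):

```python
investment = 40e6        # Series C round size
ownership = 0.35         # assumed (hypothetical) Sequoia stake
target_multiple = 10     # a "win" = 10x the money back

# Exit valuation at which Sequoia's stake is worth 10x its investment.
exit_value = target_multiple * investment / ownership
print(f"${exit_value / 1e9:.2f}B")  # -> $1.14B
```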

What are some hypothetical scenarios which end with Docker going public? What are some scenarios where a company would acquire Docker for north of $1B?

I'm not trying to imply anything about Docker with these questions. Personally, I love Docker. It's just fun to theorycraft.

[+] ig1|11 years ago|reply
Generally the multiple expected is lower for later stage rounds as there's less risk. Also Sequoia wouldn't have got anywhere near 35%.

That said, the investors would certainly be looking for a >$1bn exit.

[+] jonknee|11 years ago|reply
VMware is valued at over $40B currently. They could scoop up Docker for a couple percent of their market cap. I'm not saying that will happen, but it really wouldn't surprise me. There have been a ton of lofty acquisitions in this space.
[+] wslh|11 years ago|reply
It is obvious that Docker will be acquired very soon. I think the Sequoia investment is a very safe bet, almost a gift. Companies such as VMware are nervous because fewer VMs get deployed when Docker is used, and the performance overhead is lower.

Docker doesn't replace VMs but there is an intersection in their use cases.

[+] wastedhours|11 years ago|reply
Also, VCs adding pieces of the stack to their portfolio is interesting. Presumably some of the Sequoia companies will be thinking about using Docker, and Docker already has paid plans, so some of that cash "stays in the family". They also own, and have a vested interest in, the success of deployments further up the line, driving more revenue back down.

Not that I'd imagine Sequoia has to do this or has any issues with deal-flow, but any company doing cool things running on containers, who's going to be the VC they call first?

[+] tracker1|11 years ago|reply
I'm curious as to how Docker.io and CoreOS.com live in harmony... I mean, CoreOS is built around Docker as a core piece, but I can see it overshadowing Docker proper in terms of money to be made in the long run.

It will definitely be interesting.

[+] weavie|11 years ago|reply
How are Docker monetizing their product? Is it just hosting and support? By open sourcing Docker they have opened the door to hundreds of competitors offering the same thing, often at a much lower cost. Is their only competitive advantage the fact that they own the project, and thus understand it better and can dictate its course?

I'm sure they would make for a very interesting case study on how to do open source right.

[+] davidw|11 years ago|reply
How do people actually use Docker?

In my world of bootstrapped, smaller apps looking for market traction, even if things go well, a few Linodes should be enough to handle most of the traffic I'll ever need to deal with, so this kind of thing is kind of foreign to me. I'm curious how people utilize it in practice.

[+] csirac2|11 years ago|reply
I (or rather, Jenkins) build all my software in it; I don't actually use it for containerizing final applications.

In a nutshell, for me the value is in trivial repeatability. I can reproduce the entire build toolchain and test environment, and produce artifacts, all from a few-KiB git repo centred on the Dockerfile and submodules for dependencies.

Some of my ARM stuff takes hours to cross-compile and normally involves enormous amounts of fiddly babysitting. Dockerfiles have RUN statements (think lines of a shell script) whose results are cached. Adjustments toward the end of a Dockerfile take only seconds to test, and produce the exact same result as if every statement had really run from the start. That doesn't sound like much, but it turns out (for me) to be pretty liberating compared to constantly fighting other automation, where you have to dance around short-circuiting stuff to reuse bits of a past build to save time, and get only a handful of "pristine" iterations in a day (which might differ from the iterations you rolled by hand).
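The caching payoff comes from ordering: put the slow, stable steps first so they stay cached, and the frequently edited steps last so only those re-run. A hypothetical cross-compile Dockerfile in that spirit (image names, packages, and the repo URL here are illustrative, not from the comment):

```dockerfile
# Slow, stable steps first so their layers stay cached across rebuilds.
FROM debian:wheezy

# Toolchain install: runs once, then served from cache on every rebuild.
RUN apt-get update && apt-get install -y build-essential gcc-arm-linux-gnueabi git

# Fetch and build dependencies: also cached until this line changes.
RUN git clone https://example.com/deps.git /deps && make -C /deps

# Frequently edited steps go last: only these re-run when you tweak them.
ADD . /src
RUN make -C /src CROSS_COMPILE=arm-linux-gnueabi-
```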

[+] king_magic|11 years ago|reply
Self-hosted PaaS platforms like Dokku, Flynn, Deis, etc give you Heroku-like deployment & management without the cost of something like Heroku. All three of these are built on Docker.

I've been running Dokku on DigitalOcean lately and it has been great (though I'm planning on eventually moving to Deis, now that DigitalOcean supports CoreOS).

https://github.com/progrium/dokku https://flynn.io http://deis.io

[+] ishbits|11 years ago|reply
Right now I use it for demos... We use a stack that is a little complex for someone wanting a quick taste. So provided they have a modern Linux with Docker installed, they can quickly do a self demo.

And if they don't, well having a fresh image that is easily rebuildable makes it easier for me to give demos than to spin fresh VMs. Arguably I could have done the same with Vagrant, but really didn't need the overhead of the VM.

But for using it in deployment, no.

[+] xienze|11 years ago|reply
Bringing up a properly configured VM that stays up to date is actually a lot of work, and the more software you pile directly on top of that VM, the harder it is.

For me, my VMs are actually dead simple and relatively homogeneous. Fresh install + lock down SSH + lock down iptables + install Git + install Docker.

Then I build from the correct Dockerfile and open whichever port(s) on the VM. Bang, instant node that does [whatever]. Time to upgrade the DB/mail server/app? Build from the new Dockerfile, stop the existing container, bring up the new one. No worrying about installing/uninstalling the correct dependencies on the VM and getting into an inconsistent state.

That's what Docker does for you.
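The upgrade flow described above can be sketched with the basic docker CLI (names, ports, and paths here are hypothetical; this is a sketch of the workflow, not the commenter's actual setup):

```shell
# Build a new image from the updated Dockerfile.
docker build -t myapp:v2 /srv/myapp

# Stop and remove the existing container.
docker stop myapp
docker rm myapp

# Bring up a container from the new image, publishing the same port on the VM.
docker run -d --name myapp -p 8080:8080 myapp:v2
```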

[+] char_pointer|11 years ago|reply
It changes your deployment "atom" so instead of deploying compressed artefacts, you deploy containers. This has some advantages in itself (eg. it makes it easier to do gradual upgrades of your full stack, and you can run containers side by side on the same host), but is especially nice in combination with Mesos and Marathon which enables you to scale out horizontally across your cluster.
[+] hunvreus|11 years ago|reply
Out of curiosity, how do you manage your servers? Chef, Puppet, SaltStack? Simple shell scripts? Why not use something like Heroku if you want the flexibility of scaling only when needed?
[+] valarauca1|11 years ago|reply
I went to read this blog and found out blog.docker.com doesn't support TLS 1.2, and only has one available cipher suite:

      TLS_RSA_WITH_RC4_128_SHA
Which is cool because RC4 is broken.

docker.com actually does support TLS1.2, but their blog subdomain doesn't :\

[+] ewindisch|11 years ago|reply
Thank you. The blog is on different infrastructure than our website and the DockerHub. We'll look at this pronto! If you discover any other security issues or concerns, please send them to [email protected].
[+] brianbreslin|11 years ago|reply
Can someone explain Docker in layman's terms and juxtapose it against something I already understand (aws perhaps)?
[+] taylorbuley|11 years ago|reply
In the aughts, virtualization developed as a software layer that abstracts physical hardware and provides so-called "virtual machines" which, instead of working exclusively with dedicated hardware, function as a group and share resources as a pool.

Docker provides one more layer of abstraction and grouping where a machine (or, commonly, a virtual machine) abstracts its resources in order to provide them to thread-like "containers." These containers share their resources with other containers running on a given host.

A Docker container, written like a spec into a `Dockerfile`, is a way to package your application as if there were a `run.sh` that would install your OS, any dependencies, and your application itself -- and, importantly, run that application after everything is installed. The host can choose to surface to the world any ports on the running container, or keep them private to itself. The container draws from the host's pool of resources so long as your application continues to run inside the "thread" managed by Docker.
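A minimal hypothetical Dockerfile in that spirit (the base image, dependency, port, and script name are all illustrative):

```dockerfile
# "Install your OS": start from a base image of a distribution.
FROM ubuntu:14.04

# Install dependencies and copy in the application.
RUN apt-get update && apt-get install -y python
ADD . /app

# Declare a port the host may choose to surface to the world.
EXPOSE 8000

# Importantly: run the application once everything is installed.
CMD ["python", "/app/server.py"]
```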

[+] darrelld|11 years ago|reply
Docker is kind of like a virtual machine, except without all the overhead of the operating system, so you have a much smaller package. It uses Linux containers (http://en.wikipedia.org/wiki/LXC) to do this.

The overall idea is that I can make an app and use Docker to control things like the versions of third-party software the app relies on. So for a web app I can pin the versions of MySQL, PHP, Ruby, etc. that I want, and then distribute a "dockerized" version of the app. Now when I distribute it for testing or to other servers I don't need to worry about versions. It just works.

At least that's what I gather from reading their site last weekend. I plan to start using it for one of our projects soon.

[+] SlipperySlope|11 years ago|reply
Docker uses built-in features of the Linux kernel to provide namespaces, called containers, in which to run isolated processes. Docker runs on Linux. Each container is built from some Linux distribution's userland, and all system calls are delegated to the underlying host kernel. Thus the container is mainly application code, as compared to a VM instance, where each instance is a full copy of the OS. Notably, a Docker container does not offer a GUI unless you run VNC, so one would not ordinarily develop programs within a container.

Docker makes it easy to use features of the Linux kernel that have been around for a while. Expect Microsoft to discover this technique in a couple of years.

[+] a3049073|11 years ago|reply
I tried Docker for a little while, just to see what it is. It seems like the authors never used UNIX before. Nonstandard argument format, some strange formatting in the manual page. And the idea of downloading random software from strangers from the internet and running it on your machine creeps me out as well.
[+] nickstinemates|11 years ago|reply
Sorry you had a bad experience. When was the last time you tried Docker? As an example..

> Nonstandard argument format

This has changed

> some strange formatting in the manual page

Also has changed

> And the idea of downloading random software from strangers from the internet and running it on your machine creeps me out as well.

So you don't use any sort of package management with the distro of your choice?

BTW - you don't have to use docker the way you describe. You can `docker import` any rootfs to create a base image and only push/pull images you have created.
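That workflow might look something like this (the paths, debootstrap suite, and image names are hypothetical):

```shell
# Build a rootfs yourself (e.g. with debootstrap) instead of pulling one.
debootstrap wheezy /tmp/rootfs

# Import the rootfs tarball as your own trusted base image.
tar -C /tmp/rootfs -c . | docker import - mybase:wheezy

# From here on, only build and push images derived from your own base.
docker build -t myorg/myapp .
```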

[+] frewsxcv|11 years ago|reply
"And the idea of downloading random software from strangers from the internet and running it on your machine creeps me out as well."

Do you compile everything you install from source on your own hardware? If so, you are by far in the minority here. If not, you're being inconsistent.

[+] wereHamster|11 years ago|reply
Of course nobody is running random images from the internet in production. You build your own. Building custom images is not rocket science.

But to get started with Docker it's incredibly easy to download an image and have something running within minutes.

[+] steeve|11 years ago|reply
Congrats to all the team!
[+] nickstinemates|11 years ago|reply
Thank you for being a big part in the community, specifically around boot2docker. You rock!
[+] sz4kerto|11 years ago|reply
It's very interesting to see that Docker is getting so much recognition, money, and success -- while the real core of this thing, LXC, is rarely mentioned, and its authors are not part of this huge success.
[+] indielol|11 years ago|reply
Docker is one of those FOSS projects that I always want to actively contribute to, but I don't, since they seem to be doing great without any help.
[+] nickstinemates|11 years ago|reply
Someone made a contribution earlier today as small as adding a carriage return in an RST file. That is extremely appreciated by everyone.

I'd encourage you to jump in. The IRC channel is fairly active and there's a ton of places to get started at all experience levels. Let me know if you need help.

[+] notacoward|11 years ago|reply
I wonder how much of this was just a way to rearrange who owns how much, ahead of the inevitable acquisition.
[+] droob|11 years ago|reply
"the money helps show the market that the company has stability"

Free money from some dudes unrelated to the company's business really shouldn't indicate "stability", should it?

[+] markokrajnc|11 years ago|reply
This will help them a lot with big customers deciding whether they should use Docker - because they now have longer-term stability and support.
[+] dschiptsov|11 years ago|reply
'Hot' here should be interpreted as a new, fresh, popular meme and buzzword - 'a cool stuff for a cloud - orchestration, you know'.

Well, in this way it is hot indeed.

[+] jacques_chester|11 years ago|reply
Docker doesn't do orchestration and doesn't provide a PaaS.

I imagine they'll try to grow in that direction because their customers will hanker for it, but (and I'm biased here because I work for a PaaS developer) they'll find that building automagical distributed platforms is hard. Very hard.

Edit: from the blog post -- it looks like moving up into PaaS is their intention.

[+] jister|11 years ago|reply
yep they are hot as in overhyped
[+] mrwizrd|11 years ago|reply
Here's a copy of the blog post for anyone having trouble reading it.

Today is a great day for the Docker team and the whole Docker ecosystem.

We are pleased to announce that Docker has closed a $40M Series C funding round led by Sequoia Capital. In addition to giving us significant financial resources, Docker now has the insights and support of a board that includes Benchmark, Greylock, Sequoia, Trinity, and Jerry Yang.

This puts us in a great position to invest aggressively in the future of distributed applications. We’ll be able to significantly expand and build the Docker platform and our ecosystem of developers, contributors, and partners, while developing a broader set of solutions for enterprise users. We are also very fortunate that we’ll be gaining the counsel of Bill Coughran, who was the SVP of Engineering at Google for eight years prior to joining Sequoia, and who helped spearhead the extensive adoption of container-based technologies in Google’s infrastructure.

While the size, composition, and valuation of the round are great, they are really a lagging indicator of the amazing work done by the Docker team and community. They demonstrate the amazing impact our open source project is having. Our user community has grown exponentially into the millions and we have a constantly expanding network of contributors, partners, and adopters. Search on GitHub, and you’ll now find over 13,000 projects with “Docker” in the title.

Docker’s 600 open source contributors can be proud that the Docker platform’s imprint has been so profound, so quickly. Before Docker, containers were viewed as an infrastructure-centric technology that was difficult to implement and remained largely in the purview of web-scale companies. Today, the Docker community has built that low-level technology into the basis of a whole new way to build, ship, and run applications.

Looking forward over the next 18 months, we’ll see another Docker-led transformation, this one aimed at the heart of application architecture. This transformation will be a shift from slow-to-evolve, monolithic applications to dynamic, distributed ones.

SHIFT IN APPLICATIONS

As we see it, apps will increasingly be composed of multiple Dockerized components, capable of being deployed as a logical, Docker unit across any combination of servers, clusters, or data-centers.

DISTRIBUTED, DOCKERIZED APPS

We’ve already seen large-scale web companies (such as GILT, eBay, Spotify, Yandex, and Baidu) weaving this new flexibility into the fabric of their application teams. At Gilt, for example, Docker functions as a tool of organizational empowerment, allowing small teams to own discrete services which they use to create innovations they can build into production over 100 times a day. Similar initiatives are also underway in more traditional enterprise environments, including many of the largest financial institutions and government agencies.

This movement towards distributed applications is evident when we look at the activity within Docker Hub Registry, where developers can actively share and collaborate on Dockerized components. In the three months since its launch, the registry has grown beyond 35,000 Dockerized applications, forming the basis for rapid and flexible composition of distributed applications leveraging a large library of stable, pre-built base images.

Future of Distributed Apps: 5 Easy Steps

The past 18 months have been largely about creating an interoperable, consistent format around containers, and building an ecosystem of users, tools, platforms, and applications to support that format. Over the next year, you’ll see that effort continue, as we put the proceeds of this round to use in driving advances in multiple areas to fully support multi-Docker container applications. (Look for significant advances in orchestration, clustering, scheduling, storage, and networking.) You’ll also see continued advances in the overall Docker platform–both Docker Hub and Docker Engine.

The work and feedback we’ve gotten from our customers as they evolve through these Docker-led transformations has profoundly influenced how Docker itself has evolved. We are deeply grateful for those contributions.

The journey we’ve undertaken with our community over the past 18 months has been humbling and thrilling. We are excited and energized for what’s coming next.

[+] borplk|11 years ago|reply
It's nice to see the folks building the building blocks getting some - financial - attention.

Now to yield a nice return on that, they just have to turn Docker into an ephemeral social photo sharing app for blind vegan Bulldogs and say they want to change the world ;)

[+] nickstinemates|11 years ago|reply
That's hilarious. :)

I think containers as a concept have the chance to really fundamentally change the way applications are developed, delivered, and managed in data centers going forward - whether it's my own little rack sitting in a corner office, or a large scale, multi dc deployment.

We're betting on that being Docker, but the worst thing that could happen is for us to become complacent and not recognize there's a tremendous amount of work left to do.