It depends on the context. I don't know about corporate persons with profit incentives, but if we're talking about human persons, then containers don't solve anything. They're just the symptom of the disease that is future shock. The underlying libraries we depend on change too fast now, and few devs care about forwards compatibility, so every OS/distro ends up with libs that stop working in about a year (or more like 3 months with Rust/JS/etc.).
The solution has to either come in the form of static compilation, or, even less feasible, getting devs to actually care if their software runs on platforms more than a year old. Containers just make everything worse in all cases beyond the contrived "it just worked and I never need to change anything".
Containers halfway solved some big existing problems that most people don't seem to see very well.
Packaging is hard, and both debian-based and rpm-based systems (and really most others I've seen) are pretty awful. (Except the BSDs, which I've had a lovely time with.)
They're slow, they're stateful, writing them involves eldritch magic and a lot of boilerplate, and they're just frequently broken. Unless you're installing an entire OS from scratch, you're probably going to have a hard time getting your system into the same state as somebody else's. And while running that from-scratch OS install is definitely possible in an as-code way, it can take an hour.
Containers came along and provided a host of things traditional packaging systems didn't, and they took the ecosystem by storm. With them came a whole lot of probably unnecessary complexity from people wanting to add things; adding things without ending up with a huge mass of complexity is hard and takes a lot of context knowledge.
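As a sketch of what that as-code recipe looks like on the container side (the base image tag, package names, and binary path here are all illustrative, not from the thread):

```dockerfile
# A pinned base image gives a reproducible starting point instead of
# an hour-long from-scratch OS install.
FROM debian:bullseye-slim

# Install runtime dependencies in one layer; apt's package=version
# syntax can pin versions for tighter reproducibility.
RUN apt-get update \
    && apt-get install -y --no-install-recommends ca-certificates \
    && rm -rf /var/lib/apt/lists/*

COPY myapp /usr/local/bin/myapp
CMD ["myapp"]
```

Minutes to build from a text file, versus restoring a whole machine's state by hand.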
So we ended up solving a host of problems with containers and creating a whole new set along the way.
> They're just the symptom of the disease that is future shock.
Yes, absolutely, and I hope you mean that in the capital-F "Future Shock", Alvin Toffler sense, because there is a lot he wrote that hasn't even been carried over and digested. Software is an endlessly disorienting sea of change, getting faster and thus worse as time progresses, and it's frankly madness at this point.
It seems absolutely no one is committed to providing a stable platform for any purpose whatsoever. Even Java, where the absolute necessity of backwards compatibility with old (perhaps even dumb) classfile versions was ingrained in me over many years, has been making breaking changes as part of its ramp-up to semi-annual major version releases. Node Long Term Support "typically guarantees that critical bugs will be fixed for a total of 30 months."[1] Pfft. It's a joke. You can't get your damn API design straight by version 12? I'll do my damnedest to avoid you forever, then. It's so unserious and frankly irresponsible to break so much stuff so often.
But change only begets more change. We're all on an endless treadmill, constantly adapting to the change for no reason. And people have to adapt to our changes, and so it goes.
Containers side-stepped the deficiencies of Linux distributions, which had become so based on 'singleton' concepts: one init system, one version of each library, etc.
A shame, because there's an inherent hierarchy (everything from the filesystem to UIDs to $LD_LIBRARY_PATH) that could really allow things to co-exist without the kludge of overlay filesystems. It was just never practical to, e.g., install an RPM into a subdirectory and use it.
Containers aren't a very _good_ solution, they're just the best we've got; and they're still propped up by an ever-growing Linux kernel API as the only stable API in the stack...
> getting devs to actually care if their software runs on platforms more than a year old.
This is why we don't play games with siloing responsibilities on the tech stack. Every single developer on the team is responsible for making the entire product work on whatever machine it is intended to work on. No one gets to play "not my job", so they are encouraged to select robust solutions lest they be paged to resolve their own mess in the future.
Maybe those solutions are containers in some cases, but not for our shop right now. Our product ships as a single .NET binary that can run on any x86 machine supported by the runtime.
Containers solve the problem of clashing library versions needed by different applications running on a single host (and I know there are other ways to solve this).
This is really not a new problem :) I remember dealing with shared library versioning issues from not long after I started in IT in the '90s, and it's been a problem ever since.
Containers aren’t the final destination, but they’ve enabled polyglot orchestration, i.e., an app developer can target Kubernetes without needing to manage the minutiae of operating a bunch of Linux hosts. It seems like almost every company that isn’t using containers for SaaS software development ends up badly reinventing Kubernetes and sinking a ton of time and money into maintaining it, and as a “human person”, I’m glad that I can focus my efforts on higher-level problems. When a technology inevitably matures to replace containers, I’ll look into it, but for now containers are the best way to build and manage heterogeneous distributed systems.
I don't think dependency management is the only benefit of containers. I personally like the isolation they provide and generally prefer running services in containers, even if they use the same dependencies as my OS. I run Linux too, so I don't have to worry about any virtualization framework overhead.
Java running in a container is somewhat amusing because of this. You have several solutions to the problem of agnostic packaging (java/jar/ear/war/etc.) running inside another whole solution for agnostic packaging.
Containers just moved the compatibility barrier up the abstraction stack. That’s not terrible (fewer and fewer people understand how their computer actually works), but all the same problems still remain; now they just apply to remote APIs instead.
This looks more like an advertisement than a useful blog post.
Also:
> Consider also that Docker relies on Linux kernel-specific features to implement containers, so users of macOS, Windows, FreeBSD, and other operating systems still need a virtualization layer.
First, FreeBSD has its own native form of containers and Windows has its own native implementation. Docker != containers.
I really don't see how Docker (or containers as we mostly know them) relying on kernel features from an open source operating system in order to run Linux OS images is something to even complain about, and there is nothing preventing Mac from implementing its own form of containers.
I am familiar with FreeBSD jails (and IMO, they are actually superior to Linux containers in most respects). My point is not so much that other systems don't have the tech to make containers work - or that OS vendors are not capable of adding containers to their kernels - but that having container technology is not the same as having a smooth devex for containerized applications.
I think the next step(s) will be something closer to what the combination of Cloudflare Workers + KV + Durable Objects gives you... I think there also needs to be some implementation of PubSub added to the mix as well as a more robust database store. Fastly has similar growing options, and there are more being advanced/developed.
In the end, there are only a few missing pieces to offer a more robust solution. I do think that making it all WebAssembly will be the way to go, assuming the WASI model(s) get more fleshed out (Sockets, Fetch, etc.). The multiplayer web Doom on Cloudflare[1] is absolutely impressive, to say the least.
I kind of wonder if Cloudflare could take what FaunaDB, CockroachDB or similar offers and push this more broadly... At least a step beyond k/v which could be database queries/indexes against multiple fields.
Been thinking on how I could use the existing Cloudflare system for something like a forum or for live chat targeting/queries... I think that the Durable Objects might be able to handle this, but could get very ugly.
I spent 6+ years fighting this exact battle. It's hard. It's resource intensive. And timing is everything. It requires either one company to front all the development cost and bring it to the world after validating it or it needs an ecosystem to emerge through a shared pain and understanding. We're not there yet.
Author here. I have been developing Docker applications for years now, and while the experience is better than it used to be, it's still not great. I work for Deref, which is working on developer tooling that is more amenable to modern development workflows. We'd love to hear what pains you have with the current state of development environments.
I wish someone would rewrite docker-compose as a single Go or Rust binary so that I don't have to deal with the python3 crypto package being out of date or something when simply configuring docker/docker-compose for another user (usually me on a different machine or new account).
the one thing containers addressed was their use as a countermeasure to rising costs from greedy VPS providers, and as an agile framework to quickly evacuate from a toxic provider (cost, politics, performance, etc...)
providers in turn responded by shilling their 'in house' containerization products and things like Lambda for lock-in.
Virtual Machines gained popularity as a kludge to get around the remarkably horrible state of operating systems. The inability to reliably save and restore the state of a computer grew to be so costly that it became worthwhile to pay the performance penalty of a layer of emulation/virtualization to route around it.
Containers were the next logical step, as each virtual machine vendor tried to lock in their users. Containers allowed routing around it.
Both of these steps could be eliminated if a well behaved operating system similar to those in mainframes could be deployed, so that each application sat in its own runtime, had its own resources, and no other default access.
There's a market opportunity here, it just needs to be found.
Since the author mentioned it, is the 12 factor app still a best practice? Was it ever a best practice? I've seen the website a few times and all of it makes sense to me, but I haven't seen much discussion about it.
Containers don't solve anything more than virtual machines. Containers are 'better' than virtual machines because they have less overhead and are 100% open source.
Containers and VMs let you divide and solve problems in isolation in a convenient manner. You still have the same problems inside each container.
Firstly, Docker & k8s made using containers easy. Minimal distros like Alpine simplify containers to a set of one or more executables. You could implement the same thing with a system of systemd services & namespaces.
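That systemd route might look something like the following unit file: a sketch only, with a made-up service name and path, showing directives that use the same kernel namespace/cgroup primitives containers are built on:

```ini
# /etc/systemd/system/myapp.service -- name and ExecStart path are hypothetical
[Unit]
Description=Example service sandboxed with container-style kernel primitives

[Service]
ExecStart=/opt/myapp/bin/myapp
# Ephemeral UID/GID allocated at start, similar to a container's isolated user
DynamicUser=yes
# Private /tmp via a mount namespace
PrivateTmp=yes
# Read-only view of /usr, /boot, /etc
ProtectSystem=strict
ProtectHome=yes

[Install]
WantedBy=multi-user.target
```

You get most of the isolation without an image format or a registry, but also without the portable artifact that made containers take off.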
But now that everything is a container, you need a way to manage what & where containers are running and how they communicate with each other.
It looks like 90% of the stuff different container tools and gadgets try to solve is the issues they created. You can no longer install a LAMP stack via 'apt install mysql apache php7.4'; instead you need a tool that sets up 3 containers with the necessary network & filesystem connections. It's certainly better because it is all declaratively defined, but it is still the same problem.
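The three-container replacement for that one apt command ends up as roughly this kind of Compose file (a sketch; the image tags, paths, and password are illustrative):

```yaml
# docker-compose.yml -- roughly the declarative form of 'apt install mysql apache php7.4'
services:
  db:
    image: mysql:8.0
    environment:
      MYSQL_ROOT_PASSWORD: example    # illustrative only
    volumes:
      - db-data:/var/lib/mysql        # state still has to be mounted back in
  php:
    image: php:7.4-fpm
    volumes:
      - ./src:/var/www/html
  web:
    image: httpd:2.4
    ports:
      - "8080:80"
    depends_on: [php, db]             # declarative wiring between the pieces
volumes:
  db-data:
```

Same stack, same problems, but at least the wiring is written down instead of living in someone's shell history.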
This is why I mostly stayed out of containers until recently. The complexity of containers really only helps if you need to replicate a certain server/application. You will still need to template all of your configuration files even if you use Docker, etc.
What is changing everything IMO is NixOS, because it solves the same issues without jumping all the way to Docker or k8s. Dependencies are isolated like containers, but the system itself, whether it is a host/standalone or a container, can be defined in the same manner. This means that going from n=1 to n>1 is super easy, and migrating from a multi-application server (i.e. a 'pet' server) to a containerized environment (i.e. a 'cattle' server/container) is straightforward. It's still more complex and a bit rough compared to Docker & k8s, but using the same configuration system everywhere makes it worthwhile.
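The "same module for host or container" idea can be sketched like this (hostname and paths are made up; the nginx options are standard NixOS module options):

```nix
# configuration.nix fragment -- the same module can configure a bare host
# or a NixOS container, which is the point being made above.
{ config, pkgs, ... }:
{
  services.nginx = {
    enable = true;
    virtualHosts."example.local".root = "/var/www";
  };
  # For n>1, import this module into each machine's configuration; for a
  # container target, roughly the only extra is `boot.isContainer = true;`.
}
```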
The one problem containers solved for me better than anything I ever used in previous UNIX/Linux is hierarchical resource tracking. I work with many codes that fork from their main binary and do their work in subprocesses. If your resource manager isn't scraping /proc to invert the process tree, it needs a way to assign resources to process trees such that the entire tree's sum cannot exceed the resource limitation.
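Inverting the flat parent-pid table into a tree and summing over a subtree, which is what scraping /proc amounts to, is only a few lines. A sketch with a synthetic table rather than a live /proc (the function name and sample pids are made up):

```python
from collections import defaultdict

def subtree_rss(ppid, rss, root):
    """Sum per-process RSS over an entire process subtree.

    ppid maps pid -> parent pid (as would be read from /proc/<pid>/stat);
    rss maps pid -> resident set size in bytes.
    """
    # Invert the child -> parent table into parent -> children
    children = defaultdict(list)
    for pid, parent in ppid.items():
        children[parent].append(pid)

    # Iterative DFS over the subtree rooted at `root`
    total, stack = 0, [root]
    while stack:
        pid = stack.pop()
        total += rss.get(pid, 0)
        stack.extend(children[pid])
    return total

# Synthetic example: 100 forked 200 and 300; 300 forked 400.
ppid = {200: 100, 300: 100, 400: 300}
rss = {100: 50, 200: 10, 300: 20, 400: 5}
print(subtree_rss(ppid, rss, 100))  # → 85
```

Cgroups give you this accounting for free, since every fork lands in the parent's cgroup; that is the hierarchical tracking the comment is pointing at.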
[1] https://nodejs.org/en/about/releases/
raesene9 | 4 years ago
Solving that problem seems like a win to me.
kaba0 | 4 years ago
The problem doesn’t start with virtualization; that is indeed a side-track.
1. https://blog.cloudflare.com/doom-multiplayer-workers/
cfors | 4 years ago
But that's why anytime you integrate with one of these tools you should be aware that there is a cost for maintaining that integration.
asim | 4 years ago
My efforts => https://micro.mu
Oh and prior efforts https://github.com/asim/go-micro
swagasaurus-rex | 4 years ago
Every application runs in its own container, unless it is granted granular permissions to do otherwise.
The code and assets for a program belong in their own quarantined section, not spread out over the filesystem or littered around /etc/ and /var/.
Built-in networking for these containers.