Nah docker is excellent and far far preferable to the standard way of doing things for most people.
All the traditional distribution methods have barely even figured out how to uninstall a piece of software. `apt remove` will often leave other things lying around.
He complains about configuration being more complex because it's not a file? Except it is, and it's so much simpler to just have a compose file that tells you EXACTLY which files are used for configuration and where they are. This is still not easy with normal linux packages - you have to google or dig through a list of 10-15 places that may be used for config.
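As a sketch of what I mean (service name and paths here are hypothetical), the compose file makes the config locations explicit instead of scattered across the filesystem:

```yaml
# docker-compose.yml - hypothetical service; the volume mounts pin down
# exactly which config file is in use and where the state lives
services:
  app:
    image: example/app:1.0
    volumes:
      - ./config/app.conf:/etc/app/app.conf:ro   # the ONE config file in use
      - app-data:/var/lib/app                    # where state actually lives
volumes:
  app-data:
```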
The other brilliant opinion is that docker makes the barrier to entry for packaging too low.. but the alternative is not those things being packaged well in apt or whatever, its them not being packaged at all.. this is not a win.
What's the alternative to running a big project like say Sourcegraph through a docker compose instance? You have to set up ~10 services yourself including Redis, Postgres, some logging thing blah blah. I do not believe this is ever easier than `docker compose up -d`.
And if you want to run it without docker, the docker images are basically a self-documenting system for how to do that. This is strictly a win over previous systems, where there generally just is no documentation for that.
Personally, docker lowers the activation energy to deploy something so much that I can now try/run complex pieces of software easily. I run sourcegraph, nginx, postgres, redis on my machine easily. A week ago I wanted to learn more about data engineering - got a whole Apache Airflow cluster setup with a single compose file, took a few minutes. That would have been at least an hour of following some half-outdated deploy guide before, so I just wouldn't have done it.
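For reference, the Airflow setup I mean is roughly the official compose quickstart (the version number in the URL is illustrative; check the Airflow docs for the current one):

```shell
# Fetch the official compose file and bring up the whole cluster
curl -LfO 'https://airflow.apache.org/docs/apache-airflow/2.9.2/docker-compose.yaml'
docker compose up airflow-init   # one-off DB migration + admin user creation
docker compose up -d             # scheduler, webserver, workers, redis, postgres
```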
---
The beginning of the post is most revealing though:
> The fact that I have multiple times had to unpack flatpaks and modify them to fix dependencies reveals ... Still, these systems work reasonably well, well enough that they continue to proliferate...
Basically you are okay with opening up flatpaks to modify them but not opening up docker images.. it just comes down to familiarity.
The thing I'm always interested in finding out is how people using lots of containers deal with security updates.
Do they go into every image and run the updates for the base image it's based on? Does that just mean you're now subject to multiple OSes' security patching and best practices?
Do they think it doesn't matter and just get new images when they're published, which from what I can see is just when the packaged software has an update and not the system and libraries it relies on?
When glibc or zlib have a security update, what do you do? For RHEL and derivatives it looks like quay.io tries to help with that, but what's the best practice for the rest of the community?
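The most common (if unsatisfying) answer I've seen is to periodically rebuild on a freshly pulled base and rescan; a sketch, with the image name made up and trivy as just one example of a scanner:

```shell
# Rebuild ignoring caches so base-image package updates are picked up
docker build --pull --no-cache -t myapp:latest .
# Scan the result for known CVEs (trivy is one of several such tools)
trivy image myapp:latest
```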
We haven't really adopted containers at all at work yet, and to be honest this is at least part of the reason. It feels like we'd be doing all the patching we currently are, and then a bunch more. Sure, there are some gains, but it also feels like there's a lot more work involved there too?
> What's the alternative to running a big project like say Sourcegraph through a docker compose instance? You have to set up ~10 services yourself including Redis, Postgres, some logging thing blah blah. I do not believe this is ever easier than `docker compose up -d`.
Even with Docker, it's not as easy as `docker compose up -d`.
You need to set up backups too. How do you back up all these containers? Do they even support it? Meanwhile I already have Postgres and Redis tuned, running, and backed up. I'd even have a warm Postgres replica if I could figure out how to make that work.
Then you need to monitor them for security updates. How are you notified your Sourcegraph container and its dependencies' containers need to be updated and restarted? If they used system packages, I'd already have this solved with Munin's APT plugin and/or Debian's unattended-upgrades. So you need to install Watchtower or equivalent. Which can't tell the difference between a security update and other updates, so you might have your software updated with breaking changes at any point.
Alternatively, you can locate and subscribe to the RSS feed for the repository of every Dockerfile involved (or use one of these third-party services providing RSS feeds for Dockerhub) and hope they contain a changelog. If I installed from Debian I wouldn't need that because I'm already subscribed to [email protected]
Wait wait wait. Docker has two use cases and you're conflating them. The original use case is:
Project Foo uses libqxt version 7. Project Bar uses libqxt version 8. They are incompatible so I'd need two development workstations (or later two LXC containers). This is slow and heavy on diskspace; docker solves that problem. This is a great use of docker.
The second use case that it has morphed into is:
I've decided stack management is hard and I can't tell my downstream which libraries they need because even I'm no longer sure. So I'll just bundle them all as this opaque container and distribute it that way and now literally nobody knows what versions of what software they are running in production. This is a very harmful use case of docker that is unfortunately nearly universal at this point.
Exactly, Docker + Compose is the best way of running server software these days.
I keep my compose files in source control and I have got a CI server building images for anything that doesn’t have first party images available.
Updates are super easy as well, just update the pinned version at the top of the compose file (if not using latest), then `docker-compose pull` followed by `docker-compose up -d`
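Concretely, that pinning looks something like this (image name and tag are made up):

```yaml
# docker-compose.yml - pin the tag so every update is an explicit diff
services:
  app:
    image: ghcr.io/example/app:1.2.3   # bump this line, then:
                                       #   docker-compose pull
                                       #   docker-compose up -d
```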
The entire thing is so much more stable and easier to manage than the RedHat/Ubuntu/FreeBSD systems I used to manage.
I spent the last couple of days trying to set up some software, wading through open source repositories, fixing broken deps, pinning minor versions, browsing SO for obscure errors... I wish I had a Docker container instead.
I just found about a half a gigabyte of files from programs I uninstalled two laptops ago on my machine, and files from a Steam game I got a refund for because it would crash after fifteen minutes. It’s frustrating.
> What's the alternative to running a big project like say Sourcegraph through a docker compose instance? You have to set up ~10 services yourself including Redis, Postgres, some logging thing blah blah. I do not believe this is ever easier than `docker compose up -d`.
Maybe my recollection is just fuzzy, but it seems to me back in the day many projects just had fewer dependencies and more of them were optional. "For larger installations and/or better performance, here's where to configure your redis instance."
Instead now you try and run someone's little self-hosted bookmark manager and it's "just" docker-compose up to spin up the backend API, frontend UI, Postgres, redis, elasticsearch, thumbor, and a KinD container that we use to dynamically run pods for scraping your bookmarked websites!
I'd almost _rather_ that sort of setup be reserved for stuff where it's worth investing the time to set it up.
All of this complexity is easier to get up this way, but that doesn't make it easier to properly manage or a _good_ way to do things. I'd much rather run _one_ instance of Postgres, set up proper backups _once_, perform upgrades _once_, etc. Even if I don't care about the hardware resource usage, I do care about my time. How do I point this at an external postgres instance? Unfortunately, the setup instructions for many services these days start _and end_ at `docker-compose up`.
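What I wish those instructions included is something like a documented override (everything here is hypothetical - most images never even tell you whether a `DATABASE_URL`-style variable exists):

```yaml
# docker-compose.override.yml - point the app at my existing Postgres
# instead of the bundled one (assumes the app honors DATABASE_URL)
services:
  app:
    environment:
      DATABASE_URL: "postgres://app:secret@db.internal:5432/app"
```

You'd then also drop the bundled postgres service from the stack, but good luck finding that documented.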
And this idea of "dockerfiles as documentation" should really die. There are often so many implicit assumptions baked into them as to make them a complete minefield for use as a reference. And unless you're going to dig into a thousand lines of bash scripts, they're not going to answer the questions you actually need answers to like "how do I change this configuration option?".
> figured out how to uninstall a piece of software. `apt remove` will often leave other things lying around.
What's the Docker way to uninstall? In most cases Docker-packaged software uses some kind of volume or local mount to save data. Is there a way to remove these when you remove the container? What about networks? (besides running prune on all available entities)
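As far as I can tell, the closest thing to a clean uninstall for a compose stack is this, and note that bind-mounted host directories still survive it:

```shell
# Stop and remove the stack's containers, named volumes, images, and the
# networks compose created for it
docker compose down --volumes --rmi all
# Anything bind-mounted from the host still has to be deleted by hand
```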
> All the traditional distribution methods have barely even figured out how to uninstall a piece of software. `apt remove` will often leave other things lying around.
Sounds like an issue that needs to be fixed instead of working around it. Also dependency hell. Few distros manage those hard problems nicely but they do exist.
> A week ago I wanted to learn more about data engineering - got a whole Apache Airflow cluster setup with a single compose file, took a few minutes.
Off-topic, but how did you like it? Tried it out a couple of years ago and felt like it overcomplicates things for probably 99% of use cases, and the overhead is huge.
> All the traditional distribution methods have barely even figured out how to uninstall a piece of software.
The most traditional way is to compile under /usr/local/application_name, and symlinking to /usr/local/(s)bin. Remove the folder and the links, and you're done.
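i.e. something like this (application name hypothetical):

```shell
# Install under a private prefix, expose only the binary
./configure --prefix=/usr/local/myapp && make && sudo make install
sudo ln -s /usr/local/myapp/bin/myapp /usr/local/bin/myapp

# Uninstall: remove the link and the directory, done
sudo rm /usr/local/bin/myapp
sudo rm -rf /usr/local/myapp
```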
> `apt remove` will often leave other things lying around.
"remove" is designed to leave config and database files in place, assuming that you might want to install it later without losing data. apt has had a "purge" option for the last decade which removes anything and everything completely.
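For example (package name made up):

```shell
sudo apt remove foo            # removes the program, keeps config/data files
sudo apt purge foo             # removes its config files too
sudo apt autoremove --purge    # also drop dependencies nothing needs anymore
```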
> What's the alternative to running a big project like say Sourcegraph through a docker compose instance? You have to set up ~10 services yourself including Redis, Postgres, some logging thing blah blah. I do not believe this is ever easier than `docker compose up -d`.
Install on a single pet server, or a couple of them. Configure them, forget them. If you fancy, spawn a couple of VMs and snapshot them daily. While it takes a bit more time, for a one-time job I don't mind.
Docker's biggest curse is that it enables many bad practices and advertises them as best practices. Why enable SSL on a service when I can add an SSL-terminating container? Why tune a piece of software when I can spawn five more copies of it for scaling? etc.
Docker is nice when it's packaged and documented well, and good for packaging stable meta-utilities (e.g.: A documenting pipeline which runs daily and terminates), but using it as a silver bullet and accepting its bad practices as gold standards is wasting resources, space, time; and creating security problems at the same time.
Basically you are okay with installing containers blindly but not installing services and learning apt purge. It just comes down to familiarity.
> Anyway, what I really wanted to complain a bit about is the realm of software intended to be run on servers.
Okay.
> I'm not sure that Docker has saved me more hours than it's cost
I'm not sure what's the alternative for servers here. Containers have certainly saved me a lot of headache and created very little overhead. N=1 (as it seems to be for the OP).
> The problem is the use of Docker as a lowest common denominator [...] approach to distributing software to end users.
Isn't the issue specific for server use? Are you running random images from the internet on your servers?
> In the worst case, some Docker images provide no documentation at all
Well, in the same vein as my last comment, Docker is not a silver bullet for everything. You still have to take care of what you're actually running.
Honestly the discussion is valid, but I think the OP aimed at "the current state of things" and hit a very valuable tool that doesn't deserve some of the targeted criticism I read here.
edit: my two cents for those who can't be bothered and expect that just because it's a container, everything will magically be solved: use official images and those from Bitnami. There, you're set.
> I'm not sure what's the alternative for servers here.
Nixos/nixpkgs: isolated dependencies / services, easy to override if needed, configs follow relatively consistent pattern (main options exposed, others can be passed as text), service files can do isolation by whitelisting paths without going full-blown self-contained-os container.
> Are you running random images from the internet on your servers?
Many home server users do this. In business use, unless you invest lots of time into this, a part of your services is still effectively a random image from the internet.
> Isn't the issue specific for server use? Are you running random images from the internet on your servers?
Well exactly, there's what the author is writing about.
The whole article is dedicated to the problem of Docker being used as a distribution method, that is as a replacement for say Debian package.
So in order to use that software you need to run a Docker image from the internet which is often poorly made and incompatible with your infrastructure. Had a package been available you'd simply do "apt-get install" inside your own image built with your infrastructure in mind.
> You still have to take care of what you’re actually running.
This is the central thesis of OP, though. Pre-made/official images are not very good and docker in general doesn’t provide any means to improve/control quality.
You know who really knows how to package software? Mark Russinovich, Nir Sofer, and all the others who gave us beautiful utilities in standalone EXE's that don't require any dependencies.
For the longest time I stayed on older versions of .NET so any version of Windows since 2003 could run my software out of the box. Made use of ILMerge or a custom AssemblyResolve handler to bundle support DLL's right into my single-file tools - it wasn't hard.
I have no complaints about Docker, but I do find where I used to be able to download simple zip files and place their contents into my project I now just get a black box Docker link with zero documentation and that makes me sad.
>One of the great sins of Docker is having normalized running software as root. Yes, Docker provides a degree of isolation, but from a perspective of defense in depth running anything with user exposure as root continues to be a poor practice.
>Perhaps one of the problems with Docker is that it's too easy to use
If you've ever had to make a nonroot docker image (or an image that runs properly with the `--read-only` flag), it's not as trivial and fast to get things going—if it was default, perhaps docker wouldn't have been so successful in getting engineers of all types and levels to adopt it?
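A minimal non-root image sketch (names and paths are illustrative) shows it's only a few extra lines, but those lines are exactly where the permission headaches start:

```dockerfile
FROM debian:bookworm-slim
RUN useradd --system --create-home --uid 10001 appuser
# Writable state must live somewhere appuser owns, hence the chown
COPY --chown=appuser:appuser ./app /opt/app
USER appuser                      # everything from here runs unprivileged
ENTRYPOINT ["/opt/app/run"]
```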
It's rare to find tooling in the DevOps/SRE world that's easy to just get started with productively, so docker's low barrier to entry is an exception IMO. Yes, the downside is you get a lot of poorly-made `Dockerfiles` in the wild, but it's also easy to iterate and improve them, given that there's a common ground. It's a curse I suppose, but I'd rather have a well-understood curse than the alternative being an arbitrary amount of bespoke curses.
> One of the basic concepts shared by most Linux systems is centralization of dependencies. Libraries should be declared as dependencies, and the packages depended on should be installed in a common location for use of the linker. This can create a challenge: different pieces of software might depend on different versions of a library, which may not be compatible. This is the central challenge of maintaining a Linux distribution, in the classical sense: providing repositories of software versions that will all work correctly together.
Maybe someone with more knowledge of Linux history can explain this for me, because I never understood it: Why is it so important that there must always only be one single version of a library installed on the entire system? What keeps a distribution from identifying a library by its name and version and allowing application A to use v1.1 and application B to use v1.2 at the same time?
Instead the solution of distros seems to be to enforce a single version and then to bend the entire world around this restriction - which then leads to unhappy developers that try to sidestep distro package management altogether and undermine the (very reasonable) "all software in a distro is known to work together" invariant.
If there's a chain of dependencies (libraries depending on other libraries), a single process might end up with different versions of the same library in its memory.
That's not going to work, since the interface/API of the library is typically not versioned.
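Worth noting that the dynamic linker does allow different major versions to coexist on disk via sonames; the real constraint is within a single process, as the parent says. E.g. on a typical Debian-ish box (library names illustrative):

```shell
ls /usr/lib/x86_64-linux-gnu/libssl.so.*
# You may see e.g. libssl.so.1.1 and libssl.so.3 side by side; the SONAME
# embedded in each binary picks which one the dynamic linker loads for it.
```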
For security (and general bug fixing). If a security issue is found you want to only fix it in one place. The container alternative is searching down how all the containers were built, which might be very varied and some not even reproducible, and fixing them all.
>Doing anything non-default with networks in Docker Compose will often create stacks that don't work correctly on machines with complex network setups.
I run into this often. Docker networking is a mess.
Depending on the load and use case, encapsulating docker itself in an lxc container or a standalone vm can be a semi maintainable and separated solution.
Dockerfiles are really really simple. They do essentially three things: set a base image, set environment variables, and run scripts. And then as sort of a meta-thing, they prove that the steps in the dockerfile actually work, when you start the docker container and see that it works.
If you don't want to run in docker, a dockerfile is still a perfect setup script. Open it up and see what it does, and use that as install instructions.
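i.e. read it line by line as instructions (this example is entirely hypothetical):

```dockerfile
FROM debian:bookworm                    # = "tested on Debian 12"
RUN apt-get update && \
    apt-get install -y python3 libpq5   # = "install these system packages"
ENV APP_CONFIG=/etc/app/config.yml      # = "the app reads this variable"
COPY . /opt/app                         # = "the code lives here"
CMD ["/opt/app/start.sh"]               # = "launch it like this"
```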
I’ve debugged project Dockerfiles to discover that they were pulling dependencies from URLS with “LATEST” in them. A Dockerfile isn’t really proof that anything currently works.
I remember how things were before docker. Better is not a word I'd use for that.
It sucked. Deploying software meant dealing with lots of operating system and distribution specific configuration, issues, bugs, etc. All of that had to be orchestrated with complicated scripts. First those were hand written, and later we got things like chef and puppet. Docker wiped most of that out and replaced it with simple build time tools that eliminate the need for having a lot of deploy time tools that take ages to run, are very complex to maintain, etc.
I also love to use it for development a lot. It allows me to use lots of different things that I need without having to bother installing those things. Saves a lot of time.
Docker gives us a nice standard way to run and configure whatever. Mostly configuration only gets hard when the underlying software is hard to configure. That's usually a problem with the software, not with docker. These days if you are building something that requires fiddling with some configuration file, it kind of is broken by design. You should design your configuration with docker in mind and not force people to have to mount volumes just so they can set some properties.
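Designed-for-docker configuration then looks like plain environment variables rather than mounted files (service and variable names here are made up):

```yaml
services:
  app:
    image: example/app:1.0
    environment:                 # every knob is an env var - no volume mounts
      APP_PORT: "8080"
      APP_LOG_LEVEL: "info"
      APP_DB_URL: "postgres://db:5432/app"
```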
The reason docker is so widespread is that it is so obviously a good idea and there hasn't been anyone that came along with something better that actually managed to get any traction worth talking about. Most of the docker alternatives tend to be compatible with docker to the point that the differences are mostly in how they run dockerized software.
And while I like docker, I think Kubernetes is a hopelessly over engineered and convoluted mess.
These sorts of things usually take longer to get working than equivalent software distributed as a conventional Linux package or to be built from source.
Yes, but those steps are done once, and they're forced into the shipped docker image. The stupid amount of time we've spent looking for that one package dependency because someone forgot that they installed something to make a project work... or the classic: we set up SSL, and no one knows how SSL was set up once they are done, etc.
Docker forces a lot of the infrastructure decisions that devs make in their sandbox to be actually well defined. Not that it makes their choices any more sane, safe, or secure. But at least someone can take a look at the mess and replicate it, as many times as they want, quickly, break it, fix it, upgrade it, without ever requesting a dev VM build, having sysops install prereqs, etc.
Is docker work? Everything is work. Do I think docker should be the default go-to? No, I'd like people to use app services and simply perform the build on the app node, but that magic is even harder to debug and troubleshoot.
> Making things worse, a lot of Docker images try to make configuration less painful by providing some sort of entry-point shell script that generates the full configuration from some simpler document provided to the container.
I see this as the container world reinventing the wheel of reasonable defaults for software that has long since lost sight of that. Nginx and Apache are two of the worst offenders, which won't just serve files out of a directory without a few dozens lines of config.
I think most of the comments here largely miss the point of the article. The guy doesn't complain about docker as a whole and doesn't criticise its usage for software deployment, which is normally the main application.
He complains about Docker being used as a software distribution method, that is as a replacement for say Debian package, pip package, npm package etc.
So in order to use that software you need to run a Docker image from the internet which is often poorly made and is incompatible with your infrastructure. Had a package been available you'd simply do "install" inside your own image built with your infrastructure in mind.
With that I agree completely. Docker, and even worse docker-compose, are terrible ways of distributing software and should never be used except for demos and for rare cases where software is not distributed to the customer in the normal sense but rather deployed directly into their system.
Docker and docker-compose are still very good methods of software deployment.
This, but I want to add, I think the root cause is less due to the ease of writing a Dockerfile vs writing a deb, rpm, etc. and more due to the low cost of hosting. Whatever low-quality Dockerfile you write, you can sign up on Docker Hub, build and push there, and you're done. GitHub Packages doesn't support deb or rpm, and anyway for better or for worse, packages are tightly coupled to the distribution they're packaged for. That means either getting your package into the package repository of each distribution you want to target, or hosting your own package repository, which is non-trivial in both financial and labor cost.
Docker has a lot more crap, but it did dramatically lower the barrier of entry, which is a good thing. The proper response isn't to bemoan the lower barrier of entry, but to attempt to lower the barrier of entry of traditional packaging.
>He complains about Docker being used as a software distribution method, that is as a replacement for say Debian package, pip package, npm package etc.
If that is a valid complaint, why does he choose two examples where that is not the case? Nextcloud AIO is just one option among many and certainly not the "standard way" of hosting your nextcloud instance. Coincidentally I came from hosting nextcloud the "standard way" and I'm really glad AIO exists and I don't have to manage nonsense like nextcloud ending support for the latest Debian php version. And Home Assistant is mainly distributed as the OS variant with the docker version being the step child afterthought that barely functions.
Docker keeps reminding me of people in the past shipping VM images, or sometimes physical machines, often with rather inappropriate desktop hardware and software. So do these points:
> The problem is the use of Docker as a lowest common denominator, or perhaps more accurately lowest common effort, approach to distributing software to end users.
Shipping physical machines probably is even lower than that though. Or even the VMs.
> One of the great sins of Docker is having normalized running software as root.
I am not as familiar with Docker practices, but unfortunately, AFAICT, people did that frequently on regular systems as well, just to not bother with permissions. (Edit: now I recalled people also not following the FHS and storing things in the root directory inside Docker images, but sometimes it was/is similar without containers as well, and inside a container it does not clutter the host system, at least).
> Having "pi" in the name of a software product is a big red flag in my mind, it immediately makes me think "they will not have documented how to run this on a shared device."
This approach is similar to shipping physical machines. Or at least maintaining odd legacy software on a dedicated machine.
I think a rather pessimistic view is that proper packaging switched to Docker or single-purpose machines, but an optimistic one is that those are the unnecessary VMs and larger single-purpose machines that were replaced by Docker and RPi. Maybe there is a little of both going on.
Unpopular Opinion: Linux got it wrong and Windows got it right. (Or at least closer to right.) Programs shouldn't use centralized dependencies. Programs should ship ALL of their dependencies. Running a program should be as simple as downloading a zip, extracting, and clicking/typing run.
Docker exists because building and running software is so outrageously complex that it requires a full system image. And it turns out Docker didn't actually solve it after all!
It's much easier to ship more than one program, with all their dependencies packaged, because of lower cost of maintenance per program/library. It is also easier to automate downloading and unpacking than do that manually for every package. If you want to try this idea, just download a Linux distribution.
There are a lot of use cases for docker, I think this is complaining about a few specific things in more complex applications that probably shouldn't be deployed that way because you're not just exposing port 80, it's a whole tool deployed to end users, like an appliance. But that's not my primary use case for Docker.
For me, the best thing about Docker is that it's brought the average developer experience from "oh, I think I have an install.sh for that around somewhere" to mostly repeatable builds that are mostly self documenting. Any time a tool is self documenting it's a win. If you want the cake, you have to write down the recipe. That's huge. It's forcing lazy devs (which we all are) to just write it down. At this point the amount of "weird bearded guy tribal knowledge" that is now documented in a Dockerfile somewhere is a treasure trove.
Things still break all the time for a million dumb reasons but as a least common denominator it's a great place to start. It's not a solution for everything and it sounds like that's what this article is about. Docker+Compose is not great for everything, so don't use it for those situations. But it's so much better than what was before.
Without reading the article, I would also agree that Docker has made things better for me rather than worse. If someone else spent the effort to create a Dockerfile for their app, it will reduce the amount of issues I have trying to deploy it greatly. At least at that point they have figured out the majority of dependencies required to run their app, then I only have to troubleshoot the details rather than starting from scratch for whatever server distribution that I'm running it on.
I distribute server-side software and it was a pain to provide the infra requirements. Higher-level users easily miss what is yours and what's another tech; they just don't care to tell the difference. It's not a matter of documenting; it's just not their concern.
That drives many issues, and it snowballs as soon as inexperienced users start to circulate bad practices. In an effort to help other users they often spread more damage.
With Docker I was able to take charge of the infra, which erased all the uncertainties on my next layer, but it spawned the uncomfortable need for users to learn Docker to use my stuff, which they were very reluctant to do.
The best distribution method is to pack a binary release. Not only is the package lightweight, it doesn't need any fancy instructions. You can keep Docker for your internal use; don't ship it to end users.
Having worked at Red Hat and worked on many Docker / Kubernetes systems, I agree with some parts of the article, but my view is that the world is going through a transition phase right now of moving to containerised systems.
Take for example running something like 3Scale (an API gateway) in Docker or Kubernetes. It can be a nightmare to configure and run 3Scale using containers with the multiple memory limits and other container-specific issues. Far easier to get 3Scale running without containers.
So many software systems were not designed in the Docker era, and going forward many container applications will be designed to be easier to configure/use in the Container world due to a "Container/Docker native" mindset when designing the system in the first place
99.9% of the problems you spoke to, which are very real, could be solved if people building the software would just understand one thing: a container is not a mini VM. It is not in any way, shape, or form a virtual machine. If what you need is a lightweight virtual machine, build that. Do not build a container because it's the latest and greatest buzzword. But instead I see large monolithic applications shoved into a container, and then I hear a multitude of complaints about performance issues etc. You may be able to drive a nail with a screwdriver but it's not a good idea.
In the old days, the age-old cry of the beleaguered developer was “My code isn’t buggy, it works on my machine!”, to which the response was “We’re not shipping your machine to the customer!”. Well, science marches on, and we’ve invented a way to do exactly that. Instead of, you know, writing actually robust and simple-to-deploy software.
Has any great design been invented that isn't tied to hardcoded paths preventing multiple versions of the same library and would allow easy linking to any version? Nix? Anything else?
[+] [-] zaptheimpaler|2 years ago|reply
All the traditional distribution methods have barely even figured out how to uninstall a piece of software. `apt remove` will often leave other things lying around.
He complains about configuration being more complex because its not a file? Except it is, and its so much simpler to just have a compose file that tells you EXACTLY which files are used for configuration and where they are. This is still not easy in normal linux packages. You have to google or dig through a list of 10-15 places that may be used for config..
The other brilliant opinion is that docker makes the barrier to entry for packaging too low.. but the alternative is not those things being packaged well in apt or whatever, its them not being packaged at all.. this is not a win.
Whats the alternative to running a big project like say Sourcegraph through a docker compose instance? You have to get set up ~10 services yourself including Redis, Postgres, some logging thing blah blah. I do not believe this is ever easier than `docker compose up -d`.
And if you want to run it without docker, the docker images are basically a self-documenting system for how to do that. This is strictly a win over previous systems, where there generally just is no documentation for that.
Personally docker lowers the activation energy to deploy something that I can now try/run so many complex pieces of software so easily. I run sourcegraph, nginx, postgres, redis on my machine easily. A week ago I wanted to learn more about data engineering - got a whole Apache Airflow cluster setup with a single compose file, took a few minutes. That would have been at least an hour of following some half-outdated deploy guide before so I just wouldn't have done it.
---
The beginning of the post is most revealing though:
> The fact that I have multiple times had to unpack flatpaks and modify them to fix dependencies reveals ... Still, these systems work reasonably well, well enough that they continue to proliferate...
Basically you are okay with opening up flatpaks to modify them, but not with opening up docker images. It just comes down to familiarity.
[+] [-] kbenson|2 years ago|reply
Do they go into every image and run the updates for the base image it's built on? Does that just mean you're now subject to multiple OSes' security patching and best practices?
Do they think it doesn't matter and just pull new images when they're published, which from what I can see is just when the overlaid software has an update, not the system and libraries it relies on?
When glibc or zlib have a security update, what do you do? For RHEL and derivatives it looks like quay.io tries to help with that, but what's the best practice for the rest of the community?
We haven't really adopted containers at all at work yet, and to be honest this is at least part of the reason. It feels like we'd be doing all the patching we currently do, and then a bunch more. Sure, there are some gains, but it also feels like there's a lot more work involved there too.
[+] [-] progval|2 years ago|reply
Even with Docker, it's not as easy as `docker compose up -d`.
You need to set up backups too. How do you back up all these containers? Do they even support it? Meanwhile I already have Postgres and Redis tuned, running, and backed up. I'd even have a warm Postgres replica if I could figure out how to make that work.
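For a containerized database specifically, the usual workaround is to run the dump tool inside the container and write the output to the host; a sketch (container, user, and database names are hypothetical):

```shell
# Back up a containerized Postgres: pg_dump runs inside the container,
# the dump lands on the host.
docker exec my-postgres pg_dump -U app app > /backups/app-$(date +%F).sql

# Redis: trigger a snapshot, then copy the RDB file out of the container.
docker exec my-redis redis-cli BGSAVE
docker cp my-redis:/data/dump.rdb /backups/redis-$(date +%F).rdb
```

This still has to be discovered, scripted, and scheduled per application, which is the commenter's point: nothing in the image tells you how.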
Then you need to monitor them for security updates. How are you notified your Sourcegraph container and its dependencies' containers need to be updated and restarted? If they used system packages, I'd already have this solved with Munin's APT plugin and/or Debian's unattended-upgrades. So you need to install Watchtower or equivalent. Which can't tell the difference between a security update and other updates, so you might have your software updated with breaking changes at any point.
Alternatively, you can locate and subscribe to the RSS feed for the repository of every Dockerfile involved (or use one of these third-party services providing RSS feeds for Dockerhub) and hope they contain a changelog. If I installed from Debian I wouldn't need that because I'm already subscribed to [email protected]
[+] [-] bandrami|2 years ago|reply
Project Foo uses libqxt version 7. Project Bar uses libqxt version 8. They are incompatible, so I'd need two development workstations (or, later, two LXC containers). That is slow and heavy on disk space; docker solves that problem. This is a great use of docker.
The second use case that it has morphed into is:
I've decided stack management is hard and I can't tell my downstream which libraries they need because even I'm no longer sure. So I'll just bundle them all as this opaque container and distribute it that way and now literally nobody knows what versions of what software they are running in production. This is a very harmful use case of docker that is unfortunately nearly universal at this point.
[+] [-] chillfox|2 years ago|reply
I keep my compose files in source control and I have got a CI server building images for anything that doesn’t have first party images available.
Updates are super easy as well: just update the pinned version at the top of the compose file (if not using latest), then `docker-compose pull` followed by `docker-compose up -d`.
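That workflow, spelled out as commands (the image name and versions are illustrative):

```shell
# Bump the pinned tag in the compose file...
sed -i 's|example/app:1.2.3|example/app:1.2.4|' docker-compose.yml

# ...then fetch the newly pinned images and recreate what changed.
docker-compose pull
docker-compose up -d   # only containers whose image changed are restarted
```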
The entire thing is so much more stable and easier to manage than the RedHat/Ubuntu/FreeBSD systems I used to manage.
(I use Alpine Linux + ZFS for the host OS)
[+] [-] angarg12|2 years ago|reply
[+] [-] ckastner|2 years ago|reply
If you mean configuration files, then this is by design.
`apt purge` removes those as well.
[+] [-] hinkley|2 years ago|reply
[+] [-] nucleardog|2 years ago|reply
Maybe my recollection is just fuzzy, but it seems to me back in the day many projects just had fewer dependencies and more of them were optional. "For larger installations and/or better performance, here's where to configure your redis instance."
Instead now you try and run someone's little self-hosted bookmark manager and it's "just" docker-compose up to spin up the backend API, frontend UI, Postgres, redis, elasticsearch, thumbor, and a KinD container that we use to dynamically run pods for scraping your bookmarked websites!
I'd almost _rather_ that sort of setup be reserved for stuff where it's worth investing the time to set it up.
All of this complexity is easier to get up this way, but that doesn't make it easier to properly manage or a _good_ way to do things. I'd much rather run _one_ instance of Postgres, set up proper backups _once_, perform upgrades _once_, etc. Even if I don't care about the hardware resource usage, I do care about my time. How do I point this at an external postgres instance? Unfortunately, the setup instructions for many services these days start _and end_ at `docker-compose up`.
And this idea of "dockerfiles as documentation" should really die. There are often so many implicit assumptions baked into them as to make them a complete minefield for use as a reference. And unless you're going to dig into a thousand lines of bash scripts, they're not going to answer the questions you actually need answers to like "how do I change this configuration option?".
[+] [-] ofrzeta|2 years ago|reply
What's the Docker way of uninstalling? In most cases Docker-packaged software uses some kind of volume or local mount to save data. Is there a way to remove these when you remove the container? What about networks? (Besides running prune on all available entities.)
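For compose-managed apps there is a reasonably complete answer, though it only covers what docker itself tracks; a sketch:

```shell
# Remove the containers, the networks compose created, named volumes,
# AND the images the stack used:
docker compose down --volumes --rmi all

# Caveat: bind mounts (host directories mapped into containers) are not
# tracked by docker at all -- those have to be found in the compose file
# and deleted by hand, much like leftover config under apt.
```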
[+] [-] rigid|2 years ago|reply
Sounds like an issue that needs to be fixed instead of working around it. Also dependency hell. Few distros manage those hard problems nicely but they do exist.
[+] [-] bakuninsbart|2 years ago|reply
Off-topic, but how did you like it? Tried it out a couple of years ago and felt like it overcomplicates things for probably 99% of use cases, and the overhead is huge.
[+] [-] bayindirh|2 years ago|reply
The most traditional way is to compile under /usr/local/application_name and symlink into /usr/local/(s)bin. Remove the folder and the links, and you're done.
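That convention can be demonstrated end to end with a stand-in "application" (a throwaway prefix under /tmp plays the role of /usr/local here):

```shell
# Traditional /usr/local layout, demonstrated with a dummy app.
PREFIX=/tmp/usr-local-demo
mkdir -p "$PREFIX/myapp-1.2/bin" "$PREFIX/bin"
printf '#!/bin/sh\necho "myapp 1.2"\n' > "$PREFIX/myapp-1.2/bin/myapp"
chmod +x "$PREFIX/myapp-1.2/bin/myapp"

# Put it "on PATH" via a symlink into the shared bin directory.
ln -sf "$PREFIX/myapp-1.2/bin/myapp" "$PREFIX/bin/myapp"
"$PREFIX/bin/myapp"    # runs via the symlink

# Uninstall: remove the folder and the link, done.
rm -rf "$PREFIX/myapp-1.2" "$PREFIX/bin/myapp"
```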
> `apt remove` will often leave other things lying around.
"remove" is designed to leave config and database files in place, assuming that you might want to install it later without losing data. apt has "purge" option for the last decade which removes anything and everything completely.
> Whats the alternative to running a big project like say Sourcegraph through a docker compose instance? You have to get set up ~10 services yourself including Redis, Postgres, some logging thing blah blah. I do not believe this is ever easier than `docker compose up -d`.
Install on a single pet server, or on a couple of them. Configure them, forget them. If you fancy, spawn a couple of VMs and snapshot them daily. While it takes a bit more time, for a one-time job I don't mind.
Docker's biggest curse is that it enables many bad practices and advertises them as best practices. Why enable SSL on a service when I can add an SSL-terminating container? Why tune a piece of software when I can spawn five more copies of it for scaling? Etc.
Docker is nice when it's packaged and documented well, and good for packaging stable meta-utilities (e.g. a documenting pipeline which runs daily and terminates), but using it as a silver bullet and accepting its bad practices as gold standards wastes resources, space, and time, and creates security problems at the same time.
Basically you are okay with installing containers blindly, but not with installing services and learning apt purge. It just comes down to familiarity.
Obligatory XKCD: https://xkcd.com/1988/
[+] [-] leonheld|2 years ago|reply
Okay.
> I'm not sure that Docker has saved me more hours than it's cost
I'm not sure what the alternative for servers is here. Containers have certainly saved me a lot of headache and created very little overhead. N=1 (as it seems to be for the OP).
> The problem is the use of Docker as a lowest common denominator [...] approach to distributing software to end users.
Isn't the issue specific to server use? Are you running random images from the internet on your servers?
> In the worst case, some Docker images provide no documentation at all
Well, in the same vein as my last comment, Docker is not a silver bullet for everything. You still have to take care of what you're actually running.
Honestly the discussion is valid, but I think the OP aimed at "the current state of things" and hit a very valuable tool that doesn't deserve some of the targeted criticism I read here.
edit: my two cents for those who can't be bothered and expect that just because it's a container, everything will magically be solved: use official images and those from Bitnami. There, you're set.
[+] [-] viraptor|2 years ago|reply
Nixos/nixpkgs: isolated dependencies / services, easy to override if needed, configs follow relatively consistent pattern (main options exposed, others can be passed as text), service files can do isolation by whitelisting paths without going full-blown self-contained-os container.
> Are you running random images from the internet on your servers?
Many home server users do this. In business use, unless you invest lots of time into this, a part of your services is still effectively a random image from the internet.
> and those from Bitnami
Yes, that's a random image from the internet.
[+] [-] alexey-salmin|2 years ago|reply
Well exactly, that's what the author is writing about.
The whole article is dedicated to the problem of Docker being used as a distribution method, that is as a replacement for say Debian package.
So in order to use that software you need to run a Docker image from the internet which is often poorly made and incompatible with your infrastructure. Had a package been available, you'd simply do "apt-get install" inside your own image, built with your infrastructure in mind.
[+] [-] pointlessone|2 years ago|reply
In other words, random images from the internet.
> You still have to take care of what you’re actually running.
This is the central thesis of OP, though. Pre-made/official images are not very good and docker in general doesn’t provide any means to improve/control quality.
[+] [-] rkagerer|2 years ago|reply
For the longest time I stayed on older versions of .NET so any version of Windows since 2003 could run my software out of the box. Made use of ILMerge or a custom AssemblyResolve handler to bundle support DLL's right into my single-file tools - it wasn't hard.
I have no complaints about Docker, but I do find where I used to be able to download simple zip files and place their contents into my project I now just get a black box Docker link with zero documentation and that makes me sad.
[+] [-] theden|2 years ago|reply
>Perhaps one of the problems with Docker is that it's too easy to use
If you've ever had to make a nonroot docker image (or an image that runs properly with the `--read-only` flag), it's not as trivial and fast to get things going. If that were the default, perhaps docker wouldn't have been so successful in getting engineers of all types and levels to adopt it?
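The mechanics themselves are only a couple of Dockerfile directives; the friction is in file ownership and writable paths. A minimal sketch (base image, user name, and paths are illustrative):

```dockerfile
# Minimal non-root image sketch.
FROM debian:bookworm-slim
RUN useradd --system --create-home appuser
# Files must be owned by the unprivileged user, or writes will fail.
COPY --chown=appuser:appuser app /opt/app
USER appuser                      # everything after this runs unprivileged
WORKDIR /opt/app
ENTRYPOINT ["/opt/app/run"]
```

With `--read-only` on top, any scratch directory the app writes to additionally needs an explicit tmpfs or volume mount, which is where most images break.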
It's rare to find tooling in the DevOps/SRE world that's easy to just get started with productively, so docker's low barrier to entry is an exception IMO. Yes, the downside is you get a lot of poorly-made `Dockerfiles` in the wild, but it's also easy to iterate and improve them, given that there's a common ground. It's a curse I suppose, but I'd rather have a well-understood curse than the alternative being an arbitrary amount of bespoke curses.
[+] [-] numbsafari|2 years ago|reply
It’s all fun and games until the bills come due.
[+] [-] xg15|2 years ago|reply
Maybe someone with more knowledge of Linux history can explain this for me, because I never understood it: Why is it so important that there must always only be one single version of a library installed on the entire system? What keeps a distribution from identifying a library by its name and version and allowing application A to use v1.1 and application B to use v1.2 at the same time?
Instead the solution of distros seems to be to enforce a single version and then to bend the entire world around this restriction - which then leads to unhappy developers that try to sidestep distro package management altogether and undermine the (very reasonable) "all software in a distro is known to work together" invariant.
So, why?
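(For reference, the ELF soname mechanism itself does allow incompatible major versions side by side; distros limit themselves to one per release mostly for maintenance and security-patching reasons, not technical ones. A sketch with dummy files standing in for real shared objects, which follow exactly this layout:)

```shell
# Soname versioning: two incompatible major versions coexisting.
mkdir -p /tmp/libdemo
cd /tmp/libdemo
touch libfoo.so.1.0.3 libfoo.so.2.1.0   # the real files, one per major version

# Soname symlinks: what already-built programs load at runtime.
ln -sf libfoo.so.1.0.3 libfoo.so.1      # app A links against .so.1
ln -sf libfoo.so.2.1.0 libfoo.so.2      # app B links against .so.2

# Unversioned dev symlink: what "-lfoo" resolves to at build time.
ln -sf libfoo.so.2 libfoo.so
ls -l
```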
[+] [-] Fronzie|2 years ago|reply
[+] [-] rwmj|2 years ago|reply
[+] [-] oddmiral|2 years ago|reply
Fedora had support for modularity: https://docs.fedoraproject.org/en-US/modularity/ . Join the Fedora project, please.
[+] [-] ycombinatrix|2 years ago|reply
I run into this often. Docker networking is a mess.
[+] [-] j45|2 years ago|reply
[+] [-] kobalsky|2 years ago|reply
[+] [-] notatoad|2 years ago|reply
if you don't want to run in docker, a dockerfile is still a perfect setup script. open it up and see what it does, and use that as install instructions.
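Read that way, each directive maps to a manual step; a hypothetical example:

```dockerfile
FROM debian:bookworm                  # = start from a Debian 12 box
RUN apt-get update && \
    apt-get install -y python3       # = install these packages
COPY app /opt/app                     # = put the code in /opt/app
ENV APP_PORT=8080                     # = export this variable before starting
CMD ["python3", "/opt/app/main.py"]   # = the actual start command
```

Implicit assumptions (base-image contents, build context layout) still have to be checked by hand, but the ordering and the package list are at least written down.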
[+] [-] jamesgeck0|2 years ago|reply
[+] [-] jillesvangurp|2 years ago|reply
It sucked. Deploying software meant dealing with lots of operating system and distribution specific configuration, issues, bugs, etc. All of that had to be orchestrated with complicated scripts. First those were hand written, and later we got things like chef and puppet. Docker wiped most of that out and replaced it with simple build time tools that eliminate the need for having a lot of deploy time tools that take ages to run, are very complex to maintain, etc.
I also love to use it for development a lot. It allows me to use lots of different things that I need without having to bother installing those things. Saves a lot of time.
Docker gives us a nice standard way to run and configure whatever. Mostly configuration only gets hard when the underlying software is hard to configure. That's usually a problem with the software, not with docker. These days if you are building something that requires fiddling with some configuration file, it kind of is broken by design. You should design your configuration with docker in mind and not force people to have to mount volumes just so they can set some properties.
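In practice, "designing configuration with docker in mind" usually means exposing every knob as an environment variable so nothing has to be mounted; a sketch (service and variable names hypothetical):

```yaml
services:
  app:
    image: example/app:1.2.3
    environment:
      APP_LOG_LEVEL: info          # instead of editing a mounted config file
      APP_MAX_CONNECTIONS: "100"
    # no volume mounts needed just to set a few properties
```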
The reason docker is so widespread is that it is so obviously a good idea and there hasn't been anyone that came along with something better that actually managed to get any traction worth talking about. Most of the docker alternatives tend to be compatible with docker to the point that the differences are mostly in how they run dockerized software.
And while I like docker, I think Kubernetes is a hopelessly over engineered and convoluted mess.
[+] [-] what-the-grump|2 years ago|reply
Yes, but they are done once, and you're forced to ship them in the docker image. Think of the stupid amount of time we've spent looking for that one package dependency because someone forgot they installed something to make a project work... Or the classic: we set up SSL, and no one remembers how SSL was set up once it's done, etc.
Docker forces a lot of the infrastructure decisions that devs make in their sandbox to be actually well defined. Not that it makes their choices any more sane, safe, or secure. But at least someone can take a look at the mess and replicate it, as many times as they want, quickly; break it, fix it, upgrade it, without ever requesting a dev VM build, having sysops install prereqs, etc.
Is docker work? Everything is work. Do I think docker should be the default go-to? No, I'd like people to use app services and simply perform the build on the app node, but that magic is even harder to debug and troubleshoot.
[+] [-] chasil|2 years ago|reply
The world was a much simpler place so long ago.
[+] [-] cyrnel|2 years ago|reply
I see this as the container world reinventing the wheel of reasonable defaults for software that has long since lost sight of them. Nginx and Apache are two of the worst offenders; neither will just serve files out of a directory without a few dozen lines of config.
[+] [-] alexey-salmin|2 years ago|reply
He complains about Docker being used as a software distribution method, that is as a replacement for say Debian package, pip package, npm package etc.
So in order to use that software you need to run a Docker image from the internet which is often poorly made and is incompatible with your infrastructure. Had a package been available you'd simply do "install" inside your own image built with your infrastructure in mind.
With that I agree completely. Docker, and even worse docker-compose, are terrible ways of distributing software and should never be used, except for demos and for rare cases where software is not distributed to the customer in the normal sense but rather deployed directly into their system.
Docker and docker-compose are still very good methods of software deployment.
[+] [-] solatic|2 years ago|reply
Docker has a lot more crap, but it did dramatically lower the barrier of entry, which is a good thing. The proper response isn't to bemoan the lower barrier of entry, but to attempt to lower the barrier of entry of traditional packaging.
[+] [-] cderpz|2 years ago|reply
If that is a valid complaint, why does he choose two examples where that is not the case? Nextcloud AIO is just one option among many and certainly not the "standard way" of hosting your Nextcloud instance. Coincidentally, I came from hosting Nextcloud the "standard way" and I'm really glad AIO exists, so I don't have to manage nonsense like Nextcloud ending support for the latest Debian PHP version. And Home Assistant is mainly distributed as the OS variant, with the docker version being the stepchild afterthought that barely functions.
[+] [-] defanor|2 years ago|reply
> The problem is the use of Docker as a lowest common denominator, or perhaps more accurately lowest common effort, approach to distributing software to end users.
Shipping physical machines is probably even lower than that, though. Or even VMs.
> One of the great sins of Docker is having normalized running software as root.
I am not as familiar with Docker practices, but unfortunately, AFAICT, people did that frequently on regular systems as well, just to not bother with permissions. (Edit: now I recalled people also not following the FHS and storing things in the root directory inside Docker images, but sometimes it was/is similar without containers as well, and inside a container it does not clutter the host system, at least).
> Having "pi" in the name of a software product is a big red flag in my mind, it immediately makes me think "they will not have documented how to run this on a shared device."
This approach is similar to shipping physical machines. Or at least maintaining odd legacy software on a dedicated machine.
I think a rather pessimistic view is that proper packaging switched to Docker or single-purpose machines, but an optimistic one is that those are the unnecessary VMs and larger single-purpose machines that were replaced by Docker and RPi. Maybe there is a little of both going on.
[+] [-] ungamedplayer|2 years ago|reply
This was an easy way to tell if you were dealing with a Muppet. If you saw this you would know that the software was going to be a problem.
[+] [-] forrestthewoods|2 years ago|reply
Docker exists because building and running software is so outrageously complex that it requires a full system image. And it turns out Docker didn't actually solve it after all!
[+] [-] oddmiral|2 years ago|reply
[+] [-] owyn|2 years ago|reply
For me, the best thing about Docker is that it's brought the average developer experience from "oh, I think I have an install.sh for that around somewhere" to mostly repeatable builds that are mostly self documenting. Any time a tool is self documenting it's a win. If you want the cake, you have to write down the recipe. That's huge. It's forcing lazy devs (which we all are) to just write it down. At this point the amount of "weird bearded guy tribal knowledge" that is now documented in a Dockerfile somewhere is a treasure trove.
Things still break all the time for a million dumb reasons but as a least common denominator it's a great place to start. It's not a solution for everything and it sounds like that's what this article is about. Docker+Compose is not great for everything, so don't use it for those situations. But it's so much better than what was before.
[+] [-] Onawa|2 years ago|reply
[+] [-] __MatrixMan__|2 years ago|reply
"Mostly" is going a bit far there.
If you want those things you should use nix to build your docker images, but you're going to have to want them pretty badly.
[+] [-] CR007|2 years ago|reply
That drives many issues, and it snowballs as soon as inexperienced users start to spread bad practices. In an effort to help other users, they often spread more damage.
With Docker I was able to take charge of the infra, which erased all the uncertainties on my next layer, but it spawned the uncomfortable need to learn Docker in order to use my stuff, which users took to very reluctantly.
The best distribution method is to ship a binary release. Not only is the package lightweight, it doesn't need any fancy instructions. You can keep Docker for your internal use; don't ship it to end users.
[+] [-] zubairq|2 years ago|reply
Take for example running something like 3scale (an API gateway) in Docker or Kubernetes. It can be a nightmare to configure and run 3scale in containers, with the multiple memory limits and other container-specific issues. It's far easier to get 3scale running without containers.
So many software systems were not designed in the Docker era, and going forward many container applications will be designed to be easier to configure/use in the Container world due to a "Container/Docker native" mindset when designing the system in the first place
[+] [-] linuxrebe1|2 years ago|reply
[+] [-] v3ss0n|2 years ago|reply
LXC, for example, designed containers to work like VMs.
[+] [-] teddyh|2 years ago|reply
See also: <https://blog.brixit.nl/developers-are-lazy-thus-flatpak/>
[+] [-] eviks|2 years ago|reply
[+] [-] colordrops|2 years ago|reply