top | item 40353846

steve_rambo | 1 year ago

I wish we would get rid of Dockerfile in favor of something like what buildah does:

https://github.com/containers/buildah/blob/main/examples/lig...

Since Dockerfile is a rather limited and (IMHO) poorly executed re-implementation of a shell script, why not use shell directly? Not even bash with coreutils is necessary: even POSIX sh with busybox can do much more than Dockerfile, and you can use something else (like Python) and take it very far indeed.
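A rough sketch of the style the linked buildah example uses: each build step is an ordinary shell command, so the usual shell tools (functions, loops, conditionals) apply. The base image, package, and paths here are illustrative, and buildah must be installed for this to run.

```shell
#!/bin/sh
set -eu

# Start a working container from a base image (image name is illustrative).
ctr=$(buildah from docker.io/library/alpine:latest)

# Build steps are plain commands, so they can be wrapped in shell
# functions and reused across several image builds.
buildah run "$ctr" -- apk add --no-cache python3

# Copy files in and set image metadata (paths are illustrative).
buildah copy "$ctr" ./app /opt/app
buildah config --entrypoint '["/usr/bin/python3", "/opt/app/main.py"]' "$ctr"

# Commit the working container as an image, then remove it.
buildah commit "$ctr" localhost/myapp:latest
buildah rm "$ctr"
```

Because this is just sh, the "install package X" step could be extracted into a function and called from several build scripts, which is exactly the kind of composition a Dockerfile does not offer.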

mass_and_energy | 1 year ago

That's like saying "why do we bother with makefiles when we can just write a shell script that invokes the toolchain as needed based on positional arguments?". We certainly could do that, but it's overcomplicated compared to the existing solution and would represent a shift away from what most Docker devs have grown to use efficiently. What's so bad about Dockerfile anyway?

MereInterest | 1 year ago

> What's so bad about Dockerfile anyway?

Things I've run into:

* Cannot be composed. Suppose I have three packages, A/B/C. I would like to build each package in its own image, and also build an image with all three packages installed. I cannot extract the shared functionality into a subroutine. Instead, I need to write a separate build script, add it to the image, and run it during the build.

* Easy to have unintentional image bloat. The obvious way to install a package in a Debian-based container is `RUN apt-get update` followed by `RUN apt-get install FOO`. However, this bakes the `/var/lib/apt/lists` directory into the image's layers, and a later `RUN rm -rf /var/lib/apt/lists/*` does not shrink the download, since the files still exist in the earlier layer. To avoid bloating the image, all three steps (update, install, rm) must be in a single RUN command.

* Cannot mark commands as order-independent. If I am installing N different packages, each RUN step depends on every step before it, even when the installations could happen in any order. Changing one step invalidates the cache for everything after it.

* Cannot do a dry run. There is no command that will tell you whether an image is up-to-date with the current Dockerfile, or which stages must be rebuilt to bring it up to date.

* Must be sequestered away in a subdirectory. Anything in the Dockerfile's directory is treated as part of the build context and is copied to the Docker daemon. Having a Dockerfile in a top-level source directory will cause all docker commands to grind to a halt. (Gee, if only there were an explicit ADD command indicating which files are actually needed.)

* Must NOT be sequestered away in a subdirectory. A Dockerfile may only add files to the image if they are contained within the Dockerfile's own directory.

* No support for symlinks. Symlinks would be the obvious way to resolve the contradiction between the previous two bullet points, but they are not allowed. Instead, you must restructure your entire project around whether Docker requires a file. (The documented reason is that the target of a symlink can change. If this reasoning were applied consistently, I might accept it, but the ADD command can download from a URL. Pretending that symlinks are somehow less stable than a remote resource is ridiculous.)

* Requires periodic cleanup. A failed build leaves a container behind in an exited state. This happens even when the build was triggered by a command that explicitly tries to avoid leaving containers around (e.g. `docker run --rm`, where the image must be built before running).
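The apt bloat point above comes down to layer semantics: each RUN creates a layer, and deleting files in a later layer only hides them. A minimal sketch of the usual workaround (the package name is illustrative):

```dockerfile
# Debian/Ubuntu base assumed.
FROM debian:stable-slim

# All three steps in one RUN, so the apt lists never land in any layer.
RUN apt-get update \
 && apt-get install -y --no-install-recommends curl \
 && rm -rf /var/lib/apt/lists/*
```

Splitting the same three commands across separate RUN instructions produces a functionally identical image that is larger to pull, because the `/var/lib/apt/lists` contents survive in the intermediate layer.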