zerotolerance | 2 months ago

I always like finding people advocating for older sage knowledge and bringing it forward for new audiences. That said, as someone who wrote a book about Docker and has lived the full container journey, I tend to skip the containerized build altogether. Docker makes for great packaging. But containerizing every step of the build process, or even just doing it all in one big container, is a bit extra. Positioning it as a build scripting solution was silly.

maccard|2 months ago

I’m inclined to agree with you about not building in containers. That said, I find myself going around in circles. We have an app that uses a specific toolchain version: how do we install that version on a build machine without requiring an SRE ticket to update our toolchain?

Containers nicely solve this problem. Then your builds get a little slow, so you want to cache things, and now your Dockerfile looks something like the sketch below. You want to run some tests - now it’s even more complicated. How do you debug those tests? How do those tests communicate with external systems (database/redis)? Eventually you end up back at “let’s just containerise the packaging”.
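(A hypothetical sketch of the kind of Dockerfile this tends to grow into, assuming BuildKit cache mounts and a Maven toolchain; the image tag, paths, and goals are illustrative, not anyone's actual setup:)

```dockerfile
# syntax=docker/dockerfile:1
FROM maven:3.9-eclipse-temurin-21 AS build
WORKDIR /src

# copy only the build definition first so dependency resolution can be cached
COPY pom.xml .
RUN --mount=type=cache,target=/root/.m2 mvn -q dependency:go-offline

# now copy the sources and build, reusing the same cache mount
COPY src ./src
RUN --mount=type=cache,target=/root/.m2 mvn -q package
```

The cache mount keeps downloaded dependencies out of the image layers while still persisting them between builds.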

cogman10|2 months ago

You can mount the current directory into the container and run an image of your tool.

Here's an example of that using the Maven Docker image.

`docker run -it --rm --name my-maven-project -v "$(pwd)":/usr/src/mymaven -w /usr/src/mymaven maven:3.3-jdk-8 mvn clean install`

You can get as fancy as you like with things like your `.m2` directory; this just gives you the basics of how you'd do that.
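For instance, one way to persist the dependency cache between runs is to also mount the host's `~/.m2` into the container (same paths as the example above; adjust to taste):

`docker run -it --rm -v "$(pwd)":/usr/src/mymaven -v "$HOME/.m2":/root/.m2 -w /usr/src/mymaven maven:3.3-jdk-8 mvn clean install`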

pstuart|2 months ago

Depending on how the container is structured, you could keep the original container as the baseline default, and then have "enhanced" containers that use it as a base and overlay the caching and other extras to serve that specialized need.
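A hypothetical sketch of that layering, sticking with the Maven example above (the pre-baked dependency step is illustrative):

```dockerfile
# "enhanced" image: the plain toolchain image stays the baseline,
# and the project-specific cache is layered on top
FROM maven:3.3-jdk-8

# pre-fetch dependencies so day-to-day builds don't re-download everything
COPY pom.xml /tmp/pom.xml
RUN mvn -q -f /tmp/pom.xml dependency:go-offline
```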

yjftsjthsd-h|2 months ago

I quite strongly disagree; a Dockerfile is a fairly good way to describe builds, it's a uniform approach across ecosystems, and the self-contained nature is especially useful for building software without cluttering the host with build dependencies or clashing with other things you want to build. I like it so much that I've started building binaries in docker even for programs that will actually run on the host!

solatic|2 months ago

It can indeed be uniform across ecosystems, but it's slow. There's a very serious difference between being on a team where CI takes ~1 minute to run, vs. being on a team where CI takes a half hour or even, gasp, longer. A large part of that is the testing story, sure, but when you're really trying to optimize CI times, every second counts.

dboreham|2 months ago

Agree. Using a container to build the source that is then packaged as a "binary" in the resulting container always seemed odd to me. IMHO we should have stuck with the old ways: build the product on a regular computer. That outputs some build artifacts (binaries, libraries, etc.). Docker should take those artifacts and not be hosting the compiler and whatnot.
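A hypothetical sketch of that split: the binary is built on the host beforehand, and Docker only does the packaging (names and base image are illustrative):

```dockerfile
# packaging only: no compiler, no build tools in the image
FROM debian:12-slim
COPY myapp /usr/local/bin/myapp
ENTRYPOINT ["/usr/local/bin/myapp"]
```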

rcxdude|2 months ago

If anything the build being in a container is the more valuable bit, though mainly because the container is usually more repeatable by having a scripted setup itself. Though I dunno why the build environment and the runtime image would be the _same_ container in the end.
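(One common way to keep the two separate while still building in a container is a multi-stage Dockerfile; a hypothetical sketch assuming a Go project, with illustrative image tags and paths:)

```dockerfile
# build stage: the toolchain lives here and nowhere else
FROM golang:1.23 AS build
WORKDIR /src
COPY . .
RUN go build -o /out/myapp .

# runtime stage: only the artifact is shipped
FROM debian:12-slim
COPY --from=build /out/myapp /usr/local/bin/myapp
ENTRYPOINT ["/usr/local/bin/myapp"]
```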

(and of course, nix kinda blows both out of the water for consistency)

exe34|2 months ago

nix allows you to build docker containers with anything you can build in nix.

solatic|2 months ago

Agree, and I would go a step further and suggest dropping Docker altogether for building the final container image. It's quite sad that Docker requires root to run, and all the other rootless solutions seem to require overcomplicated setups. Rootless matters because, unless you're providing CI as a public service and you're really concerned about malicious attackers, you will get way, way, way more value out of semi-permanent CI workers that can maintain persistent local caches than out of the overhead of VM-enforced isolation. You just need an easy way to wipe the caches remotely, and a best effort at otherwise isolating CI builds.

A lot of teams should think long and hard about just taking build artifacts, throwing them into their expected places in a directory that stands in for the chroot, generating a manifest JSON, and wrapping everything in a tar, which is all a container image really is.
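A rough, untested sketch of that tar-it-up idea (the layout follows the `docker save` format; every name and path here is illustrative):

```sh
# assemble a root filesystem with the prebuilt artifact in its expected place
rm -rf rootfs image && mkdir -p rootfs/app image
cp myapp rootfs/app/myapp

# the filesystem becomes a single layer; its digest goes into the image config
tar -C rootfs -cf image/layer.tar .
DIFF_ID="sha256:$(sha256sum image/layer.tar | cut -d' ' -f1)"

# minimal image config and manifest
cat > image/config.json <<EOF
{"architecture":"amd64","os":"linux",
 "config":{"Cmd":["/app/myapp"]},
 "rootfs":{"type":"layers","diff_ids":["$DIFF_ID"]}}
EOF
cat > image/manifest.json <<EOF
[{"Config":"config.json","RepoTags":["myapp:latest"],"Layers":["layer.tar"]}]
EOF

# the whole thing, tarred, is something `docker load` should accept
tar -C image -cf myapp-image.tar .
```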

OptionOfT|2 months ago

I like to build my stuff inside of Docker because it is my moat against changes in the environment.

We have our base images, and in there we install dependencies pinned by version (since apt seemingly doesn't have any lockfile support?). That image then is the base for our code build.

In the subsequent build, EVERYTHING is versioned, which allows us to establish provenance all the way up to the base image.
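(A hypothetical sketch of that kind of pinning; the package names and versions are made up for illustration:)

```dockerfile
FROM debian:12.8

# pin exact package versions since apt has no lockfile of its own
RUN apt-get update && apt-get install -y --no-install-recommends \
        libpq5=15.10-0+deb12u1 \
        ca-certificates=20230311 \
    && rm -rf /var/lib/apt/lists/*
```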

On top of that, when we promote images from PR -> main we don't even rebuild the code. It's the same image that gets retagged. All in the name of preserving provenance.

miladyincontrol|2 months ago

I mean, personally I find nspawn to be a pretty simple way of doing rootless containers. Replace the manifest JSON with a systemd service file and you've got a rootless container that can run on most Linux systems without any non-systemd dependencies or strange configuration required. You don't even need to extract the tarball.