item 38915336


miquong | 2 years ago

For image and layer manipulation, crane is awesome - as is the underlying go-containerregistry library.

It lets you add new layers, or edit any metadata (env vars, labels, entrypoint, etc) in existing images. You can also "flatten" an image with multiple layers into a single layer. Additionally you can "rebase" an image (re-apply your changes onto a new/updated base image). It does all this directly in the registry, so no docker needed (though it's still useful for creating the original image).

https://github.com/google/go-containerregistry/blob/main/cmd...

(updated: better link)
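
Roughly, those crane operations look like this (image names are placeholders; check `crane help <command>` for the exact flags in your version):

```shell
# edit metadata directly in the registry, no docker daemon involved
crane mutate registry.example.com/app:latest \
  --entrypoint /app/server \
  --env FOO=bar \
  --label org.example.team=infra

# collapse all layers into a single layer
crane flatten registry.example.com/app:latest -t registry.example.com/app:flat

# re-apply your layers on top of an updated base image
crane rebase registry.example.com/app:latest \
  --old_base registry.example.com/base:v1 \
  --new_base registry.example.com/base:v2 \
  -t registry.example.com/app:rebased
```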


lyxell | 2 years ago

This is a great recommendation. It is worth noting that unlike Docker, crane is root- and daemonless, which makes it work great in Nix (it's packaged as 'crane' in the Nix repository). This lets Nix manage the dependencies both for building (e.g. Go) and for packaging and deploying (e.g. GNU tar, crane).

pbowyer | 2 years ago

Is there any performance benefit to having fewer layers? My understanding is that there's no gain by merging layers as the size of the image remains constant.

apt-get | 2 years ago

There are some useful cases — for example, if you're taking a rather bloated image as a base and trimming it down with `rm` commands, those will be saved as differential layers, which will not reduce the size of the final image in the slightest. Only merging will actually "register" these deletions.

fishpen0 | 2 years ago

Less performance and more security. Lots of amateur images include a secret file, or inadvertently store a secret in a layer, without realizing that an `rm` or other process in a later layer doesn't actually eliminate it. If the final step of your build squashes the filesystem flat again, you can remove a lot of potentially exposed metadata and secrets stored in intermediate layers.
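
You can see this for yourself (hypothetical image name; the on-disk layout of `docker save` output varies by Docker version, so adjust the paths):

```shell
# a Dockerfile like this leaks the key even though it was "removed":
#   COPY id_rsa /root/.ssh/id_rsa
#   RUN rm /root/.ssh/id_rsa

docker save myapp:latest -o myapp.tar && tar -xf myapp.tar

# the earlier layer's tarball still carries the file
for layer in */layer.tar; do
  tar -tf "$layer" | grep -q id_rsa && echo "id_rsa present in $layer"
done

# flattening rewrites the filesystem as one layer, dropping deleted files
crane flatten myapp:latest -t myapp:flat
```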

yrro | 2 years ago

If you've got a 50-layer image then each time you open a file that doesn't exist, I believe the kernel has to look for it in all 50 layers before the lookup can fail with ENOENT.

dayjaby | 2 years ago

One simple case where the resulting image is bigger than necessary:

```
COPY ./package.deb /tmp/package.deb
RUN dpkg -i /tmp/*.deb && rm -rf /tmp/*.deb
```

This results in two layers, and the layer containing the huge .deb remains part of the final image unless you use a multi-stage build.
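
If BuildKit is available, a bind mount avoids the throwaway layer entirely: the .deb is visible during the RUN step but never committed to the image (a sketch, assuming the .deb sits next to the Dockerfile):

```dockerfile
# syntax=docker/dockerfile:1
FROM debian:bookworm-slim
RUN --mount=type=bind,source=package.deb,target=/tmp/package.deb \
    dpkg -i /tmp/package.deb
```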

momothereal | 2 years ago

I'm working on a tool that does the opposite: splitting layers into smaller, deterministic deltas.

mcpherrinm | 2 years ago

If files are overwritten or removed in a lower layer, there can be size savings from that.

natebc | 2 years ago

some startup performance savings from fewer HTTP requests to fetch the image. small for sure, but it's something?