kilburn | 8 months ago
If your images share the same base image, then the libraries exist on disk only once and you get the same benefits as a non-Docker setup.
This depends on the storage driver, though. It is true at least for overlayfs, the default and most common driver [1].
[1] https://docs.docker.com/engine/storage/drivers/overlayfs-dri...
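A quick way to see this for yourself (image names here are hypothetical, not from the thread): two images built from the same base report identical layer digests, and the overlayfs driver stores each shared layer on disk only once.

```shell
# Hypothetical example: inspect two images built FROM the same base.
# Layers they share appear with identical sha256 digests, and the
# overlayfs storage driver keeps a single copy of each on disk.
docker image inspect --format '{{json .RootFS.Layers}}' myorg/service-a
docker image inspect --format '{{json .RootFS.Layers}}' myorg/service-b

# Confirm which storage driver the daemon is actually using:
docker info --format '{{.Driver}}'
```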
zdw | 8 months ago
Let's say something like Heartbleed (which primarily affected OpenSSL) happens again. With native packages, you update the package, restart the few things that depend on it via shared libraries, and you're patched. OS vendors are highly motivated to ship this update, and often get pre-announcement info about security issues, so it tends to go quickly.
With Docker, someone has to rebuild every container image that contains a copy of the library. This will necessarily lag and be delivered in piecemeal fashion: if you have 5 containers, each needs its own update, which, if you don't self-build and self-update, can take a while and is substantially more work than `apt-get update && apt-get upgrade && reboot`.
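To make the "rebuild every container" cost concrete, here is a minimal sketch, with hypothetical service names and directory layout: every image has to be rebuilt against the patched base and redeployed, versus a single package upgrade on the host.

```shell
# Hypothetical sketch: after the base image is patched upstream,
# every locally built image must be rebuilt and redeployed.
# --pull forces a fresh pull of the base image in each FROM line.
for svc in api worker scheduler; do
  docker build --pull -t "myorg/$svc:latest" "./$svc"
  docker compose up -d "$svc"
done

# Versus the host-package equivalent of the same fix:
# apt-get update && apt-get upgrade && reboot
```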
Incidentally, the same applies to most languages that prefer or require static linking.
As mentioned elsewhere in the thread, it's a tradeoff, and people should be aware of the tradeoffs around update and data lifecycle before making deployment decisions.
motorest | 8 months ago
I think you're grossly overstating how much work it takes to refresh your containers.
In my case, my personal projects have nightly builds that pull the latest version of the base image, and the services are just redeployed right under your nose. All it took was adding a cron trigger to the same CI/CD pipeline.
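A minimal sketch of that setup, assuming a plain host with cron rather than any particular CI system (the paths, image names, and script are all hypothetical):

```shell
# Hypothetical crontab entry: rebuild nightly at 03:00 and redeploy.
# 0 3 * * * /opt/deploy/nightly-rebuild.sh

# nightly-rebuild.sh
# --pull fetches the latest base image so upstream security fixes
# land in the rebuilt image without any manual intervention.
docker build --pull -t myorg/app:latest /opt/deploy/app
docker compose -f /opt/deploy/compose.yml up -d app
```

In a hosted CI system the same effect comes from a scheduled trigger on the existing build pipeline instead of a crontab entry.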