This reminds me of how some Vagrant images run scripts, maybe all of them do. I started using Vagrant a month ago and recently noticed that the official debian/buster64 image wants to run a script with sudo.
Yet the generic/debian10 and centos/7 images I otherwise use require no such privilege escalation to function.
It seems unnecessary and dangerous, so I refuse to use such images where possible. But I did also set up a sudoers config that allows only the NFS commands they need, just in case.
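For what it's worth, the allow-list looks roughly like this. This is a sketch: the exact commands Vagrant's NFS synced-folder support runs depend on the host OS and version, and the `%vagrant-users` group name is my own choice.

```
# /etc/sudoers.d/vagrant-nfs (sketch; verify the commands your Vagrant version runs)
Cmnd_Alias VAGRANT_EXPORTS_MV  = /bin/mv -f /tmp/exports-* /etc/exports
Cmnd_Alias VAGRANT_NFSD_CHECK  = /usr/bin/systemctl status --no-pager nfs-server.service
Cmnd_Alias VAGRANT_NFSD_START  = /usr/bin/systemctl start nfs-server.service
Cmnd_Alias VAGRANT_NFSD_APPLY  = /usr/sbin/exportfs -ar
%vagrant-users ALL=(root) NOPASSWD: VAGRANT_EXPORTS_MV, VAGRANT_NFSD_CHECK, VAGRANT_NFSD_START, VAGRANT_NFSD_APPLY
```

The point is that Vagrant only gets the handful of commands it needs for NFS exports, not a blanket sudo.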
Point being that all these new tools we're using involve a lot of trust. Many of them can be treated just like curl piping to bash.
Except that with curl you can at least inspect the contents of the file beforehand and hope the same file is served when you actually pipe it; here you can't even do that. It's more like running setup.exe under the administrator account.
I've been leveraging docker buildx to create multi-architecture images for a few months. It's quite nice and simple, and I've even been able to automate the multiarch builds with GitHub Actions. See an example repo of mine here: https://github.com/jmb12686/docker-cadvisor
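For anyone curious, the workflow boils down to something like the following. The image tag and platform list are placeholders from my setup; the `docker/*` steps are the official actions, but pin whatever versions you've verified.

```yaml
name: multiarch-build
on: [push]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: docker/setup-qemu-action@v3      # registers qemu via binfmt_misc
      - uses: docker/setup-buildx-action@v3
      - uses: docker/login-action@v3
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}
      - uses: docker/build-push-action@v5
        with:
          platforms: linux/amd64,linux/arm64,linux/arm/v7
          push: true
          tags: jmb12686/docker-cadvisor:latest   # placeholder tag
```

buildx transparently runs the non-native stages under qemu emulation, so the same Dockerfile produces all three architectures.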
I wish there was something like Bazel, Buck, Pants, or Please built on top of docker/crio.
The docker build cache, and dockerization of tools, have made building, testing, and deploying software so much simpler. Unfortunately, the next step in build systems (in my opinion) still assumes that people want mutable state on their host systems.
I hope someone extends BuildKit into a system like Bazel. All rules can be executed in containers, all software and deps can be managed in build contexts, you automatically get distributed test runners/builds/cache by talking to multiple docker daemons, etc.
The docker build cache alone feels like magic for long from-scratch builds. It feels tedious breaking out individual steps, but I've yet to regret the extra effort.
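The "breaking out individual steps" pattern, sketched for a hypothetical Node app: copy the dependency manifest and run the slow install before copying the source, so source edits don't invalidate the expensive layer.

```dockerfile
FROM node:20-alpine
WORKDIR /app

# Slow step first, keyed only on the manifests: this layer is reused
# from cache as long as package*.json are unchanged.
COPY package.json package-lock.json ./
RUN npm ci

# Fast-changing source last; edits here only rebuild from this point on.
COPY . .
RUN npm run build
```

The ordering matters because a changed file in any COPY invalidates that layer and everything after it.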
The original purpose of most computer programmers was to write a program that solved an immediate technical problem or business requirement. The programmer was not concerned with the "technical" (though still important) question of which architecture the CPU used. The first time a programmer ran into that question, the reaction was usually to just try compiling a program that would run on that CPU.
I used to think of this process as a sort of reverse-engineering exercise. To figure out what a CPU was doing, you needed to understand the architecture its designers had in mind. It was as though you were trying to reverse engineer a car engine using a hand-held computer: to understand how the engine worked, you first needed to understand how the engine was built.
Interesting article, but I can't understand why cross-compilation is dismissed.
It could have been improved by some benchmarks comparing cross-compilation performance with this emulation-based solution. I find it hard to believe emulation makes sense when native performance is available.
Cross compiling is a lot more difficult to set up. Emulation lets you use much of the target system's tools as-is. Cross compiling means you have to build all of those tools for the host system.
For example, with the Raspberry Pi I can grab a Raspbian image, add binfmt and qemu on my host, and with a few small changes to the image chroot into a ready-made build environment for the Pi that's faster and more convenient than compiling on the Pi itself. Setting up a cross-compile environment for the Pi is much harder.
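Roughly the setup I mean, on a Debian-ish host. This is a sketch: package names vary by distro, `raspbian.img` is a placeholder filename, and the rootfs partition number depends on the image's layout.

```shell
# Install user-mode qemu and the binfmt_misc glue (needs root).
sudo apt-get install -y qemu-user-static binfmt-support

# Attach the image with partitions exposed; p2 is typically the rootfs.
LOOP=$(sudo losetup --show -fP raspbian.img)
sudo mount "${LOOP}p2" /mnt

# Copy the static qemu binary into the image so binfmt can find it,
# then chroot: ARM binaries now run transparently under emulation.
sudo cp /usr/bin/qemu-arm-static /mnt/usr/bin/
sudo chroot /mnt /bin/bash
```

From inside the chroot you can apt-get install toolchains and build as if you were on the Pi, just faster.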
I have used docker with qemu syscall emulation to build various projects for foreign architectures. I really wanted to cross-compile, but the build tools chosen by those projects made it infeasibly difficult.
Does anyone know how to get smaller docker images? I thought that if I had all the previous layers in the docker registry, an upload would just be the size of the diff of the new layer, but this never seems to work.
Some docker registries isolate the layer cache per account to prevent cache poisoning attacks and data leaks. This means you might only take advantage of the registry caching if you have already pushed the first version of a tagged image.
If you want extremely small docker images, you might also want to take a look at Google's distroless images and multistage builds.
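The two combine naturally. A sketch for a hypothetical Go service: the toolchain lives only in the build stage, and the final image is just the static binary on a distroless base.

```dockerfile
# Build stage: full Go toolchain, discarded after the build.
FROM golang:1.21 AS build
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /app ./cmd/server

# Final stage: distroless static base, no shell, no package manager.
FROM gcr.io/distroless/static-debian12
COPY --from=build /app /app
ENTRYPOINT ["/app"]
```

The resulting image is typically a few megabytes plus the binary, since nothing from the build stage is carried over except what you COPY.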
If you can get all of the images on the same machine to use the same base, you'll get somewhere. Being careful with layer order, and using a script to run the most disk-intensive layers, also helps.
I’m having okay luck with alpine base images right now, but app versions are less flexible.
Docker is totally unnecessary for the binfmt + qemu + chroot approach, BTW.