The argument here, I suppose, is to live on the bleeding edge? Or swap compilers/allocators as necessary? Or use -O3? I'm not entirely sure.
It wraps up with:
>>> COMPILE YOUR SOFTWARE. It's going to help you understand it, make your app faster and more secure (by removing the mail gateway from nginx, for example), and it will show you which software is easily maintainable and reliable.
It is a PITA keeping up with compiler changes and library changes in third-party software. In this case, you're throwing different malloc implementations in there too, not to mention different libc implementations. With all those variables for each component you deploy, you're probably less likely to understand what's going on in your software.
Maybe for a small app with 2-3 extra pieces or something that needs to really be optimized this is good advice, but it sounds like a lot of work for significant footprints.
It would be nice if the official docker images had some tags that were better optimized, within reason.
> The argument here, I suppose, is to live on the bleeding edge? Or swap compilers/allocators as necessary? Or use -O3? I'm not entirely sure.
My point is that when you are using software, especially open source, you should evaluate it and understand it. BTW, we are running Arch in production, with our own repos.
> It is a PITA keeping up with compiler changes and library changes in third-party software. In this case, you're throwing different malloc implementations in there too, not to mention different libc implementations. With all those variables for each component you deploy, you're probably less likely to understand what's going on in your software.
In my example, the libc implementation is always the same! It's just outdated in all the main Docker images. Furthermore, Redis already uses a different malloc implementation (jemalloc), but its makefile also supports the standard malloc and tcmalloc, so throwing in another one is very easy.
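For reference, swapping allocators in Redis really is a one-variable change in its build. A minimal sketch (the `MALLOC` variable is documented in Redis's own README; tcmalloc has to be installed separately):

```shell
# Redis's Makefile picks the allocator via the MALLOC variable;
# jemalloc is the default on Linux.
make distclean        # allocator changes need a clean rebuild
make MALLOC=libc      # use the system libc allocator instead
# or link against gperftools' tcmalloc:
make MALLOC=tcmalloc
```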
> Maybe for a small app with 2-3 extra pieces or something that needs to really be optimized this is good advice, but it sounds like a lot of work for significant footprints.
Or when you need security/performance. Of course, if you have 2 servers, it's useless, but we have more than 4000 servers running in production on Arch, so it's worth it for us.
> It would be nice if the official docker images had some tags that were better optimized, within reason.
Or just be up to date.
So much advice in software development presupposes that everyone has the same single goal and at least one inexhaustible resource. In this case, time:
- To read and understand all relevant parts of the build pipelines for each performance-critical part of the infrastructure. In a relatively simple web application that could mean at least a web server, a caching framework, a database and a message bus.
- To debug any build failures, which could be plentiful and hard to parse until you're very familiar with the build infrastructure for that particular piece of software.
- To benchmark the build outputs in comparable ways with realistic configuration and inputs.
- To repeat the above whenever any part of the stack changes significantly.
The sad fact is that a lot of software is difficult to compile; is poorly documented, and worse still when it comes to building; won't work well if installed in a non-standard way, whether that means a different final location, different supporting libs, or a different platform; and can take a long time.
I'm happy nowadays when I see there's a binary available: no mucking around with gcc/clang/llvm (just trying to work out which one, let alone which version!), no diving down a rabbit hole of compiling dependencies that then need other dependencies compiled… no deciphering Makefiles that were written in a way that only a C guru can grok, with no comments. Whatever the benefits are, I prefer sanity.
This is one thing where I see Rust (and probably Zig) making headway. A lot of the newer Rust software isn't in package repos yet, but I don't mind doing a Cargo build. It might take a while to compile, but it always seems to just work.
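As a concrete sketch of what that looks like (ripgrep is just a handy example of a Rust tool that's often newer than what's in distro repos):

```shell
# Cargo resolves and builds the entire dependency graph itself;
# no hunting for headers or -dev packages.
git clone https://github.com/BurntSushi/ripgrep
cd ripgrep
cargo build --release
./target/release/rg --version
```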
> The sad fact is that a lot of software is difficult to compile; is poorly documented, and worse still when it comes to building; won't work well if installed in a non-standard way, whether that means a different final location, different supporting libs, or a different platform; and can take a long time.
So are you ready to deploy to prod software that is so hard to compile?
> I'm happy nowadays when I see there's a binary available: no mucking around with gcc/clang/llvm (just trying to work out which one, let alone which version!), no diving down a rabbit hole of compiling dependencies that then need other dependencies compiled… no deciphering Makefiles that were written in a way that only a C guru can grok, with no comments.
But that's my job, as an SRE/DevOps/whatever the new fancy name is!
> Whatever the benefits are, I prefer sanity.
The sanity of having very old software, with backported features that exist only on that distro? I prefer to trust the engineers behind the software that I deploy.
That doesn't match my experience. Two decades ago everything was in constant flux and more unreliable, but nowadays it's rare to find broken builds. It's only difficult if your distribution puts headers and such in separate packages, or splits things up so much that it's hard to tell what you actually need. I can't remember the last time I had to intervene to compile a random GitHub project I wanted to try out. It just works.
Please label your bar graph axes, with units. It's kind of counterproductive to look at a benchmark graph without knowing whether more or less is better.
Another cool thing you can do if you compile yourself is use features like auto-parallelization[1].
I wouldn't recommend enabling it system-wide, because it causes issues with programs that fork() due to limitations in gcc's OpenMP library[2], but other than that it works pretty well. For example, I can fully load my 4C/8T CPU using 3 clang processes because compilation is magically spread over multiple threads. I've seen a "single threaded" program (qemu-img) suddenly start using more than a single core to convert disk images into other formats, leading to speedups.
Also, things like PGO/FDO in combination with workload-specific profiling data can easily give you 10% or more if you are CPU bound.
Please avoid GIFs and memes in your article. They add nothing to the actual information, take away from the seriousness, and make it less readable.
I wonder how x86-64-v3 for Arch (v2 for Fedora) will change this calculus in the near future. Currently you're basically compiling for a Core 2/Athlon 64 era chip, so there are clear wins to be had, but I wonder how much of the benefit can be had just by using software that requires Haswell/Zen 1 at minimum.
-march=native will still give you better performance. It's not just about the instruction set, but also heuristics taking into account cache size, latencies, topology and other things. Intel, for example, has this quirk where aligning functions (and other jump targets) on 32-byte boundaries speeds up function calls and jumps. I haven't tested it, but I suspect you'd gain more from -mtune=native with the generic x86-64 target than from the extra instructions alone. Some loops that can be autovectorized with AVX instructions will probably be faster, though. But cache size especially is important for deciding whether some optimization is beneficial or just leads to stalls due to thrashing.
On my side projects I compile everything myself, but I don't completely agree with this post, because Redis is one of the easiest/fastest mainstream databases to compile; in general it can get very time-consuming, and the returns are not always there.
[1]: https://gcc.gnu.org/wiki/AutoParInGCC
[2]: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=42624 (There was a patch to fix this, but it never got merged and doesn't apply to the current version any more, sadly)