thefaux | 1 month ago
How much wasted work has been created by compiler authors deciding they know better than the original software authors and silently breaking working code, but only in release mode? Even worse, -O0 performance is so bad that developers feel obligated to compile with -O2 or higher. I will bet dollars to donuts that the vast majority of the material wins from -O2 in most real-world use cases come from better register allocation and good selective inlining, not from all the crazy transformations and eliminations that exploit UB and subtly break your code. Yeah, I'm sure they have some microbenchmarks that justify those code-breaking "optimizations," but in practice I'll bet those optimizations rarely account for more than 5% of the total runtime of the code. Meanwhile everyone pays the cost: horrifically slow build times, plus nearly unbounded developer time lost debugging the code the compiler broke.
Of course, part of the problem is developers hating being told they're wrong and complaining about nanny compilers. In this sense, compiler authors have historically been somewhat like sycophantic LLMs: rather than tell the programmer their code is wrong, they do everything they can to coddle the programmer while, behind the scenes, executing their own agenda and likely getting things wrong, all because they were afraid to say honestly that there was a problem with the instructions.