In general I've long been very skeptical of removing optimizations that rely on undefined behavior. People say "I'd happily sacrifice 1% for better theoretical semantics", but theoretical semantics don't pay the bills of compiler writers. Instead, compiler developers are employed by the largest companies, where a 1% win is massive amounts of dollars saved. Any complaint about undefined behavior in C must acknowledge the underlying economics to have relevance to the real world.
As the paper notes, there are plenty of alternative C compilers available to choose from. The reason why GCC and LLVM ended up attaining overwhelming market share is simply that they produce the fastest possible code, because, at the end of the day, that is what users want.
If you want to blame someone, blame the designers of the C language for doing things like making int the natural idiom to iterate over arrays even when size_t would be better. The fact that C programmers continue to write "for (int i = 0; i < n; i++)" to iterate over an array is why signed overflow is undefined, and it is absolutely a critical optimization in practice.
> If you want to blame someone, blame the designers of the C language for doing things like making int the natural idiom to iterate over arrays even when size_t would be better. The fact that C programmers continue to write "for (int i = 0; i < n; i++)" to iterate over an array is why signed overflow is undefined, and it is absolutely a critical optimization in practice.
Well, size_t is unsigned and has defined overflow, so you'd lose the optimization if you switched to it. (Specifically, there are cases where defining overflow means a loop is possibly infinite, which blocks all kinds of optimizations.)
Many languages try to fix this by defaulting to wrap on overflow, but that was a mistake because you rarely actually want that. A better solution is to have a loop iteration statement that doesn't have an explicit "int i" or "i++" written out.
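To make the signed-vs-unsigned trade-off concrete, here's a minimal sketch (function names are mine, not from the discussion): with the signed counter the compiler may assume the loop terminates and keep a single wide induction variable, while with the unsigned one it must preserve a possible wrap-around.

```c
#include <stdint.h>

/* Signed counter: overflow is UB, so the compiler may assume i never
 * wraps, prove the loop terminates, and use one 64-bit induction
 * variable for a[i] instead of re-sign-extending i each iteration. */
int64_t sum_signed(const int32_t *a, int n) {
    int64_t s = 0;
    for (int i = 0; i < n; i++)
        s += a[i];
    return s;
}

/* Unsigned counter: wrap-around is defined, so with n == UINT32_MAX
 * the condition i <= n never becomes false and this loop is genuinely
 * infinite, a possibility the optimizer must preserve. */
uint64_t sum_unsigned(const uint32_t *a, uint32_t n) {
    uint64_t s = 0;
    for (uint32_t i = 0; i <= n; i++)
        s += a[i];
    return s;
}
```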
We make a distinction between undefined and implementation-defined behaviour for a reason. Saying that certain runtime behaviours, ones it would be impractical to generate explicit checks for, result in a malformed program is not entirely crazy.
That said, I believe the set of undefined behaviours in our current standards is much, much too large; most of these should rightly be filed in the implementation-defined category instead. It is no longer the 70s, and the very same modern compilers that perform more and more extreme optimisations year on year really do not need to account for a giant zoo of quirky experimental architectures. Over the decades we've basically settled on a consensus on how pointers, integers, floating-point numbers etc. ought to work.
Priorities change though; in particular, security is a rising priority. So it's not inevitable that funded compiler development must focus on wringing every last optimization out of software that might perform well enough if it were simpler. Organizations like NLnet and ISRG, for example, could fund work on getting to a usable Unix-like system that's entirely compiled with one of the simpler C compilers that don't exploit undefined behavior so much. They could justify it with the argument that security is best achieved through simplicity at all levels of the stack, including a simple build toolchain.
There is a large population of C "real programmers" who, when they write a C program that unsurprisingly doesn't work, conclude this must be somebody else's fault. After all, as a real programmer they certainly meant for their program to work, and so the fact it doesn't can't very well be their fault.
Such programmers tend to favour very terse styles, because if you don't write much then it can't be said to be your fault when, invariably, it doesn't do what you intended. It must instead be somebody else's fault for misunderstanding. The compiler is wrong, the thing you wanted was the obviously and indeed only correct interpretation and the compiler is willfully misinterpreting your program as written.
Such programmers of course don't want an error diagnostic. Their program doesn't have an error, it's correct. The compilers are wrong. Likewise newer better languages are unsuitable because the terse, meaningless programs won't compile in such languages. New languages often demand specificity, the real programmer is obliged to spell out what they meant, which introduces the possibility that they're capable of mistakes because what they meant was plain wrong.
The fact that a compiler is allowed/encouraged to silently remove whole sections of code because of some obscure factoid is an amazing source of footguns.
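A hypothetical but representative sketch of the pattern being complained about (not from any real codebase): because *p is dereferenced before the check, the compiler may infer p is non-null and silently delete the check.

```c
#include <stddef.h>

/* Dereferencing p first is UB when p == NULL, so the compiler is
 * entitled to assume p != NULL here... */
int read_flag(const int *p) {
    int v = *p;
    if (p == NULL)   /* ...and to remove this whole branch as dead code */
        return -1;
    return v;
}
```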
At least the warnings are getting a bit better for some of these.
Without these optimizations, you can't write fast scientific code in C. This was realized back in the early 1980s and it's why those rules were added.
In Fortran the aliasing rules are even stricter: given two arrays passed in as arguments the compiler can assume that they do not overlap, for example. I remember messing that up as a student long ago and getting strange results. The Fortran rule was to enable vectorization, which has been done for many decades.
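C99's restrict is the closest analogue to the Fortran rule: the programmer promises the arrays don't overlap, and that promise is what licenses vectorization. A sketch (function name is mine):

```c
/* y[i] += a * x[i] for i in [0, n): with "restrict" the compiler may
 * load and compute several elements of x and y per iteration, since
 * the stores to y cannot feed back into later loads of x. */
void axpy(int n, float a, const float *restrict x, float *restrict y) {
    for (int i = 0; i < n; i++)
        y[i] += a * x[i];
}
```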
I don't get how letting the compiler remove or modify code was ever thought to be a good idea. I get removing unused functions, but not removing conditions or changing the flow of the code. If there is unreachable code, it's best to issue a warning and let the programmer fix it. The compiler should optimize without changing the semantics of the code, even if it contains undefined/unspecified behavior.
Due to this, it's impossible to write C without using a ton of non-standard attributes and compiler options just to make it do the correct thing.
Absolutely, those optimisations should be opt-in; otherwise it's impossible to reason about the correctness of your code. At work we had to replace some arithmetic with inline assembly, as there was literally no other way of making the compiler generate the correct code.
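For overflow-sensitive arithmetic specifically, GCC and Clang provide checked-arithmetic builtins that avoid both the UB and the inline assembly. A sketch (the wrapper name is mine):

```c
#include <stdint.h>
#include <stdbool.h>

/* __builtin_add_overflow stores the wrapped sum in *out and returns
 * true iff the mathematically exact result didn't fit: defined
 * behavior even where a plain signed "a + b" would be UB. */
bool add_overflows(int32_t a, int32_t b, int32_t *out) {
    return __builtin_add_overflow(a, b, out);
}
```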
This is a typical whining-about-UB article, but removing UB won't get you what you want; in particular, your program still won't behave correctly across architectures. Overflow on shift left may be undefined, but how do you want to define it? If you want a "high level assembler", well, the underlying instructions behave differently on ARM, x86 scalar, and x86 SIMD.
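The shift example is worth spelling out: hardware genuinely disagrees on out-of-range shift counts (x86's 32-bit scalar shifts mask the count to its low 5 bits, while ARM's register-specified shifts use the low byte and yield 0 for counts of 32 or more), so a fully defined shift has to pay for an explicit check, roughly:

```c
#include <stdint.h>

/* A fully defined left shift: in C, x << n is UB for n >= 32 on a
 * 32-bit operand; the branch below picks one portable meaning (0). */
uint32_t shl_defined(uint32_t x, unsigned n) {
    return n < 32 ? x << n : 0;
}
```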
The reason they claim program optimizations aren't important is that you can do them by hand for a specific architecture pretty easily; but you'll still want them when porting to a new one, e.g. if it wants loop counters to go in the opposite direction.
Yes, your program will have different semantics on different architectures. This is already the case with e.g. big vs little endian. But you will be able to reason about the program's semantics rather than going "I sure hope there's no UB in here" and throwing your hands up.
How much of this is driven by modern C++ style? I always assumed optimizers needed to become much more aggressive because template-heavy code results in convoluted IR with tons of unreachable code. And UB-based reasoning is the most effective tool to prove unreachability.
None of it: we're talking about C here, and every point in the article applies to the C standard as it existed when Linus posted his message to comp.os.minix so long ago.
> template-heavy code results in convoluted IR with tons of unreachable code
Does it? Honest question; my impression was that template-heavy code can tend to produce deep call trees, but not necessarily outright unreachable code unless you count instantiations ruled out by SFINAE/std::enable_if/tag dispatching, for which UB-based analyses are not necessary.
In addition, I thought template (meta)programming relied very heavily on compile-time knowledge, which seems to obviate the need for UB-based analyses in many cases.
I'm not particularly experienced, though, so maybe there's a gaping hole I'm missing.
The actual title of the paper is "How ISO C became unusable for operating systems development". Is there a particular reason why the first and last words have been removed here?
To me the stupid thing is the abuse of undefined behavior to change the semantics of the code. The fact that a behavior is not defined in the standard doesn't mean that on a particular hardware platform it doesn't have a particular meaning (and most C programs don't need to be portable, since C is mainly used for embedded these days and thus you are targeting a particular microcontroller/SoC).
Leave these optimizations for the C++ folks. C doesn't need all of that; just leave it as the "high level assembler" that it was in the old days, where if I write an instruction I can picture the assembler output in my mind.
To me, optimizers should not change the code's semantics. Unfortunately with gcc it's impossible to rely on optimizations, so the only safe option is to turn them off entirely (-O0).
I'm curious what you would replace it with? I can't think of anything actually suitable for most of the low-level operating systems / embedded level things that use C.
I know people recommend Rust for this kind of thing, but Rust really isn't appropriate in a lot of cases, especially when dealing with microcontrollers not supported by LLVM (e.g. PIC and 8051, off the top of my head).
This may be changing, but I was also under the impression that Rust can't easily produce as small binaries as C can.
Sure, as long as you write compilers for all of the platforms that currently have only production-quality C compilers. That includes porting the whole development ecosystem for those platforms (like libraries) to whatever NextNewShiny language you deem worthy.
Sometimes I do wonder if all these UB optimisations aren't pushed by people aiming to make C and C++ unusable, so that people will be forced to move to other languages.
userbinator|4 years ago
No, I think it's more because they are free.
In my experience, ICC can be much better at instruction selection while also not being so crazy with exploiting UB.
Hemospectrum|4 years ago
The submitter may have left out "development" to fit the title in the character limit.
gHosts|4 years ago
Or even define the behavior and get your compiler writer to implement it.
ps: If I index past the end of the array... what behaviour are you going to define?
pcwalton|4 years ago
Yes, it does. In fact C needs it more, because of the "for (int i = 0; i < n; i++)" idiom. At least idiomatic C++ code uses iterators.
kjs3|4 years ago
We'll wait...