phaedrus | 6 months ago

When I was trying to improve compile time for my game engine, I ended up using compiled size as a proxy measure. Although it is an imperfect correlation, the fact that compiled size is deterministic across build runs and even across builds on different machines makes it easier to work with than wall clock time.
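
A minimal sketch of what that measurement could look like, assuming a build tree whose intermediate objects end in ".o" (the directory path and the extension are placeholders for whatever your toolchain actually produces):

    #include <cstdint>
    #include <filesystem>
    #include <iostream>

    namespace fs = std::filesystem;

    int main(int argc, char** argv) {
        // Placeholder default; pass your real build directory as argv[1].
        const fs::path build_dir = (argc > 1) ? argv[1] : "build";

        std::uintmax_t total = 0;
        for (const auto& entry : fs::recursive_directory_iterator(build_dir)) {
            // Sum object file sizes: unlike wall clock time, this total is
            // deterministic across runs and across machines.
            if (entry.is_regular_file() && entry.path().extension() == ".o") {
                total += entry.file_size();
            }
        }
        std::cout << "total object size: " << total << " bytes\n";
    }

Logging that one number per commit gives a stable series to optimize against, even when the builds happen on different machines.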

mlsu | 6 months ago

Wait, this is not intuitive at all to me.

If the compiler is working harder, wouldn't that result in a more compact binary? Maybe I'm thinking too much from an embedded software POV.

I suppose the compiler does eventually do IO, but IO isn't really the constraint most of the time, right?

staticfloat | 6 months ago

While you can make the compiler run longer to squeeze binary size down, it has a baseline set of passes that it runs over the IR of the program being compiled. Those passes generally take time proportional to the length of the input IR, so a larger program takes longer to compile. Most passes don't throw away huge numbers of instructions (dead code elimination being a notable exception, though even the analysis that figures out which dead code can be eliminated still operates on the full input IR). So it's not a perfect proxy, but in general, if your compiler's output is 2MB of code, it probably took longer to process the input and spit out that 2MB than if the output was 200KB.
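
As a toy illustration of that dead code elimination exception (a hypothetical snippet, names made up): the unreferenced function below never reaches the binary, but the compiler still parses and analyzes it first, so it costs compile time without adding compiled size:

    // Never referenced anywhere: dead code elimination strips it from the
    // output, but the compiler still parses, lowers, and analyzes it.
    static int unused_helper(int x) {
        return x * x + 1;
    }

    static int used_helper(int x) {
        return x + 1;  // Referenced from main, so it survives into the binary.
    }

    int main() {
        return used_helper(41);
    }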

johannes1234321 | 6 months ago

Of course there are cases where a huge template structure with complex instantiation and constexpr code compiles down to a single constant, but for most code I would assume a rough proportionality: source size drives compile time, which in turn tracks binary size.
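
A small, hypothetical sketch of that extreme case: the naive constexpr recursion below costs the compiler real evaluation work, yet the binary ends up containing only a single constant, so compile time and binary size diverge:

    #include <cstdint>

    // Evaluated entirely at compile time: the naive recursion makes the
    // compiler do work roughly proportional to fib(n) itself ...
    constexpr std::uint64_t fib(std::uint64_t n) {
        return n < 2 ? n : fib(n - 1) + fib(n - 2);
    }

    int main() {
        // ... but the whole call tree collapses to the literal 6765 here.
        constexpr auto value = fib(20);
        return static_cast<int>(value % 256);
    }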