treyd | 1 month ago

Code is typically run many more times than it's compiled, so this is a perfectly good tradeoff to make.

embedding-shape | 1 month ago

Absolutely, I wasn't trying to claim otherwise. But since we're engineers (at least I like to see myself as one), it's always worth keeping in mind that almost everything comes with tradeoffs, even traits :)

Someone down the line might wonder why their Rust builds suddenly take 4x as long after merging something, and just maybe remembering this offhand comment will help them find the issue faster :)
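
To make the tradeoff concrete (a minimal, illustrative sketch, not from the original post): a trait-bounded generic function is monomorphized, so the compiler generates and optimizes a separate copy for every concrete type it's used with, while dyn dispatch compiles once and pays a small vtable cost at runtime instead:

    use std::fmt::Display;

    // Monomorphized: a separate copy is generated and optimized for every
    // concrete T this is called with. Great runtime speed, but more codegen
    // (and compile time) per instantiation.
    fn describe_generic<T: Display>(value: T) -> String {
        format!("value = {value}")
    }

    // Dynamically dispatched: compiled exactly once; calls go through a
    // vtable, trading a little runtime cost for less compile-time work.
    fn describe_dyn(value: &dyn Display) -> String {
        format!("value = {value}")
    }

    fn main() {
        // Two instantiations of describe_generic get generated here...
        println!("{}", describe_generic(42));
        println!("{}", describe_generic("hello"));
        // ...while both of these calls share a single compiled function.
        println!("{}", describe_dyn(&42));
        println!("{}", describe_dyn(&"hello"));
    }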

cardanome | 1 month ago

For release builds, yes. For debug builds, slow compile times kill productivity.

greener_grass | 1 month ago

If you are not willing to make this trade, then how much of a priority was run-time performance, really?

torginus | 1 month ago

A lot of C++ devs advocate for simple replacements for the STL that don't rely too heavily on zero-cost abstractions. That way you get small binaries, fast compiles, and the option of a fast-debug kind of build where you only turn on a few optimizations.

That gets you most of the speed of the Release version, with a fairly good chance of usable debug info.

A huge issue with C++ debug builds is that the resulting executables are unusably slow, because the zero-cost abstractions are not zero cost when optimizations are off.
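
The same thing bites Rust debug builds, for what it's worth. As a rough illustration (the exact overhead varies), an iterator pipeline like the first function below stays as layers of adapter and closure calls at opt-level 0, and only an optimized build collapses it into something like the manual loop:

    // In an unoptimized (debug) build, each adapter below remains layered
    // function and closure calls; only the optimizer inlines them away,
    // which is why "zero-cost" abstractions still cost plenty at opt-level 0.
    fn sum_of_even_squares(data: &[u64]) -> u64 {
        data.iter()
            .map(|&x| x * x)
            .filter(|&x| x % 2 == 0)
            .sum()
    }

    // Roughly what the optimizer reduces it to in a release build.
    fn sum_of_even_squares_manual(data: &[u64]) -> u64 {
        let mut total = 0;
        for &x in data {
            let sq = x * x;
            if sq % 2 == 0 {
                total += sq;
            }
        }
        total
    }

    fn main() {
        let data: Vec<u64> = (1..=10).collect();
        assert_eq!(sum_of_even_squares(&data), sum_of_even_squares_manual(&data));
        println!("{}", sum_of_even_squares(&data));
    }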

arw0n | 1 month ago

I think this also massively depends on your domain, your familiarity with the code base, and your style of programming.

I've significantly changed how I debug over time (probably in part due to Rust's slower compile times): I usually get away with 2-3 compiles to fix a bug, but spend more time reasoning about the code.

kace91 | 1 month ago

Doesn’t Rust have incremental builds to speed up debug compilation? How slow are we talking here?
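
(For context: incremental compilation is on by default for dev builds, as far as I know, and the usual mitigation when debug binaries are too slow to use is turning on some optimization in the dev profile, especially for dependencies. A rough Cargo.toml sketch using Cargo's documented profile settings:)

    [profile.dev]
    # Incremental compilation is already the default for dev builds; shown
    # here only to make the setting explicit.
    incremental = true
    # A little optimization for your own code: still much faster to compile
    # than a release build, but far more usable at runtime than opt-level 0.
    opt-level = 1

    # Fully optimize dependencies. They are compiled once and then cached,
    # so this mostly costs time on the first (or a clean) build.
    [profile.dev.package."*"]
    opt-level = 3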