top | item 45469406

pfg_ | 5 months ago

The reason to care about compile time is because it affects your iteration speed. You can iterate much faster on a program that takes 1 second to compile vs 1 minute.

Compilation may be O(lines) in time complexity, but the constant factor varies enormously between compilers. And for incremental updates, compilers can do significantly better than O(lines).

In debug mode, Zig uses LLVM with no optimization passes, except on Linux x86_64, where it uses its own native backend. That backend can be significantly faster to compile with (2x or more) than LLVM.

Zig's own native backend is designed for incremental compilation. After the initial build, very little work needs to be done for the next emit: rebuild the affected function, potentially rebuild other functions that depend on it, then directly patch the one part of the output binary that changed. For typical edits this is significantly faster than O(n).
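The idea can be sketched in a few lines: key each function's compiled output by a hash of its source, and on rebuild recompile only the functions whose hash changed, plus their direct dependents. This is a toy illustration of the general technique, not Zig's actual implementation; all names and structures here are made up.

```python
import hashlib

def compile_fn(name, source):
    # Stand-in for real codegen: just tags the source.
    return f"<machine code for {name}>"

class IncrementalCompiler:
    """Toy sketch: recompile only functions whose source hash changed,
    plus their direct dependents. Illustrative only, not Zig internals."""

    def __init__(self):
        self.hashes = {}   # fn name -> source hash from the last build
        self.output = {}   # fn name -> compiled artifact

    def build(self, sources, dependents):
        dirty = set()
        for name, src in sources.items():
            h = hashlib.sha256(src.encode()).hexdigest()
            if self.hashes.get(name) != h:
                dirty.add(name)
                self.hashes[name] = h
        # Anything that depends on a changed function must also rebuild.
        for name in list(dirty):
            dirty.update(dependents.get(name, []))
        for name in dirty:
            self.output[name] = compile_fn(name, sources[name])
        return sorted(dirty)

cc = IncrementalCompiler()
srcs = {"main": "call helper", "helper": "return 1", "unused": "return 2"}
deps = {"helper": ["main"]}  # main depends on helper
print(cc.build(srcs, deps))  # first build compiles everything
srcs["helper"] = "return 42"
print(cc.build(srcs, deps))  # only helper and its dependent main rebuild
```

The second build touches two functions regardless of how large the program is, which is the sense in which incremental compilation beats O(lines) for edits.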

timschmidt | 5 months ago

> The reason to care about compile time is because it affects your iteration speed. You can iterate much faster on a program that takes 1 second to compile vs 1 minute.

Color me skeptical. I've only got 30 years of development under my belt, but even a 1-minute compile time is dwarfed by the time it takes to write and reason about code, run tests, work with version control, etc.

Further, using Rust as an example, even a project which takes 5 minutes to build cold only takes a second or two on a hot build thanks to caching of already-built artifacts.
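That hot-build behaviour can be sketched as a content-addressed build cache, loosely in the spirit of Cargo's fingerprinting or sccache, though much simplified; everything here is illustrative:

```python
import hashlib
import pathlib
import tempfile

# Toy build cache: artifacts are stored keyed by a hash of their inputs,
# so a "hot" rebuild with unchanged sources is just a disk lookup.
CACHE = pathlib.Path(tempfile.mkdtemp())

def expensive_compile(source: str) -> bytes:
    # Stand-in for the real (slow) compiler.
    return f"obj({source})".encode()

def cached_compile(source: str):
    """Return (artifact, cache_hit)."""
    key = hashlib.sha256(source.encode()).hexdigest()
    path = CACHE / key
    if path.exists():                     # cache hit: skip compilation
        return path.read_bytes(), True
    artifact = expensive_compile(source)  # cache miss: compile and store
    path.write_bytes(artifact)
    return artifact, False

obj, hit = cached_compile("fn main() {}")
print(hit)  # False: cold build pays the full cost
obj, hit = cached_compile("fn main() {}")
print(hit)  # True: hot build reuses the artifact
```

The cold build pays the full compile cost once; every identical rebuild after that is a hash plus a file read, which is why the 5-minute cold build drops to seconds when nothing changed.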

Which leaves any compile time improvements to the very first time the project is cloned and built.

Consequently, faster compile times would not alter my development practices, nor allow me to iterate any faster.

zdragnar | 5 months ago

> Consequently, faster compile times would not alter my development practices, nor allow me to iterate any faster.

I think the web frontend space is a really good case for fast compile times. It's gotten to the point where you can make a change, save the file, and the code recompiles, gets sent to the browser, and hot-reloads with no page refresh; your changes just show up.

The difference between this experience and my last time working with Ember, where we had long compile times and full page reloads, was incredibly stark.

As you mentioned, the hot build with caching definitely does a lot of heavy lifting here, but in some environments, such as a CI server, minutes-long builds can get annoying as well.

> Consequently, faster compile times would not alter my development practices, nor allow me to iterate any faster.

Maybe, maybe not, but there's no denying that faster feels nicer.

minitech | 5 months ago

> Further, using Rust as an example, even a project which takes 5 minutes to build cold only takes a second or two on a hot build thanks to caching of already-built artifacts.

So optimizing compile times isn’t worthwhile because we already do things to optimize compile times? Interesting take.

What about projects for which hot builds take significantly longer than a few seconds? That’s what I assumed everyone was already talking about. It’s certainly the kind of case that I most care about when it comes to iteration speed.

bigstrat2003 | 5 months ago

Yeah, I agree. Much like how the time you spend thinking about the code massively outweighs the time you spend writing the code, the time you spend writing the code massively outweighs the time you spend compiling the code. I think the fascination with compiler performance is a focus on by far the least significant part of development.

magicalhippo | 4 months ago

I've worked with Delphi where a recompile takes a few seconds, and I've worked with C++ where a similar recompile takes a long time, often 10 minutes or more.

I found I work very differently in the two cases. In Delphi I use the compiler as a spell checker. With the C++ code I spent much more time looking over the code before compiling.

Sometimes though you're forced to iterate over small changes. Might be some bug hunting where you add some debug code that allows you to narrow things a bit more, add some more code and so on. Or it might be some UI thing where you need to check to see how it looks in practice. In those cases the fast iteration really helps. I found those cases painful in C++.

For important code, where the details matter, then yeah, you're not going to iterate as fast. And sometimes forcing a slower pace might be beneficial, I found.

eyegor | 5 months ago

> even a 1 minute compile time is dwarfed by the time it takes to write and reason about code, run tests, work with version control, etc.

You are far from the embedded world if you think 1 minute here or there is long. I have been involved with many projects that take hours to build, usually caused by hardware generation (FPGA HDL builds) or poor cross-compiling support (custom/complex toolchain requirements). These days I can keep most of the custom shenanigans in the 1-hour ballpark by throwing more compute at a very heavy emulator (to fully emulate the architecture), but that's still pretty painful. One day I'll find a way to use the Zig toolchain for cross compiles, but it gets thrown off by some of the C macro or custom resource embedding nonsense.

Edit: missed some context on lazy first read so ignore the snark above.