I seriously hope the Rust community won't get used to these slow compile times the way the C++ community has. C++ has some really big flaws that can make compilation times unbearable, and for some reason Boost loves to abuse exactly those flawed features. Boost is almost as bad as npm: you include a single header and end up pulling in thousands of other Boost headers. When I looked at the include hierarchy of a single .cpp file, 99% of the includes were from Boost and maybe 30 were actually from the C++ project itself. I'm sure you could cut 80% of the compilation time simply by removing Boost and replacing it with different libraries.
I even found a reddit submission that shows how absurd boost is: https://www.reddit.com/r/cpp_questions/comments/2hzobl/reduc...
On the bright side, the compiler's performance is improving constantly. Timing the compilation of lewton plus its dependencies on the 0.9.4 release commit:
* 8.042s on rustc 1.20.0 from 2017-08-27
* 5.061s on rustc 1.30.0 from 2018-10-24
* 4.789s on rustc 1.36.0 from 2019-07-03
* 3.197s on rustc 1.44.0 from 2020-06-01
In my experience, Rust compiles orders of magnitude faster than similar C++ code for initial compiles. For incremental compiles, Rust is infinitely faster: C++, even if you use Modules, needs to re-compile, re-instantiate, and re-check all the templates all the time. I speculate that this alone is responsible for the largest part of the difference in compile times.
Comparing Rust with C here is hard. As others have mentioned, with C, you typically only use libraries that are installed in your system. Rust does not have a global system cache for libraries, and I don't think there is any package manager installing pre-compiled Rust libraries, so at least for the initial compiles, Rust needs to do much more work than C, for Rust dependencies at least. If your Rust project only depends on C libraries, then there is no work to be done of course.
For incremental compiles with C, it is hit or miss. If your C project is properly "modularized" (split into a lot of small enough TUs), then when you change a TU, C only needs to recompile that one TU. In our benchmarks, Rust is faster for that situation, and speculating, this is because the Rust compiler only recompiles the parts of a TU that actually changed.
In practice, however, Rust is much slower than C, because people do not write small Rust TUs but absurdly huge ones, at least by C standards. A Rust TU (crate) is usually mapped 1:1 to a C library, which often contains dozens or hundreds of TUs. Just the cost of testing what changed in a Rust crate is often larger than the cost of blindly recompiling the C TU that has a newer timestamp.
If people split Rust code the way they do C code, this wouldn't happen. But Rust people are too optimistic that one day the compiler will somehow auto-magically split a crate into multiple TUs as well as, or better than, what people do for C manually, and make that determination plus the incremental compile faster than the time a C compiler takes to blindly recompile a tiny amount of code.
For C++, if you have many library headers used repeatedly in many source files, the number one suggestion ought to be to use a precompiled header (which you write yourself with commonly used library headers but none of your application headers). It was mentioned on that page, but it was a bit buried, and OP replaced a whole dependency before trying it.
I don't think the same thing would work for Rust because, unlike C++, it doesn't need to constantly reparse files in its dependencies - essentially it gets that behaviour automatically. C++ is aiming to move that way in the future with modules.
Edit: I just tried turning off precompiled header on one of our larger programs that's heavy on Boost and a couple of our other libraries, and it increased build time from 2:16 to 10:52 (that's minutes and seconds) after fixing a few missing #includes that were covered by the precompiled header.
The article does a good job explaining why a certain project might have a long compile time: dependencies.
But it fails to address how this compares to Go, or to clang or gcc. It mentions those (they take 1/5th the time to compile a similar project), then goes off to explain how it was possible to shave over 75% off the compile time of a Rust project, but not whether that same trick would shave 75% off a similar Go or C project.
In other words: it's very good to show us how compile time decreases by choosing the correct dependencies. But that does not prove that the Rust compiler isn't slow.
> but not whether that same trick would shave 75% off a similar Go or C project.
The question does apply to Go, but not C — because the way to reduce compile time for C projects is to just not compile libraries & modules when you don't need to. That's the return from having a stable ABI.
(C++ might be the most interesting in this regard because there's both things - linked / ABI'd libraries as well as header-only "libraries")
[edit: "has" as in "is actually actively used" - of course you can create C-ABI libraries with Go and Rust too, but it's a rather rare practice]
I have been tinkering with Rust lately and compile times have been an issue since almost the beginning. After searching around for a bit, I discovered that a significant amount of time is spent in the linker, not in the actual compiler.
By switching from the default GNU gold linker to LLVM LLD, I have managed to shave off a few seconds on every build. There's an open issue [0] about switching to LLD. It still has some compatibility issues, but it works fine on my machine and my admittedly tiny project.
I would be curious to see how compile times improve for other people just by linking with LLD. You of course need to install LLD first, and then you can tell the toolchain to use it:
time RUSTFLAGS="-C link-arg=-fuse-ld=lld" cargo build
[0] https://github.com/rust-lang/rust/issues/39915
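If the flag helps, it can also be made permanent in Cargo's configuration instead of being passed on every invocation (standard Cargo config; the target triple below is just an example):

```toml
# .cargo/config.toml
[target.x86_64-unknown-linux-gnu]
rustflags = ["-C", "link-arg=-fuse-ld=lld"]
```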
Most of the stuff your linker has to churn through is actually debug info. If you turn it off, you'll speed up linking whether it's with LLD, gold, or ld.bfd. Do this in Cargo.toml:
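The Cargo.toml change the comment is pointing at would presumably look something like this (a sketch; it disables debug info for dev builds):

```toml
# Cargo.toml: stop emitting debug info in dev builds to lighten the link step
[profile.dev]
debug = false
```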
Quote: "There is plenty of evidence that suggests rustc is slow relative to compilers for other languages; Go can compile applications with similar complexity in 1/5 the time, and clang or gcc can do the same for applications written in C"
Wait until the author tries Delphi. Then they'll realize that compile times for Go or C are slower than a snail.
I had the same experience going from C++ back to C. I was surprised how fast my C code compiled, and how small the resulting executables were, even though I already tried to avoid the worst offenders in my C++ code.
The problem isn't that C compilers are "fast" or C++ compilers are "slow". C++ compilers are very fast, but typical C++ projects just throw way too much work at the compiler (by having too much implementation code in headers, while C headers are typically only declarations).
The underlying problem is that C++ makes it easy (and even encourages through bad practices in the stdlib) to add complexity and hide it under "fancy" interfaces, which indirectly leads to code that's slow to compile and results in a bloated compiler output.
It's possible to create C++ projects which compile just as fast, and result in just as small binary code, as C projects (basically "embedded C++"), but this means giving up most of the high-level features that C++ added on top of C (such as most of the C++ stdlib).
In short: C makes it hard to add accidental complexity under the hood, while C++ makes it easy. Add a cargo-style dependency manager to the mix, and the "accidental complexity" problem grows exponentially.
You could say that Rust's great dependency management is both a blessing and a problem at the same time. C programmers, often motivated by resource constraints, are much less likely to use third-party dependencies - not just to save resources but also because it is just much harder to do it and to do it well. They end up just rewriting the parts they need themselves. Because those parts are likely a small subset of a full-featured library, binary size and compile times are smaller at the expense of actual time spent writing code.
What is harder? It is a single command to install most dependencies.
Using a C dependency is super easy from most languages, including C itself, of course.
> binary size and compile times are smaller at the expense of actual time spent writing code.
Nobody writing C professionally is doing that for those reasons unless the library is trivial.
The way I read it, the conclusion is that the Rust compiler is indeed slow when there are dependencies, even ones as "simple" as argument parsing?
Now I would like to read why that is, and whether it's only about the "rebuild time" or also about the "make time", once the dependency is built and hasn't been changed. If it's only about the rebuild time, I agree that it's not important. If it's a penalty for every use of a dependency, then for my tastes it does reduce the number of iterations one can do in a fixed development time.
Clap (the argument parser in discussion) does a little more: parsing to correct types, colorful output, generation of shell completions and man pages, fuzzy matching (did you mean X?) and so on.
That aside, I don't know why clap in particular was the problem.
I consider clap a different type of dependency from lalrpop and regex. Clap is part of the application interface and most likely does not change that often. Lalrpop and regex are part of the domain logic and probably change fairly often.
Today, we can separate a Rust application into "app" and "lib". Most of the change would happen inside lib and app would only need to be compiled for infrequent changes and releases. This allows one to use all the fancy features of clap and mostly ignore the compile time cost.
It is annoying to separate a Rust application into "app" and "lib" though, so most people do not do it. They instead opt to replace clap with gumdrop. I think both approaches are valid.
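The "app"/"lib" split described above can be sketched as a Cargo workspace (crate names are made up for illustration):

```toml
# Workspace Cargo.toml: `myapp` is a thin bin crate that only wires up clap;
# `myapp-core` holds the frequently-changing domain logic.
[workspace]
members = ["myapp", "myapp-core"]
```

Editing `myapp-core` still triggers a rebuild of the thin `myapp` crate, but clap and its whole dependency tree stay cached.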
To make a fair comparison, you could translate C code very literally to Rust code, not using any "advanced" features or different libs (meaning no_std). Then you could see, apples to apples, how it'd compare to clang. Probably still slower, but at least you'd know that the slowness wasn't due to complex generics or macros or something.
There is an experiment that anyone interested in compiler speed and the joy of fast-iteration programming should do: try a really fast C compiler, TCC (from Fabrice Bellard).
On my computer, most of the time the compilation is finished before I have time to release the return key...
And the generated code, while not highly optimized, is largely fast enough to test/iterate.
> ...this is a dissent to a common criticism that the compiler is slow. In this discussion, I will argue that these claims are misleading at best.
I think the author failed here.
The article shows that the Rust compiler is slow, but that you might be able to mitigate the issue by switching to lighter-weight dependencies, or by removing dependencies and writing the parts you need yourself.
These actions are expensive in time spent, involve tradeoffs, and are available only in some cases, so don't generally address the issue of compile times. That is, these are potential mitigations for slow compile times.
But "we" also includes the C/C++/Rust developers here.
This is a puzzle that is hard to crack.
WHY do "systems language" developers nearly always produce the SLOWEST-compiling languages on earth (C, C++, Rust, ....)?
And, given time, CONTINUE to keep everything that is wrong with them?
C/C++/Rust are among the slowest-compiling languages in actual use today. It's kind of ironic that they claim to build "fast" stuff when the tool itself is slow as hell.
It's even more sad considering that Pascal mopped the floor with everything decades ago, and still does.
> I believe there is more to it than that. I believe it's more of a human problem than an algorithmic one.
Totally. Somehow, the C/C++ mindset can't build fast compilers, no matter how many decades of wasted compile time are behind us.
It's funny to read how a certain dev spends a LOT of effort optimizing his code to gain a few nanoseconds here and there for the project he is building, but somehow the same love is not given to our core tools.
It's funnier still that that guy is also me. I have wasted DAYS trying to cut the build times of my pipeline. DAYS.
Rust is the ONLY language in these 20 years that has made me do busy work of that kind. BTW, I have used more than 12 others.
--
Before getting angry, remember that Pascal, Ada, and Go exist, and they compile orders of magnitude faster than C/C++/Rust. This is not a case of "but C/Rust do all that extra stuff and must be slow"; no, it is because they are, BY DESIGN, suboptimal tools.
--
Then why do I use Rust? Because, in contrast to C/C++, it at least gives a lot of extra benefits that are not present elsewhere (with the exception of Ada).
We can love AND hate something, at the same time. Funny humans.
The systems languages you reference have to do manual memory management by design. It's very difficult to write a scalable, maintainable, performant operating system or high-performance application in Pascal, Ada, or Go. While they are all decent, I'm referring to "low-level" as in device drivers or "high-performance" as in game engines. Aside from some toy microcontrollers running Go or some userland Fuchsia stuff, there's a reason serious stuff still uses those languages. They're slow to compile because they intentionally trade compile time for run-time performance. If I write in such a language, I've got a pretty good idea of how it will run; I don't have to worry about "oh no, what's the GC gonna do now."
I was talking to my wife about why something around the house was slow to get done. Well, Rust wasn't having that. For a full 30-mins I was on full defense trying to figure out how Rust got into the house. It's everywhere. And it likes to talk! I'm gonna have to get to know it better, because it seems I have no choice. Ok, satire out of the way, let's get to the real points.
The article was a let down at the end. Yes, less code and less dependencies equals quicker builds. No kidding. It does not deal with Rust itself, however.
But at commercial scale this is a real concern. We're a big C++ shop ... but for reasons that cannot be blamed solely on C++ itself, I have quit C++ for Go for the last 18 months, weary of the 4-day-long build times, the huge executable sizes, and fighting the C++ compiler. The only thing worse than a C++ compiler in a bad mood is Oracle's PL/SQL parser (dumb and silent) or when your wife is mad at you.
Problems committed by programmers:
- builds do not use PCH so C++ builds are indeed slow
- very old legacy libraries have circular dependencies and sorting that out is annoying
- certain important, hard to avoid legacy calls hit into a library chain of dependencies that quickly lead to very large executable sizes
- linking depends on .a files because having tasks link .so files is unreliable in production. Too many tasks, too many libraries, too many versions, too many deployments - just too much - leads to dependency mismatches leading to cores ... We still link archive libraries, not shared ones, but over the last few years build teams have taken over making the archive files, so compiling is usually limited to app code plus whatever self-owned library code was modified. (Not talking language or OS libraries here; that's disciplined. Talking app libraries & tasks, of which there are 10000s.)
Thus on the whole we can blame the organization not C++ for not being as good as it should at commercial level, very large scale software management. Getting rid of legacy code or refactoring it is another weakness: for the above reasons the effort is often big, and there's always so much new work to do making management of resources tough.
We can blame C++ for other things however:
- the language is far too complicated
- the language involves some duplication of effort between headers and source files
- C++ inherits C's problems with #defines, macros, and other nasty things that break builds at scale by putting bad things in global scope. Just having a preprocessor is bad. C++ maybe should have been link-compatible with C, but instead it dragged in the whole edifice. Other OOP languages were smarter.
- C++ errors arising out of templates or certain other STL areas produce messages which are 2K+ characters each. It takes time to see what the hell went wrong. C++ often confuses you with the details. I hope Go generics do NOT do that.
- Making code allocator-aware and allocator-capable - i.e., making things behave properly for performance, for memory testing when everything should use a non-default allocator, for debugging, and so on - is real effort. Memory errors are not that easy to track down in C++ for something that appears to care about safety.
- C++ language complexity drifts into design complexity immediately. If you're <5 yrs into C++ you can't really attack a serious project without having Meyers' C++ books side by side.
- A lot of what makes a C++ class or method "good" i.e. narrow contract, assertion checking, unit testing comes, if it comes at all, from the organization's engineering culture not from C++ itself. In other languages it's more baked into the language.
Of course Go avoids many of these problems by design, to say nothing of quick build times. But Go was built on 20 years of industrial experience, while the others accumulated it along the way.
Other OOP languages may be better, but sorry the die has been cast. Ditching C/C++ for something better will still likely need a back door to C because there's too much of it to drop it entirely. Those working close to the kernel will need a C backdoor as well.