
GCC Rust Monthly Report – October 2021

82 points| philberty | 4 years ago |thephilbert.io

17 comments


cletus|4 years ago

A fallacy of youth is to overreact to things. This means you buy into hype too easily and overstate doom and gloom. After getting disappointed by hype repeatedly you start to become skeptical and hopefully not jaded.

I've been watching Rust for years now and I honestly think it's the most exciting thing in decades in low-level programming. My personal opinion is that C++ is an unsalvageable Frankenlanguage. Memory safety is simply too important going forward and it's one thing Rust is designed for from the ground up.

Mistakes have been made, and these particularly impact compile times [1]. I'm not an expert in this field, but from reading this it sounds like they're difficult to walk back at this point.

I don't know how GCC's approach will differ here but I'm excited that it exists and continues to receive significant investment and matures. This can only be good for the Rust ecosystem.

[1]: https://pingcap.com/blog/rust-compilation-model-calamity

mlindner|4 years ago

That blog post is more a history of the Rust compiler than it is actually about Rust compile times, which have been continuously getting better. I wouldn't put a lot of stock in it.

jfbaro|4 years ago

Hey people, first, congrats on the great work on GCC for Rust. Second, I have a basic question from someone who is not experienced with low-level languages. Would it be possible (and beneficial to the community) to have a "compiler as a service" in the cloud (either GCC or LLVM based) that would have the most powerful hardware setup available to compile Rust? Really cheap/free per second of compilation, so anyone would be able to compile Rust faster, and any improvement would be added to this service. Once newer, more powerful hardware is available, it could be shared and used by the community. I know we still have to work on improving compilation times, but maybe having a shared compilation pipeline that everyone can use could alleviate the pain a little. Thanks

epage|4 years ago

There is sccache (https://github.com/mozilla/sccache) so a first step would be looking to see why it isn't used more to see how to lower that barrier.
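For anyone curious what that looks like in practice, a minimal setup sketch (assuming sccache is already installed, e.g. via `cargo install sccache`) is just pointing Cargo at it as a rustc wrapper:

```shell
# Tell Cargo to invoke rustc through sccache, which caches
# compilation results locally (or in S3/GCS/etc. if configured).
export RUSTC_WRAPPER=sccache

cargo build

# Show cache hits/misses to verify it's actually being used.
sccache --show-stats
```

The same can be made permanent by setting `build.rustc-wrapper` in `~/.cargo/config.toml`.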

Another idea is crate-build caching, so local builds and CI can pull down a pre-built dependency rather than building it locally. This would need to handle Rust versions, feature flags, architectures, compiler settings, etc. It would most help CI, since the result would get cached locally.

The last idea I'm aware of in this area is watt (https://github.com/dtolnay/watt). If the design and implementation was finished to allow proc-macros (and maybe `build.rs` scripts) to opt-in to a sandboxed wasm environment, we could have a local and networked binary cache for these which would dramatically improve Rust build times (and security). Some people outright avoid proc-macros because of the build-time impact.

jcranmer|4 years ago

Distributed compilation isn't as helpful as it might seem at first glance. In order to compile something remotely, you need to gather all the source files and the relevant environment information locally, send them to a remote compilation process, wait for it to compile, and get the results back. This extra network overhead eats up a lot of the potential speed benefit from a beefier machine, and limited bandwidth caps how much parallelism you can actually exploit.

A better alternative than distributed compilation is artifact caching. Effectively, instead of shipping .rs files all over the place to compile them, just ship .o files around. Rust already has a decently canonical solution for this in the form of sccache, which was developed by Mozilla for caching the builds of Firefox using Amazon S3 as the storage (hence the name--S3 ccache [compiler cache]).
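The core idea behind artifact caching can be sketched in a few lines of Rust: derive a cache key from everything that affects the compiler's output, then look that key up in a shared store instead of recompiling. This is a hypothetical illustration of the principle, not sccache's actual key scheme:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Hash together every input that affects the compiled artifact.
// If any of them changes, the key changes and the cache misses.
fn cache_key(source: &str, compiler_version: &str, flags: &[&str]) -> u64 {
    let mut h = DefaultHasher::new();
    source.hash(&mut h);
    compiler_version.hash(&mut h);
    flags.hash(&mut h);
    h.finish()
}

fn main() {
    let key = cache_key("fn main() {}", "rustc 1.56.0", &["-O"]);
    // Identical inputs produce the same key on every machine, so a
    // shared store (e.g. S3) can serve the cached .o to anyone.
    println!("{:x}", key);
}
```

The real systems also have to key on target architecture, environment variables, linked dependencies, and so on, which is exactly why correct artifact caching is harder than it looks.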

kristianpaul|4 years ago

That compiler service is usually put together as part of a Continuous Integration process where if hosted in the cloud you can launch beefy instances on demand and or spot (aws jargon) as part of your development workflow/loop.

option_greek|4 years ago

I wouldn't mind my laptop not screaming obscenities at me every time I do a cargo build/run. The local changes would have to be synced frequently enough for this to work. Maybe have the remote compiler named "cargor" and let it seamlessly run the compilation step in the cloud.

However, this might not really work well at all unless the network connection is insanely fast, as the build sizes are huge with Rust (especially in debug mode).

bool3max|4 years ago

That sounds terrible.

egnehots|4 years ago

I know it's very early, but I wonder if compile times could be faster with GCC Rust?

philberty|4 years ago

It is indeed too early to say, but the design of the compiler pipeline is very different from rustc's: it is a more traditional pass-based system with plenty of side-table lookups. Some of the notions are similar; we use an HIR, but we do not use a MIR, since GCC's GENERIC IR is already quite similar.

So we have AST->HIR->GCC-Generic->GCC

whereas rustc is: AST->HIR->THIR->MIR->LLVM-IR->LLVM

lpapez|4 years ago

From my anecdotal experience, the latest GCC tends to be slightly/noticeably faster than Clang when compiling C++. I predict the same for Rust: slightly faster, but still slow overall, since it's such a complex language.