top | item 39754342


kapilsinha | 1 year ago

I appreciate this detailed explanation! To preface, I took on macro expansion caching because it was a clear solution for the slow builds I saw in some projects (taking the "project-specific" approach I mentioned in the blog). That said, my medium-term vision for CodeRemote isn't to develop a superior closed-source Rust compiler fork; instead, it is to provide remote tooling for faster development (i.e. bring small dev teams' tooling up closer to Google's level):

(a) pre-configured build servers, so a developer doesn't need to be equipped with the latest Mac to be functional

(b) shared cache within a developer team

(c) potential avenues in CI/CD, testing, etc.

I'd love to hear your thoughts on this goal! Does some aspect address a development pain point of yours? I also want to address your points above, and I agree with lots of them:

1. I agree that especially in the wider Rust community, closed source is not a long-term solution. For what it's worth, I have been considering making the source available -- though likely not with a super copyleft license, at least initially. Do you think making the source available would alleviate people's safety concerns?

2. I have thought about the mold/sold linker quite a bit, especially because the author changed his license on sold from a commercial license to MIT. And defaults are indeed very powerful (this is discussed in a recent change to strip debug info by default, https://kobzol.github.io/rust/cargo/2024/01/23/making-rust-b...) because most people aren't bothered to (or don't know to) change them. Generally one expects the defaults to be the best for the general case, but this is decidedly not true for Rust compiler settings.
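For reference, the manual opt-in (before stripping became the default behavior - the change discussed in that post) is just a couple of lines in Cargo.toml; this is my recollection of the setting, so double-check the Cargo docs:

```toml
# Cargo.toml - strip debuginfo from release binaries,
# previously an explicit opt-in rather than the default
[profile.release]
strip = "debuginfo"
```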

I think there may be a difference, though, between catering to the general public and catering to specific users/clients. For the folks where link time is truly atrocious, I assume they make the effort to use mold (though again, there is good reason to be skeptical given that sold switched to an MIT license). I assume a similar willingness if/when someone's macro expansions are slow.
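For what it's worth, the usual mold opt-in (per the mold README, assuming an x86_64 Linux target and clang installed) is only a few lines of .cargo/config.toml - which is exactly the kind of small-but-nonzero effort that keeps most people on the defaults:

```toml
# .cargo/config.toml - route linking through mold via clang
# (target triple and clang driver are assumptions; adjust for your setup)
[target.x86_64-unknown-linux-gnu]
linker = "clang"
rustflags = ["-C", "link-arg=-fuse-ld=mold"]
```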

3. You're right, this macro caching feature is not complex! I'm sure someone like bjorn3 could code it up rather quickly. It does me no good trying to solo out-perform the rustc experts. But I think there is a lack of people improving the Rust dev experience. That's where I want to operate.


kobzol|1 year ago

The Rust build config defaults are pretty fine for the general case. It's just that not everyone has the general case :) In Rust the normal distribution will be quite flattened.

kapilsinha|1 year ago

Ha yes, that is a diplomatic way to put it. To the commenter's point though, I too question some of the defaults. mold does seem to be objectively better than the default linker. Stripping debuginfo does seem like it would have been a better default, which is why it was made so recently! Pipelined compilation also falls into this latter category - perhaps there is (understandably) just a delay before such improvements are adopted as stable defaults.

I know you mean it as a figure of speech, but I would consider the complexity (and build time) distribution for Rust to be heavy-tailed and skewed right, more so than a flattened normal.

nindalf|1 year ago

> Does some aspect address a development pain point of yours?

I've worked in a company with a large codebase, but we did all development on the server. Devex teams automatically set up artefact caching so builds would be fast. I guess this is what you're trying to replicate. I think others have tried it as well. Maybe it would be worth reaching out to Josh Triplett (https://github.com/joshtriplett) who was working on something similar.

In my current company (using Go) or personal projects (using Rust) I don't feel this pain at all. For the codebases I work on, Apple's M series chips are great.

> source available

I don't think this would help. Ultimately, it's a trust issue. Most people, me included, are used to getting Rust through rustup or their package manager. Downloading and using a fork is too much friction, and any security expert will strongly advise against it. Using a compiler binary built by someone else requires a lot of trust.

Being on a fork has other downsides - there might be subtle bugs like miscompilations that only I experience, instead of benefiting from the bug reports filed by the whole community. The compiler built by the Rust team is tested against every open source crate; there's no way a fork could do that kind of testing. Speed is another issue - 60% of all Rust developers are on the latest stable release (lib.rs/stats), so a majority upgrades asap. Waiting for a rebase and a release is just too late for all these people.

If anyone could have succeeded with a fork, it is the Sealed Rust initiative by Ferrous Systems. They were targeting customers who prefer to stick with one version of a toolchain for years. It would have been technically feasible for them to fork, add their special sauce and pass it on. But even they decided against it, upstreaming all their changes. I think it comes down to the work of maintaining a fork in perpetuity and the distrust of closed source.

> I assume a similar willingness if/when someone's macro expansions are slow.

I think by this time next year most users will see a 30-40% increase in compilation perf thanks to parallel frontend, cranelift and switch to lld. I'm sure no one would say no to additional speedups, but they won't be desperate for increased performance.
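For anyone who is desperate today, the opt-ins as I understand them look roughly like this (nightly-only as of early 2024; the flag and key names are unstable and may well have changed since):

```toml
# .cargo/config.toml - nightly-only experiments, circa early 2024
[build]
rustflags = ["-Zthreads=8"]     # parallel compiler frontend

[unstable]
codegen-backend = true          # allow selecting a backend per profile

[profile.dev]
codegen-backend = "cranelift"   # Cranelift for faster debug builds
```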

> there is a lack of people improving the Rust dev experience

I think the number of people working on the three projects I mentioned - the GCC backend, the GCC frontend, and cargo improvements (like the strip-debuginfo change you mentioned) - adds up to a lot of improvement. This is the general stuff, so I'm not counting specific work like making Rust better for embedded or Linux kernel development. Seems like a lot of good work happening in parallel.

I don't want to discourage you, but I'll leave you with this - even if you found enough people to adopt your forked changes (source available or not) and pay for it, that would just be a signal to the open source project that it's a really good idea to clean-room reimplement your changes. Competing with an incumbent is hard; competing with an open source incumbent when you're not open source - I've never seen it done successfully.

As a Rust user, I'd love to see you contribute this back to Rust and for Rust to become better. But I also understand your need to make a living. Best of luck.

the8472|1 year ago

> Ultimately, it's a trust issue. Most people, me included, are used to getting Rust through rustup or their package manager.

$corporate usually has a lot less trust issues once there's a contractual relationship with the proprietary provider. But that hurdle also means it's more difficult to get a foot in the door, especially when an open alternative exists.