top | item 44263897

CrendKing | 8 months ago

Why can't Cargo have a system like PyPI where library author uploads compiled binary (even with their specific flags) for each rust version/platform combination, and if said binary is missing for certain combination, fallback to local compile? Imagine `cargo publish` handle the compile+upload task, and crates.io be changed to also host binaries.

cesarb|8 months ago

> Why can't Cargo have a system like PyPI where library author uploads compiled binary

Unless you have perfect reproducible builds, this is a security nightmare. Source code can be reviewed (and there are even projects to share databases of already reviewed Rust crates; IIRC, both Mozilla and Google have public repositories with their lists), but it's much harder to review a binary, unless you can reproducibly recreate it from the corresponding source code.
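The reproducibility check this comment alludes to boils down to rebuilding the artifact from source and comparing digests. A minimal sketch, assuming a published digest file exists alongside the binary (crate and file names here are hypothetical):

```shell
# Hypothetical verification: rebuild the published crate from its source
# with pinned dependencies, then hash the resulting artifact.
cargo clean && cargo build --release --locked
sha256sum target/release/libmycrate.rlib > rebuilt.sha256

# Compare against the digest published alongside the uploaded binary;
# only a bit-for-bit reproducible build will match.
diff rebuilt.sha256 published.sha256 && echo "binary matches source"
```

In practice reproducibility also requires normalizing build paths, timestamps, and toolchain versions, which is exactly why "perfect reproducible builds" is a strong precondition.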

woodruffw|8 months ago

I don’t think it’s that much of a security nightmare: the basic trust assumption that people make about the packaging ecosystem (that they trust their upstreams) remains the same whether they pull source or binaries.

I think the bigger issues are probably stability and size: no stable ABI combined with Rust’s current release cadence means that every package would essentially need to be rebuilt every six weeks. That’s a lot of churn and a lot of extra index space.

pjmlp|8 months ago

Yet other ecosystems handle it just fine, regardless of security concerns, by having signed artifacts and configurable hosting as an option.

nicoburns|8 months ago

> Unless you have perfect reproducible builds

Or a trusted build server doing the builds. There is a build-bot building almost every Rust crate already for docs.rs.

epage|8 months ago

It runs counter to Cargo's current model, where the top-level workspace has complete control over compilation, including dependencies and compiler flags. I've been floating an idea of "opaque dependencies" that work like Python depending on C libraries, or a C++ library depending on a dynamic library.

littlestymaar|8 months ago

That would work for debug builds (and that's something I would appreciate), but not for release: most of the time you want to compile for the exact CPU you're targeting, not just for, say, "x86 Linux", to make sure your code is properly optimized with SIMD instructions.
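The per-CPU tuning described above is what a generic prebuilt binary can't provide; with a local compile it's a one-flag config (a sketch using rustc's real `-C target-cpu` codegen option):

```shell
# Optimize for the exact CPU of the build machine, enabling whatever
# SIMD extensions (SSE4.2, AVX2, AVX-512, ...) it actually supports:
RUSTFLAGS="-C target-cpu=native" cargo build --release

# Or persistently, in .cargo/config.toml:
#   [build]
#   rustflags = ["-C", "target-cpu=native"]
```

A binary uploaded by the crate author would instead be built for a lowest-common-denominator target, losing exactly this optimization.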

surajrmal|8 months ago

A trustworthy distributed cache would also work very well for this in practice. Cargo works with sccache, and using Bazel + RBE can work even better.
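The sccache integration mentioned above works by routing rustc invocations through a caching wrapper; a minimal local setup sketch:

```shell
# Install the compiler cache and make Cargo invoke rustc through it:
cargo install sccache
export RUSTC_WRAPPER=sccache

# Subsequent builds reuse cached compilation results where the
# inputs (source, flags, toolchain) match:
cargo build --release
sccache --show-stats   # inspect cache hit/miss counters
```

Pointed at a shared remote backend (e.g. object storage) instead of the local disk, the same mechanism gives the distributed cache the comment describes, without crates.io hosting binaries at all.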