
Rust is actually portable

355 points| ahgamut | 3 years ago |ahgamut.github.io

103 comments

[+] mihaigalos|3 years ago|reply
> I’d like a bit more flexibility in specifying what I want cargo to do.

Check out Bazel for Rust.

It allows:

* caching of artifacts.

* shareable caches between {developers, build jobs} based on hashes.

* remote distributed builds (on very many cores).

https://github.com/google/cargo-raze
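
For anyone curious, the basic workflow is small (a sketch based on the cargo-raze README; exact flags may vary by version):

```shell
# Install the generator, then run it from the directory containing Cargo.toml.
# It emits Bazel BUILD files for each Cargo dependency.
cargo install cargo-raze
cargo raze
```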

[+] spockz|3 years ago|reply
I really really want to like Bazel but I’m struggling with the amount of bookkeeping I need to do in Bazel build files. And then someone comes along saying something along the lines of “oh but we just use <somecli>” to do the updating of Bazel for us… sometimes that is even internal tooling.

Something else is that most projects tend to build everything from source, even protobuf dependencies, so it takes me an hour to get the initial build of envoy done.

[+] wongarsu|3 years ago|reply
There's also mozilla's sccache, which integrates with cargo (by wrapping rustc) to cache artifacts. A local cache is 2 lines of config in your .cargo/config.toml, and if you want to you can have shared caches in Redis or S3/Azure/GCP.

Not nearly as flexible or powerful as Bazel, but also vastly simpler to setup if all you want is caching.

https://github.com/mozilla/sccache
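
For reference, the local-cache setup really is tiny; something like this in `.cargo/config.toml` (assuming sccache is on your PATH):

```toml
[build]
rustc-wrapper = "sccache"
```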

[+] hazz99|3 years ago|reply
I’m using Bazel to build my Rust project (using the rules_rust rules) and it’s become quite a pain to use in concert with Docker.

This is not a complaint about Bazel specifically; it’s fantastic, and easily my favourite build system bar none.

However it cannot cross compile Rust. This means if I’m developing on my MacBook, and I want to compile a Rust binary and put it in an Ubuntu docker container, I can’t do it on my host machine. I need to copy the source into the container and build it there, using multistage builds.

But this is -extremely slow- because it cannot take advantage of Rust’s build caching. I’m talking 10-15 minutes for my small Rust project.

Has anyone run into this? How do you work around it?

I’ve considered running a Bazel remote execution server on a local Ubuntu VM, but this feels like so much extra complexity just to use Rust, Bazel and containers.
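
One workaround outside Bazel that some people use for the Mac-to-Linux case: cross-compile with a musl target so the binary is fully static, then COPY it into the image. This is a sketch, not a drop-in fix; it sidesteps the in-container build entirely, but it won't help if you specifically need rules_rust to do the build:

```shell
# One-time setup: add the static Linux target (x86_64 assumed here).
rustup target add x86_64-unknown-linux-musl

# Build on the Mac host; the resulting binary has no glibc dependency,
# so it runs unmodified in an Ubuntu (or scratch/alpine) container.
cargo build --release --target x86_64-unknown-linux-musl
```

Crates with C dependencies will usually still need a cross linker (e.g. a musl-cross toolchain or cargo-zigbuild), so treat this as a starting point.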

[+] kaba0|3 years ago|reply
I am always surprised how good and not-well-known Google’s tooling is. Another example would be its Closure Compiler (with the accompanying J2CL/J2ObjC tools, which are all ridiculously cool).
[+] daniel_rh|3 years ago|reply
I think the newer version of this is https://github.com/bazelbuild/rules_rust which lets you either vendor the dependencies or pull them from your Cargo.toml directly every time.

Per the article: bazel + rules_rust should have the flexibility to override the linker flags that Cargo may take as required since that would be a property of the bazel toolchain used.

It's a nice amalgamation of how cargo works and how bazel works.

In general bazel supports hermetic builds, multiple toolchains, cross compilation, and ways to compile multi-language projects.

I still wish that Cargo.toml didn't support build.rs as it can cause a lot of system-dependent problems that bazel sidesteps entirely by being hermetic.

[+] echelon|3 years ago|reply
I do not know why you're downvoted for this (crazy!?) -- this is exactly what I wanted to know [1]. I have a Rust monorepo with a bunch of "library"-type crates and about a dozen binaries (jobs, servers, userland programs).

I need this in my life.

[1] https://news.ycombinator.com/item?id=29745426

[+] dundarious|3 years ago|reply
These seem orthogonal to the flexibility desired in the post. My understanding is cargo-raze doesn't provide a way to trim -lm from the link line, for example -- it doesn't seem like it would provide any such features over what cargo provides.
[+] reiniermaas|3 years ago|reply
> It allows:
>
> * caching of artifacts.
>
> * shareable caches between {developers, build jobs} based on hashes.

This sounds like something that nix is optimised for. The inputs into building each package are captured, so having different feature flags would just create different artifacts.

[+] anothernewdude|3 years ago|reply
Last thing I want is more things to use Bazel. I can do without the headache. Perhaps Cargo can improve.
[+] csomar|3 years ago|reply
I wish I knew about this earlier! It's interesting that this project doesn't have a much higher visibility. Also wondering, what's the current relationship of this project with Google? (if you are involved with it)
[+] livinglist|3 years ago|reply
Wow, I didn’t know Bazel was this powerful, gotta try it out now.
[+] csomar|3 years ago|reply
I feel some of the OP's points. I was working on a profiling agent lately, and one of the issues was running it on multiple platforms (just the four big ones: linux/mac × x86/arm) over FFI (because it'll be run directly from python/ruby/etc...) and preferably having the thing just work without having to install or configure any dependencies.

Like OP I hit two walls: libunwind, and linking. For libunwind, I ended up downloading/compiling manually; and for linking there is auditwheel[1]. Although it is a Python tool, I did actually end up using it for Ruby (by creating a "fake python package", and then copying the linked dependencies).

It was at that time that I learned about linking for dynamic libraries and patchelf, and that there is really no single/established tool to do this. I thought there should be something, but most people seem to just install the dependencies along with the software. I also found, the hard way, that you still have to deal with gcc/C when working with Rust. It does isolate you from a lot of stuff, but for many things there is no workaround.

There is a performance hit to this strategy, however, since shared dynamic libraries will be used by all the running programs that need them; whereas my solution will run its own instance. It made me wonder if wasm will come up with something similar without affecting portability.

Finally, the project is open source and you can browse the code here: https://github.com/pyroscope-io/pyroscope-rs

[1]: https://github.com/pypa/auditwheel
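
For anyone who hasn't used these tools, the rough shape of what the parent describes looks like this (file names are hypothetical, for illustration only):

```shell
# auditwheel copies the shared libraries a wheel links against into the
# wheel itself and rewrites the library search paths.
auditwheel repair dist/agent-0.1.0-cp39-cp39-linux_x86_64.whl

# patchelf does the low-level part by hand: make a library look for its
# dependencies next to itself rather than in system paths.
patchelf --set-rpath '$ORIGIN' libagent.so
```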

[+] forrestthewoods|3 years ago|reply
Is compiling once and running on 6 platforms really that compelling? One of Rust’s super powers is that it’s really easy to write code once that can be compiled N times for N platforms without making any changes.

I’m all about writing code once. But compiling a few times doesn’t seem like that big of a deal to me?

The article says it runs on “six operating systems” but I can’t find them listed?
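
For what it's worth, the "write once, compile N times" model usually means the source needs no changes at all; platform differences stay behind cfg. A minimal sketch:

```rust
// Compile-time dispatch: the same source builds on every platform,
// and cfg! picks the branch for whichever target you compile for.
fn os_name() -> &'static str {
    if cfg!(target_os = "linux") {
        "linux"
    } else if cfg!(target_os = "windows") {
        "windows"
    } else if cfg!(target_os = "macos") {
        "macos"
    } else {
        "other"
    }
}

fn main() {
    // Prints whichever OS this particular build was compiled for.
    println!("compiled for: {}", os_name());
}
```

Running `cargo build --target <triple>` once per platform then gives you N native binaries from the one source tree.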

[+] AgentME|3 years ago|reply
I'm not sure if the "actually portable executable" stuff is really practical for anything in its current state, but I find it neat the way the development of the project encourages people to try to find unifying abstractions between software environments and practice writing build tools for new software environments.
[+] fulafel|3 years ago|reply
Go users seem to value a single static executable quite highly, a single portable executable would be even better in the same direction.
[+] pritambarhate|3 years ago|reply
The python article mentions those: https://ahgamut.github.io/2021/07/13/ape-python/

> This post describes a proof-of-concept Python executable (2.7.18 and 3.6.14) built on Cosmopolitan Libc, which allows it to run on six different operating systems (Linux, Mac, Windows, NetBSD, FreeBSD, OpenBSD)

[+] FullyFunctional|3 years ago|reply
I too couldn’t find the magic six, but furthermore, the portability dimension I’m most excited about is the transposed one: microprocessor architectures. I daily compile and swap between x64/arm64/rv64 without a hitch and know that there are other options too, but it’s always Unix (90% Linux).

It would have been nice if the OP had spent a few words on the motivation here.

[+] ahgamut|3 years ago|reply
Good catch! I updated the post to mention the six operating systems (Linux, Windows, MacOS, FreeBSD, NetBSD, OpenBSD).
[+] petesergeant|3 years ago|reply
> Cosmopolitan Libc ... that runs natively on Linux + Mac + Windows + FreeBSD + OpenBSD + NetBSD + BIOS

So some subset of those

[+] logankeenan|3 years ago|reply
I’ve found rust incredibly portable. I’ve hacked around running the same server side app on the web (WASM), PC/Mac/Linux, iOS, and Android. Another project is a web app running on iOS and Android leveraging a SQLite DB.
[+] zaphar|3 years ago|reply

    I’d change a configuration flag, some part of std would break because my
    flag was wrong, and I’d learn something new about Rust and how std worked.
The project was probably worth doing just because of this. Breaking things in a safe environment is such a great way to learn how it all works.
[+] techdragon|3 years ago|reply
I love this sort of portability, and in particular how it just makes Rust even more useful to me.

The library this is built on does have a bit of a weakness with respect to GUI software (https://github.com/jart/cosmopolitan/issues/35). If this can be fixed, this will be an amazing tool for building simple cross-platform utilities and tools.

[+] manholio|3 years ago|reply
Aside from the neatness factor and hacker street cred, I don't exactly get the practical point for the vast majority of software. What am I to do with such a binary? Do I put it live on my website and allow my clients to download it? If I leave it with an .exe extension so that it runs in the Windows shell, wouldn't that confuse users of other platforms? What if I need a directory structure, as 99% of programs do? Do I use a zip or a tgz? In the first case, how do I preserve permissions on Unix targets? Do I need to instruct my clients in how to use tgz on the command line and/or set permissions?

Software distribution is by its nature a very platform-specific problem; even if we accept the premise of an x64 world, a universal binary solves just a very small portion of the practical problems of portable deployment.

Ironically, the best use case I can imagine is creating a universal binary installer that can run on any Unix system and then proceed to make platform-specific decisions and extract files contained in itself, sort of like Windows binary installers work. But that's an utterly broken distribution model compared to modern package managers.

[+] childintime|3 years ago|reply
Just one question, as suggested by the title: could the rust compiler itself be made portable using this? I guess not, because of its use of multi-threading.
[+] rkangel|3 years ago|reply
You probably could, but that would be less useful than you think.

There are two machines that you care about with a compiler: the machine the compiler is running on ("Host"), and the machine the compiler is producing code for ("Target").

Generally we use a compiler with the same Host and Target - if you use Rust on x64-Windows you get a binary that runs on x64-Windows. If you use it on ARM-Linux you get a binary that runs on ARM-Linux. What you are talking about is making a compiler that would run on all Hosts, but it would take different work to make it able to produce code for all Targets. So you'd produce a compiler that targeted x64-Windows, and it would run on x64-Linux but still produce code for x64-Windows. It would also NOT be able to run on ARM-Linux.

[For completeness there's actually three machines we talk about with compilers - in addition to Host and Target there is also "Build". This allows you to cross-compile your compiler. For example you want to build your compiler on x86, you want the resulting compiler to run on ARM, and when it runs it produces code for RISC-V. Here Build is x86, Host is ARM and Target is RISC-V.]
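
In autotools terms, those three machines map directly onto configure flags; the Build/Host/Target example above would look roughly like this (triplet spellings approximate):

```shell
# Canadian cross: build the compiler on x86, have the compiler run on ARM,
# and have it emit code for RISC-V.
./configure --build=x86_64-pc-linux-gnu \
            --host=aarch64-unknown-linux-gnu \
            --target=riscv64-unknown-linux-gnu
```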

[+] ajross|3 years ago|reply
> I just built a Rust executable that runs on six operating systems

I help maintain a kernel in C that runs on nine architectures, some of which don't even have LLVM backends, much less stable rust toolchains.

"Portable" means rather different things. This blog post is focused on the easy stuff.

[+] DenseComet|3 years ago|reply
They didn't build six executables that run on six operating systems. Rather, it's a single natively compiled executable that runs on six different operating systems unmodified.

Cosmopolitan is an incredibly cool project that does more than you think.

https://github.com/jart/cosmopolitan

[+] 8jy89hui|3 years ago|reply
While this project might focus on the “easy” definition of portable, I’ve never seen this done before. This post was both interesting and informative.

I don’t think you comparing your (unlinked and unnamed) kernel to this is very constructive. It feels like you’re gate-keeping.

[+] aabbcc1241|3 years ago|reply
node.js is also portable with pkg
[+] feffe|3 years ago|reply
Nitpicking on terminology: portable used to mean that software can run on another platform with minimal modifications, typically by relying on abstraction layers that then have multiple implementations. It's cool that a single executable can run on both Windows and some Unixes, but that's something other than what portable used to mean.

portable = able to port