If you don't mind I'm super curious as to what approach you ended up taking. Did you use rules_foreign_cc to build the ninja files they generate? Or generating BUILD files directly? Or something completely different? Sounds like a really cool project!
> I cant speak much about the system, it just works,
What systems don't just work by this criterion?
As long as you are within "normal expected operating conditions", does statically vs. dynamically linked really make a "just works vs. doesn't work" quality difference?
Doesn't linking everything statically imply that the base image -- and memory, at runtime -- will be bloated by many copies of libc and other common libraries? I do like the simplicity of static linking but it sort of seems to go against the idea of avoiding "bloat".
A linker typically only includes the parts of the library it needs for each binary, so when you statically link, some of the same code will definitely end up duplicated across binaries, but you will not get complete copies of each library.
But I wouldn't consider this bloat. To me it is just a better separation of concerns. To me, bloat would be having a system that has to keep track of all library dependencies instead, both from a packaging perspective and at runtime. I think it depends where you are coming from. To me static linking is just cleaner. I don't care much about the extra memory it might use.
I have seen this sort of statement on HN before. I am guessing that the people who propagate this idea have never actually experimented with replacing dynamically-linked programs having numerous dependencies with statically-compiled ones. It's a theory that makes sense in the abstract, but they have not actually tested it.
Though it is not a goal of mine to save storage space by using static binaries, and I actually expect to lose space as a tradeoff, I have actually saved storage space in some cases by using static binaries. This comes from being able to remove libraries from /usr/lib. TBH, I am not exactly sure why this is the case. Perhaps in part because one might be storing large libraries containing significant numbers of functions that one's programs never use.
For me using static binaries works well. Even "common" libraries can be removed in some cases by using a multi-call/crunched binary like busybox. This might not work for everyone. I think much depends on what selection of programs the computer owner prefers. (Namely, the dependencies required by those programs.)
Statically linked binaries are generally a lot smaller than a dynamically linked binary plus the libraries it depends on, especially with link-time optimization and inlining.
You wouldn't want to have 100 tools statically link the entirety of Chromium, but for normal C library sizes you don't get bloat. The preference for dynamic libraries in Linux distros is mainly so they can roll out patched libraries in one place instead of rebuilding all dependents.
musl is significantly smaller and "less bloat" than glibc, so even with a statically linked program, it still remains small in both system memory and storage.
Not necessarily. Bloat is one reason dynamic linking was originally rolled out, but the bigger benefit (to manufacturers) was being able to update libraries without updating the applications. This has been the source of much trouble (dependency hell), and statically linked binaries suffer none of these issues. It's not like every application uses all of every library, and an efficient linker can see which parts of the library it needs to link and which parts it can safely leave out.
Once it's loaded in memory, if Kernel Samepage Merging is enabled it might not be as bad, but I would love to hear if somebody has any thoughts:
https://docs.kernel.org/admin-guide/mm/ksm.html
I know lots of compilers/linkers don't optimize for it, but it should be possible to 'tree shake' libraries so only the parts that are used by an application are included. That would shake off a lot of the 'bloat'.
I guess each of the copies of libc can be optimized down to only the functions the specific binary calls (and the compiler is then allowed to optimize past the library boundary), so maybe this balances the issues a bit.
Not that I really know anything about it, ask jart
For a real statically-linked Linux system, the main issue is GPU support: you must relink all apps that actually use a GPU to include the required GPU drivers.
With sound (ALSA) it is fine, since there is IPC/shared-memory based mixing (dmix/dsnoop) whatever the playback/capture devices, so static linking is reasonable. (The PulseAudio 0..1..2.. IPC interfaces are bloaty kludges, hardly stable over time, not to be trusted compared to the hardcore stability of the ALSA one, which does a beyond-good-enough job *and* offers _real_ in-process low-latency hardware access at the same time.)
X11 and Wayland are IPC based, so no issue there either.
But for the GPU, we would need a Wayland-style, Vulkan-inspired set of 3D IPC/shared-memory interfaces (with a 3D-enabled Wayland compositor). For compute, the interfaces would be decoupled from the Wayland compositor (shared dma-bufs).
The good part of this would be to free our system interfaces from the ultra-complex ELF format (one could choose an excruciatingly simple executable file format, i.e. a modern executable file format, though it would need compiler/linker support to ease legacy compatibility).
There is a middle ground though: everything statically linked except the apps requiring the GPU driver (for which ELF is grotesquely overkill), with that driver still provided as a shared library.
What is the advantage of using the cproc C compiler instead of e.g. TCC?
I wasn't aware of Netsurf (https://www.netsurf-browser.org/); this is really amazing. But it seems to use Duktape as the JS engine, so performance might be an issue.
cproc supports C11; TCC only goes up to C99. There is also something to be said for cproc using QBE, which is slowly growing backends like RISC-V etc. that TCC doesn't support, AFAIK.
Judging by the screenshots, it can render BBC, its own website, and Wikipedia. Well, it might be able to render others, we just can't tell from the shots. But we can tell those three websites work with all sorts of different window decorations.
There's a (dead) comment lamenting that you can't access Github with javascript turned off. The Oasis repo seems to be mirrored on sourcehut, though, so if that's more acceptable:
This is very very cool. I love the bloat free nature of the thing, especially velox (the WM). Samurai (build system) also looks pretty interesting. I've not managed to work out quite how samurai works, or truthfully, why it differs from ninja, but this project is exactly the kind of brain food I intend on learning a lot from.
One of the reasons I've switched some builds over to musl over glibc, is that I found that glibc linking is brittle if you're going to run a binary over multiple distros in various container environments. Particularly if you want one binary to work on linux across RH and Debian/Ubuntu derived distros or even different ages of distro.
The real comparison is: musl does not provide any preprocessor macro to tell you what libc you're using.
And it has so many weird quirks that you need to work around.
***
Static linking makes linking more painful, especially regarding global constructors (which are often needed for correctness or performance). This is not a musl-specific issue, but a lot of people are interested in both.
Just do your builds on the oldest supported system, and dynamic linking works just fine. You can relative-rpath your non-libc dependencies if they would be a pain to install, though think twice about libstdc++.
***
The major advantage of musl is that if you're writing a new OS, it's much easier to port.
> What is the comparison between using musl and traditional glibc?
you get weird bugs and failures that don't happen with glibc (like the incomplete DNS resolving routines that would fail under some conditions), but you can brag about saving 30-40 MB of disk space.
this project seems to be compromising on quality overall, in the name of having a smaller size.
Even BearSSL is, by its own website's admission, beta-quality: "Current version is 0.6. It is now considered beta-quality software" (from https://bearssl.org/).
Speaking from heavy experimentation and experience, [0] glibc has some more heavily optimized routines, but musl has significantly less bloat. If you are haphazardly calling libc functions left and right for everything and have a generally unoptimized code base, your code may fare better with glibc. But musl's smaller codebase is otherwise a win for faster startup and micro-optimizations, and that's without LTO, where it stands to gain more.
It's unclear to me what "100%" refers to here, but surely it does not include the Linux kernel or drivers? (I've recently read conversations about how difficult this would be.)
I could imagine there were unexpected efficiencies. Although dynamic libraries should be able to share an address space, I think with static libraries the linker might strip out unused routines.
dijit | 2 years ago
I had the plan to build oasis with bazel for some immutable OS images that could run as kubernetes nodes. I succeeded with a little pointing.
malux85 | 2 years ago
eek2121 | 2 years ago
EDIT: that was meant to be a joke, I forgot HN doesn't support emojis.
gravypod | 2 years ago
colatkinson | 2 years ago
public_void | 2 years ago
MuffinFlavored | 2 years ago
kentonv | 2 years ago
jezze | 2 years ago
1vuio0pswjnm7 | 2 years ago
Shorel | 2 years ago
This seems a weird thing to complain about =)
Gazoche | 2 years ago
arghwhat | 2 years ago
zshrc | 2 years ago
jacquesm | 2 years ago
javierhonduco | 2 years ago
liampulles | 2 years ago
bzzzt | 2 years ago
Gabrys1 | 2 years ago
thanatos519 | 2 years ago
... oh wait, the apps have to hint that it's possible. Nebbermind.
sylware | 2 years ago
Rochus | 2 years ago
helloimhonk | 2 years ago
willy_k | 2 years ago
Working link: https://www.netsurf-browser.org
cpach | 2 years ago
mike_hock | 2 years ago
Every single link on that page is dead.
https://www.netsurf-browser.org/about/screenshots/
schemescape | 2 years ago
I'm curious how it compares to, say, Alpine with a similar set of packages.
jackothy | 2 years ago
ratrocket | 2 years ago
https://git.sr.ht/~mcf/oasis
__s | 2 years ago
oasis's predecessor would be https://dl.suckless.org/htmlout/sta.li
sigsev_251 | 2 years ago
[1]: https://github.com/michaelforney/cproc
hkt | 2 years ago
Many, many props to Michael Forney.
eterps | 2 years ago
sluongng | 2 years ago
Are there performance differences between the two?
I have been seeing musl used more and more in both Rust and Zig ecosystems lately.
digikata | 2 years ago
o11c | 2 years ago
znpy | 2 years ago
ComputerGuru | 2 years ago
[0]: https://neosmart.net/blog/a-high-performance-cross-platform-...
Edit:
Sorry, the correct link is this one: https://neosmart.net/blog/using-simd-acceleration-in-rust-to...
skywal_l | 2 years ago
joveian | 2 years ago
https://wiki.musl-libc.org/functional-differences-from-glibc...
lproven | 2 years ago
https://news.ycombinator.com/item?id=32458744
nightowl_games | 2 years ago
notfed | 2 years ago
Koshkin | 2 years ago
m463 | 2 years ago
also, it might be faster
trinsic2 | 2 years ago
malux85 | 2 years ago
attentive | 2 years ago
Wouldn't it be better to pile on OpenWRT?
speedgoose | 2 years ago
lordwiz | 2 years ago