item 42804835

Bunster: Compile bash scripts to self contained executables

207 points| thunderbong | 1 year ago |github.com

82 comments

[+] shrx|1 year ago|reply
It should be possible to run bash scripts on any system supported by jart's cosmopolitan library [1], which provides a platform-agnostic bash executable [2].

[1] https://justine.lol/cosmo3/

[2] https://cosmo.zip/pub/cosmos/bin/

[+] shakna|1 year ago|reply
Yes and no.

It does work, but there is some OS-specific stuff that can still pop up and explode on you. There are different guarantees around open/write on Windows and *nix, and Cosmopolitan doesn't 100% paper over those gaping differences. It doesn't change the underlying file-locking behaviour of the file system, for example. You can run into thread-timing guarantees and other streaming problems when piping from one thing to another.

[+] Phlogistique|1 year ago|reply
The README fails to address the elephant in the room, which is that shell scripts usually mainly call external commands; as far as I can tell, there is no documentation of which built-ins are supported?

That said, in a similar vein, you could probably create a bundler that takes a shell script and bundles it with busybox to create a static program.

[+] nodesocket|1 year ago|reply
I wondered this as well. How is something like "cat file.json | jq '.filename' | grep out.txt" implemented in Go?
[+] zamalek|1 year ago|reply
I assume this is what they are talking about here:

> Standard library: we aim to add first-class support for a variety of frequently used/needed commands as builtins. you no longer need external programs to use them.

That's not going to be an easy task, and would basically entail porting those commands to go.

[+] hezag|1 year ago|reply
Disclaimer: the elephant in the room has nothing to do with ElePHPant, the PHP mascot.
[+] mixedmath|1 year ago|reply
I'm confronted with a similar problem frequently. I have a bash script that keeps slowly growing in complexity. Once bash scripts become sufficiently long, I find editing them later very annoying.

So instead, at some point I change the language entirely and write a utility in python/lua/c/whatever other language I want.

As time goes on, my limit for "sufficient complexity" to justify leaving bash and using something like python has dropped radically. Now I follow the rule that as soon as I do something "nontrivial", it should be in a scripting language.

As a side-effect, my bash scripting skills are worse than they once were. And now the scope of what I consider "trivial" is shrinking!

[+] ComputerGuru|1 year ago|reply
My problem with python is startup time and packaging complexity (either dependency hell or a full-blown venv with pipx/uv). I've been rewriting shell scripts as either Makefiles (crazy, but it works, is rigorous, and you get free parallelism) or rust "scripts" [0], depending on their nature (number of outputs, number of command executions, etc.)

Also, using a better shell language can be a huge productivity (and maintenance and sanity) boon, making it much less “write once, read never”. Here’s a repo where I have a mix of fish-shell scripts with some converted to rust scripts [1].

[0]: https://neosmart.net/blog/self-compiling-rust-code/

[1]: https://github.com/mqudsi/ffutils

[+] fieu|1 year ago|reply
I have exactly the same issue. I maintain a project called discord.sh which sends Discord webhooks via pure Bash (and a little bit of jq and curl). At some point I might switch over to Go or C.

https://github.com/fieu/discord.sh

[+] NoMoreNicksLeft|1 year ago|reply
Yesterday, I had a problem where wget alone could do 98% of what I wanted. I could restrict which links it followed, but the files I needed to retrieve were a url parameter passed in with a header redirect at the end. I spent an hour relearning all the obscure stuff in wget to get that far. The python script is 29 lines, and it turns out I can just target a url that responds with json and dig the final links out of that. Usually though, yeh, everything starts as a bash script.
[+] maccard|1 year ago|reply
I agree. My limit is pretty much: once you start branching or looping, it should be in another tool. If that seems low to you, that's the point.
[+] bigstrat2003|1 year ago|reply
I definitely agree. Bash is such an unpleasant language to work with, with so many footguns, that I reach for a language like Python as soon as I'm beyond 10 lines or so.
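A minimal demonstration of the kind of footgun being described: unquoted expansions undergo word splitting (the filename here is made up):

```shell
#!/usr/bin/env bash
f="my file.txt"

# Unquoted: the expansion is word-split into two arguments.
set -- $f
echo "unquoted args: $#"   # unquoted args: 2

# Quoted: passed through as a single argument.
set -- "$f"
echo "quoted args: $#"     # quoted args: 1
```

This is exactly the class of thing shellcheck flags (SC2086), and one reason quoting discipline matters more in bash than in most languages.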
[+] AtlasBarfed|1 year ago|reply
Isn't this perfect for LLM?

You know, assuming they transpile well, I haven't tried a solid one yet.

I wonder if kernel code rewrites in Rust with Llama (obviously reviewed) are up to snuff.

[+] skulk|1 year ago|reply
If you want portable shell scripts that come with their dependencies bundled, Nix also has a solution: writeShellApplication[0] (and simpler ones like writeShellScript).

    writeShellApplication {
      name = "show-nixos-org";

      runtimeInputs = [ curl w3m ];

      text = ''
        curl -s 'https://nixos.org' | w3m -dump -T text/html
      '';
    }
writeShellApplication will call shellcheck[1] on your script and fail to build if there are any issues reported, which I think is the only sane default.

[0]: https://nixos.org/manual/nixpkgs/stable/#trivial-builder-wri...

[1]: https://www.shellcheck.net/

[+] samtheprogram|1 year ago|reply
So it compiles to a single executable that I can send to someone who isn’t on Nix?

Because if I wanted a portable shell script, I’d just write shell and check if something is executable in my path.

This just looks like Nix-only stuff that exists in an effort to be ultra declarative, and in order to use it you’d need to be on Nix.
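The "check if something is executable in my path" approach from the comment is usually a small preflight loop; a sketch (the command names are just examples — swap in curl, jq, or whatever the script actually needs):

```shell
#!/usr/bin/env bash
# Fail fast if any required external command is missing from PATH.
for cmd in awk sed; do
  if ! command -v "$cmd" >/dev/null 2>&1; then
    echo "error: required command '$cmd' not found in PATH" >&2
    exit 1
  fi
done
echo "all dependencies present"
```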

[+] azeirah|1 year ago|reply
Nix is the best.

If you're reading this and wondering how you can use this yourself: you don't need NixOS at all. You can install Nix on any Linux-like system, including macOS.

[+] johnvaluk|1 year ago|reply
Is it possible to override shellcheck? It's a valuable tool that I use all the time, but it reports many false positives. It's not unusual for junior developers to introduce bugs in scripts because they blindly follow the output of shellcheck.
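It is: ShellCheck supports per-line directives, so a reviewed false positive can be silenced explicitly rather than "fixed" blindly. For instance, SC2086 (unquoted expansion) can be disabled where the splitting is deliberate:

```shell
#!/usr/bin/env bash
words="one two three"

# Splitting is intentional here, so suppress the quoting warning.
# shellcheck disable=SC2086
set -- $words
echo "$#"   # 3
```

A directive placed at the top of the file (before the first command) applies to the whole script, which is handy when a check is systematically wrong for a given codebase.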
[+] gchamonlive|1 year ago|reply
I still haven't come around to using nix in my daily workflow. My concerns are the high entry bar, obscure errors, and breaking changes, but also excessive use of storage, either because that's just how it works or because I won't know how to manage the store well.

How's nix these days? How long would you expect someone with years of Linux management experience (bash, ansible, terraform, you name it, either on-prem or in the cloud) to take to get comfortable with nix? And what would be the best roadmap for introducing nix slowly into my workflow?

[+] rounce|1 year ago|reply
Well you're still leaning on Nix to provide the dependencies. All `writeShellApplication` will do is prepend the `PATH` variable with the `bin` directories of the provided `runtimeInputs`, it still just spits out a bash script, not a binary that includes bash, the script, and the other dependencies. I reckon it's quite possible for someone to lean on Nix to implement producing an all-in-one binary though.
[+] abathur|1 year ago|reply
If your shell scripts/libraries are a little more complex, resholve can also help package them a little more reliably.

(I'd say it's overkill for your example here, but it blocks on missing dependencies and can support tricky cases such as modular shell libraries that expect different implementations of the same command.)

[+] sammnaser|1 year ago|reply
I don't see what problem this solves, especially in its current form only supporting Unix. Bash scripts are already portable enough across Unix environments, the headaches come from dependency versioning (e.g. Mac ships non-GNU awk, etc). Except with this, when something breaks, I don't even get to debug bash (which is bad enough), but a binary compiled from Go transpiled from bash.
[+] nightowl_games|1 year ago|reply
One of the most critical elements of a shell script is that the source can be easily examined.

Bringing this into your system seems like a huge liability.

The syntax of shell scripts is terrible, but we write it to do simple things easily without needing more external tools.

git-bash on windows is generally good enough to do the kind of things most shell scripts do.

This tool feels like the worst of both worlds: bash syntax + external dependency.

[+] BeetleB|1 year ago|reply
Oh. I was just about to comment that it may be easier to understand what it does by decompiling the binary than by looking at the actual unreadable Bash language ;-)
[+] koolba|1 year ago|reply
Does it support eval?

Because then you could compile something like

    #!/usr/bin/env bash
    eval "$@"
And get a statically compiled bash!
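For reference, that wrapper behaves like `bash -c` for its arguments; in plain bash (bunster aside):

```shell
#!/usr/bin/env bash
# evalsh: execute whatever command string is passed in, i.e. the
# one-liner from the comment above.
eval "$@"
```

Running `./evalsh 'echo hi | tr h H'` prints `Hi`, so a compiled version of this really would amount to a statically linked bash interpreter, quoting hazards included.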
[+] Imustaskforhelp|1 year ago|reply
What does this do mate? (I tried to run it and it failed)
[+] epic9x|1 year ago|reply
Portability and other constraints I've discovered with the shell have always been a sign that I need to reach for a different tool. Bash is so often a "glue" language where accessibility and readability are its primary features, right after the immediate utility of whatever it's automating. Writing POSIX-compatible scripts is probably safer and can be validated with projects like shellcheck.

That said - this is a neat project and I've seen plenty of "enterprise" use-cases where this kind of thing could be useful.

[+] jonathaneunice|1 year ago|reply
Ambitious.

Given the great diversity of shell scripting needed (even if just bash) across different variants of Linux and Unix and different platform versions, debugging the resulting transpiled executables is not something I'd be keen to take on. You'd want to be an expert in the Go ecosystem at minimum, and probably already committed to moving your utility programming into Go.

[+] gtsop|1 year ago|reply
It is a very interesting technical feat to be able to do that... but should you do it?

My gut feeling says no. Unless I am missing something.

[+] josephcsible|1 year ago|reply
> Password and Expiration Lock: Surprisingly, some people have asked for this feature. Basically, It allows you to choose an expiry date at build time. the generated program will not work after that date. Also you can choose to lock the script using a password. whenever you try to run it, it prompts for the password.

Support for that makes me sad. It's antithetical to everything FOSS is.

[+] xyzzy_plugh|1 year ago|reply
This has nothing to do with FOSS. Self-detonating code is a great idea, something my peers and I often joke about but rarely actually implement (though I have done deprecations that are similar).

Here's some FOSS just for you:

   /* Copyright (c) 2025 xyzzy_plugh all rights reserved.
   
   Usage of the works is permitted provided that this instrument is retained with the works, so that any entity that uses the works is notified of this instrument.
   
   DISCLAIMER: THE WORKS ARE WITHOUT WARRANTY.
   */
   if(time(NULL) > 1767225600) exit(1);
[+] rednafi|1 year ago|reply
Neat project. Can’t say I’ve ever been in a situation where I thought, “If only this shell script were a standalone binary.” By the time I get to that point, I’ve usually outgrown shell syntax and just jump straight to Go.

Still, I can see this being really handy for people who don’t speak Go or Rust but want to throw together a quick-and-dirty shell script and still need a standalone binary.

[+] extraduder_ire|1 year ago|reply
I have. At one point I wanted to set a bash script to setuid/setgid.

By the time I read up on why that didn't work and how to "fix" it, I decided it was a bad idea and tried something else.

[+] stabbles|1 year ago|reply
A big advantage of shell scripts is that they're scripts and you can peek in the sources or run with `-x` to see what it does.
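For readers unfamiliar with it, `-x` prints each command (after expansion) to stderr before running it; a quick illustration:

```shell
#!/usr/bin/env bash
# Capture the -x trace of a tiny script from stderr, discarding stdout.
trace=$(bash -x -c 'msg=hello; echo "$msg"' 2>&1 >/dev/null)
printf '%s\n' "$trace"
# + msg=hello
# + echo hello
```

Each traced line is prefixed with `+` (the default PS4), with variables already expanded — which is exactly the visibility a compiled binary takes away.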
[+] ur-whale|1 year ago|reply
I'm not able to fathom the security implications of this but my gut tells me ... ugh.
[+] vander_elst|1 year ago|reply
Are there performance drawbacks in particular with long pipelines (e.g. something like `cat | grep | sed | bc | paste | ...`)?
[+] ComputerGuru|1 year ago|reply
To the contrary. They're all run in parallel, and the (standard) output goes directly from one to the next without being buffered by the shell. Unix overhead for process creation is very low compared to other systems; doing the same under Windows, for example, would be more expensive.

But if you have to run n processes, much better to run them in a single pipeline like that.

(Source: I’m a shell developer. Fish-shell ftw!)
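The streaming behaviour is easy to observe with an infinite producer: `head` exits after three lines and `yes` dies on SIGPIPE, so the pipeline finishes instantly even though one stage never would on its own:

```shell
#!/usr/bin/env bash
# All stages start concurrently; `head -n 3` closes the pipe after
# three lines, which terminates the otherwise-infinite `yes`.
yes | head -n 3
# y
# y
# y
```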

[+] IshKebab|1 year ago|reply
This is fucking dumb. Sorry but this is just a paragon of everything wrong with Unix.

The only reason to use shell in the first place is because I can't use a binary compiled from a sane language.

This... Wow. This is like not having your cake and not eating it.

The shitness of Bash combined with the non-portability of binaries! Sign me up!

It's the opposite of https://amber-lang.com/ which tries to (not sure it succeeds) provide a sane language with the portability of shell (ignoring Windows).

That's a sensible project. This is just... Why does this exist?

[+] Alifatisk|1 year ago|reply
Very cool, but since this transpiles Shell to Go, what makes this difficult to port to Windows?
[+] forgotpwd16|1 year ago|reply
Seems one of the project's goals is to convert frequently used commands to builtins. So maybe it's because currently the converted scripts still use external programs that are usually only available on Unix.