This is the way. Shell makes for a terrible scripting language; I usually start regretting choosing it around the time I have to introduce the first `if` into my "simple" script, or have to do some more complex string manipulation.
At least nowadays LLMs can rewrite Bash to JS/Python/Ruby pretty quickly.
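The string-manipulation complaint above is concrete: even simple path surgery in Bash means parameter-expansion incantations. A minimal sketch (the path is made up for illustration):

```shell
# The kind of parameter-expansion incantation the comment refers to
# (the path here is hypothetical).
path="src/main.test.js"
base="${path##*/}"   # strip the directory part -> "main.test.js"
stem="${base%.js}"   # strip the extension      -> "main.test"
echo "$base -> $stem"
```

It works, but remembering which of `##`, `#`, `%%`, and `%` strips which end is exactly the point where many people reach for another language.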
This is exactly the frustration that led me to write Rad [0] (the README leads with an example). I've been working on it for over a year, and the goal is basically to offer a programming language specifically for writing CLIs. It aims for declarative args (no hand-rolled Bash option parsing each time), automatic --help generation, and friendly (Python-like) syntax, and it's perfect for dev build scripts. I'll typically have something like this:
    #!/usr/bin/env rad
    ---
    Dev automation script.
    ---
    args:
        build b bool      # Build the project
        test t bool       # Run tests
        lint l bool       # Run linter
        run r bool        # Start dev server
        release R bool    # Release mode
        filter f str?     # Test filter pattern

        filter requires test

    if build:
        mode = release ? "--release" : ""
        print("Building ({release ? 'release' : 'debug'})...")
        $`cargo build {mode}`

    if lint:
        print("Linting...")
        $`cargo clippy -- -D warnings`

    if test:
        f = filter ? "-- {filter}" : ""
        print("Running tests{filter ? ' (filter: {filter})' : ''}...")
        $`cargo test {f}`

    if run:
        bin = release ? "target/release/server" : "target/debug/server"
        $`./{bin}`
Usage: ./dev -b (build), ./dev -blt -f "test_auth" (build, lint, test auth), ./dev -r (just run).
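For comparison, the "declarative args" point is easiest to see against the Bash boilerplate it replaces. A minimal getopts sketch of just the flag-parsing part of the script above (flag names mirror the example; the `filter requires test` rule has to be enforced by hand):

```shell
# Hand-rolled Bash equivalent of the declarative args block above
# (illustrative only; flag names copied from the Rad example).
set -eu

build=false; test_flag=false; lint=false; run=false; release=false; filter=""

usage() { echo "Usage: $0 [-b] [-t] [-l] [-r] [-R] [-f pattern]"; }

while getopts "btlrRf:" opt; do
  case "$opt" in
    b) build=true ;;
    t) test_flag=true ;;
    l) lint=true ;;
    r) run=true ;;
    R) release=true ;;
    f) filter="$OPTARG" ;;
    *) usage; exit 1 ;;
  esac
done

# "filter requires test" must be checked manually.
if [ -n "$filter" ] && [ "$test_flag" != true ]; then
  echo "error: -f requires -t" >&2
  exit 1
fi

echo "build=$build test=$test_flag lint=$lint filter=$filter"
```

None of this includes --help generation, which getopts doesn't give you either.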
I consider LuaJIT a much better choice than Bash if both maintainability and long-term stability are valued. It compiles from source in about 5 seconds on a seven-year-old laptop and only uses C99, which I expect to last basically indefinitely.
For some value of "run", because I'm hella sure that any such script has quite a few serious bugs no matter what, starting with escaping, or just a folder being empty or having different files than when the script was written, causing it to break in a completely unintelligible way.
> This is the way. Shell makes for a terrible scripting language, that I start regretting choosing usually around the time I have to introduce the first `if` into my "simple" scripts, or have to do some more complex string manipulation.
I suppose it can be nice if you are already in a JS environment, but wouldn't the author's need be met by just putting their shell commands into a .sh file? This way is more than a little over-engineered with little benefit in return for that extra engineering.
The reasons (provided by the author) for creating a Make.ts file are completely met by popping your commands into a .sh file.
With the added advantage that I don't need to care about what else needs to be installed on the build system when I check out a project.
The benefit is you can easily scale the complexity of the file. An .sh file is great for simple commands, but with a .ts file with Deno you can pull in a complex dependency with one line and write logic more succinctly.
The differences between environments can be significant: many shell scripts rely on certain external programs being available and behaving consistently, which is much less true across Windows and Mac.
I've found that Deno with TS specifically lets me be much more consistent working on projects with workers across Windows, Mac and Linux/WSL.
I've been working a lot in fairly complex shell scripts lately (though none much over 1000 lines). Some of them are little programs that run locally, and others drive a composable cloud-init module for Terraform that lets users configure various features of EC2 hosts on multiple Linux distributions without writing any shell scripts themselves or relying on any configuration management framework beyond cloud-init itself. With the right tooling, it's not as bad as you'd think.
For both scripts, everything interesting is installed via Nix, so there's little reliance on special-casing various distros' built-in package managers.
In both cases, all scripts have to pass ShellCheck to "build". They can't be deployed or committed with obvious parse errors or ambiguities around quoting or typos in variable names.
In the case of the scripts that are tools for developers, the Bash interpreter, coreutils, and all external commands are provided by Nix, which hardcodes their full paths into the scripts. The scripts don't care if you're on Linux or macOS; they don't even care what's on your PATH (or if it's empty). They embrace "modern" Bash features and use whatever CLI tools provide the most readable interface.
Is it my favorite language? No. But it often has the best ROI, and portability and most gotchas are handled pretty well if you know which tools to use, especially if your scripts are simple.
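The ShellCheck gate mentioned above mostly catches quoting ambiguities. A minimal sketch of the classic word-splitting bug (SC2086) that such a gate rejects, using a made-up filename:

```shell
# Unquoted expansion (ShellCheck SC2086) word-splits on whitespace;
# quoting preserves the single argument. The filename is hypothetical.
file="my report.txt"

count_args() { echo "$#"; }

unquoted=$(count_args $file)    # splits into 2 words
quoted=$(count_args "$file")    # stays 1 word

echo "unquoted=$unquoted quoted=$quoted"
```

Passing the unquoted form to `rm` or `cp` is how scripts that "worked for years" break the first time a path contains a space, which is why gating commits on ShellCheck pays for itself quickly.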
Agreed. The shell is great for chaining together atomic operations on plain text; that is to say, it is great for one-liners doing that. The main reason probably isn't that it all operates on plain text but how easy it makes starting processes, process substitution, redirections, etc.
As soon as you have state accumulating somewhere, branching or loops it becomes chaotic too quickly.
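One concrete example of that chaos: piping into a `while read` loop runs the loop in a subshell, so any state accumulated inside it is silently discarded. A small sketch of the pattern that avoids it (the input lines are made up):

```shell
# Accumulating state across a loop. `cmd | while read ...` would run the
# loop body in a subshell and lose the counter; redirecting input into
# the loop keeps it in the current shell.
count=0
while read -r line; do
  count=$((count + 1))
done <<EOF
alpha
beta
gamma
EOF
echo "$count"
```

The broken pipe variant produces no error at all, just a counter that is still zero afterwards, which is exactly the "chaotic" failure mode described above.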
I generally use AWK as my scripting language, or often just write the whole thing directly in AWK. It doesn't change, is always installed on all POSIX platforms, easily interfaces with the command line, and is a small, easy-to-learn language.
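A small example of the kind of task meant here, summing a column of whitespace-separated input with nothing but POSIX awk (the data is made up):

```shell
# Sum the second column of whitespace-separated input -- the sort of
# small, POSIX-portable job AWK handles in one expression.
total=$(printf '%s\n' 'a 1' 'b 2' 'c 3' | awk '{ sum += $2 } END { print sum }')
echo "$total"
```

The same awk program runs unchanged on any POSIX system, which is the stability argument being made.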
amterp|1 month ago
[0] https://github.com/amterp/rad
oguz-ismail2|1 month ago
kh_hk|1 month ago
g947o|1 month ago
pzmarzly|1 month ago
- when ls started quoting filenames with spaces (add -N)
- when perl stopped being installed by default in CentOS and AlmaLinux (had to add dnf install -y perl)
- when egrep alias disappeared (use grep -E)
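The egrep case from the list above is the easiest to fix in place: `grep -E` is the portable spelling of the same extended-regex matching. A tiny sketch with made-up input:

```shell
# Portable replacement for the removed `egrep` alias: use `grep -E`.
matches=$(printf 'cat\ndog\ncow\n' | grep -E '^c(at|ow)$' | wc -l)
echo "$matches"
```

The other two items in the list are the same category of drift: behavior that scripts silently depended on until a distro update changed the default.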
norir|1 month ago
greener_grass|1 month ago
The best way is a scripting language with locked-down dependency spec inside the script. Weirdly .NET is leading the way here.
gf000|1 month ago
lelanthran|1 month ago
I just don't see the advantages.
dsherret|1 month ago
tracker1|1 month ago
frizlab|1 month ago
[0] https://github.com/xcode-actions/swift-sh
pxc|1 month ago
pjmlp|1 month ago
sureglymop|1 month ago
wmwragg|1 month ago
camilomatajira|1 month ago