First impression is that I wouldn't go near it until it settles down a bit — it was on v1.7 in May and now on version 3 already.
Great that it's under active development, but changes to my scripts are infrequent and slow. I wouldn't want to be touching an old script on a server somewhere saying "How do I do X in ZX again?" and everything I find online is now for v15 while I'm still on v3.
Maybe if it had plans around an LTS version I'd take a look at that stage.
To be honest though, I like doing things in bash. You can get pretty far & it's 100% portable. If something's too complicated for bash, it's probably a sign it shouldn't be just a simple CLI script anymore, in my experience.
If you wrote your script against v1.7 and now they're on v3, you can still run your script. The v1.7 source and binary haven't disappeared. It's still available and works exactly the same as when you wrote your script.
As annoying as having to write "await" in front of everything is (there should probably be a variant of $ that's synchronous), the ability to effortlessly implement parallelism is something that all other common scripting (and many "full") languages seem to lack.
Even Go, which prides itself on its good support for parallelism, fails once you actually want to get results back.
If this were paired with some way to also display the status/output of multiple commands in parallel, this would be the ultimate toolkit to write scripts.
If you look at apt, for example: Does it really have to wait for the last package to be downloaded before installing the first? Is there a good reason, on modern SSD-based computers, to not parallelize the installation of multiple packages that don't depend on each other?
> Even Go, which prides itself on its good support for parallelism, fails once you actually want to get results back.
It's doable if you're willing to build a very thin abstraction over whatever you're trying to parallelize, keeping in mind all possible edge cases [1] and how you'd like to handle them.
A generic `run N things in parallel and collect their results` function would be nice, but would probably end up being quite unwieldy in comparison to just writing a purpose-specific function for exactly what you need.
[1] - Error handling, timeouts/cancellation, maximum amount of processes running in parallel, wait-for-full-join vs. returning results as they appear, introspection of currently running subordinates, subordinate output streaming/buffering/logging, ... A lot of things to consider when you're trying to build something universal, but that are easy to solve/ignore when you're building something purpose-specific with known constraints.
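For illustration, here is a minimal JavaScript version of that generic helper which handles only one item from the footnote's list (a cap on how many tasks run at once) and deliberately ignores the rest (errors, timeouts, streaming). Every name here is made up.

```javascript
// Run an array of async task functions with at most `limit` in flight,
// returning results in task order regardless of completion order.
async function runLimited(tasks, limit) {
  const results = new Array(tasks.length);
  let next = 0;
  async function worker() {
    while (next < tasks.length) {
      const i = next++;            // claim the next task index (single-threaded, no race)
      results[i] = await tasks[i]();
    }
  }
  // Start at most `limit` workers; each pulls tasks until none remain.
  await Promise.all(Array.from({ length: Math.min(limit, tasks.length) }, worker));
  return results;
}

// Usage sketch:
const tasks = [1, 2, 3, 4].map(n => async () => n * 10);
runLimited(tasks, 2).then(r => console.log(r)); // [ 10, 20, 30, 40 ]
```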
Given Promises and callbacks have always been pervasive in JS—that it’s always been heavily asynchronous—I can’t imagine thinking async/await was a mistake. It’s a drastic ergonomic improvement over both, and it also drastically improves debugging and error stacks.
Adding it to a language where IO is typically blocking and that has other ergonomic concurrency mechanisms, that I can imagine being somewhat controversial.
It would be good if async weren’t infectious, but as a sibling comment said that was already the case with Promises and callbacks.
It would maybe also be better if concurrency was the default behavior and keeping a reference to a promise was the explicit syntax, but I could see downsides to that. It potentially hides/encourages excessive use of expensive calls. And implicit concurrency is a potential source of many bugs.
I haven't looked into Zx yet, but at least for JavaScript that "what color is your function" article also applied before they added the syntactic sugar. If your function was async via a direct callback then the calling function would have to accept a callback too.
Those are small examples but for more complex scripts I expect you'd mostly have synchronous code, for example to process strings, validate input, etc. and the "await" will become the exception rather than the norm.
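A tiny sketch of that coloring point: once a function takes a callback, every caller up the chain must take one too, and the `await` spelling carries the same "color" with better ergonomics. The function names here are hypothetical.

```javascript
// Callback style: the "color" spreads upward through every caller.
function readConfigCb(cb) {
  setImmediate(() => cb(null, { retries: 3 }));
}
function startCb(cb) {               // forced to accept a callback as well
  readConfigCb((err, cfg) => cb(err, cfg && cfg.retries));
}

// The async/await spelling of the same chain; both functions are still
// "async-colored", it is just far easier to read and debug.
async function readConfig() {
  return { retries: 3 };
}
async function start() {
  const cfg = await readConfig();
  return cfg.retries;
}

startCb((err, n) => console.log('callback:', n)); // callback: 3
start().then(n => console.log('await:', n));      // await: 3
```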
In my first job I gave a try at using Python for scripts.
It was a terrible idea, as I now had 3 problems instead of 1:
1 - Writing scripts, plus:
2 - Installing Python in every host that needed to run those scripts
3 - Keeping Python and the necessary dependencies up to date on each host / container.
(2) and (3) are trivial when done just once on your own computer, but end up being a huge time sink for larger heterogeneous environments with (possibly) network and security barriers. Moreover, if you are using containers, installing Python will make images unnecessarily large.
Granted, I'm not a JS person, but I imagine one will have the same issues.
Even though bash has its shortcomings, I really appreciate how low maintenance it is as long as you keep scripts small and sane.
That's why Perl is my go-to choice for glue scripts. It's almost always installed by default, it has stayed consistent for decades without any breaking changes, its regex support is by far the best around, and it's almost as good as the shell at gluing commands together.
JS is marginally better than Python for this, because tools for runtime and package version management are better for Node than for Python (proper lockfiles, local package folders, npm bundled with Node for installation, etc.).
Still, zx isn't as nice as Ruby, which has backticks for calling the shell as part of the language. So, you don't have to have a separate package installed, just the Ruby runtime.
But Perl is even better! Just like Ruby, it comes with backtick shell execution built-in, but unlike Ruby, Perl is ubiquitous and pretty strict about backward compatibility. So your scripts will just run on all machines you have, no matter what distro you use and how old the machine is (within reason).
Luckily today you can sidestep many of these issues. You can bundle a copy of Python + modules into a single file with pex and similar open source tooling, and get a flat file you just need to copy to your hosts.
Bash just sort of happened to be the lowest common denominator in most unix-style environments, because it just sorta ended up being the default. Unix ended up being this different thing where each distro seems to be kind of unique, and on top of that they are very easy to customize. So while on one distro you can assume a particular Python version, on the next distro Python is totally optional. Mix in containers and docker images and you have a new level of "what may or may not be there". Ah, but you may say "oh just use tech XYZ", and that may have the exact same issue as using Python in its place. So you either "deal with it" and install what is needed and keep it up to date, or you "play the LCD game": use only the bare minimum and hope you can build something useful enough in bash when another language may be more appropriate.
I'm having a hard time believing that Google uses this much internally. Is github.com/google just a place where Googlers can put personal projects to get additional exposure?
I was under the impression that `github.com/google` was largely projects being worked on by googlers where the code is owned by google, but the code isn't always being used by google. Say 20% time work, side projects built using google resources, etc.
The copyright of the things Googlers build on their own time is usually owned by Google, and so it ends up on the Google org if they want it on GitHub. They usually have a disclaimer like "this is not an official Google product..."
I suppose it's easy to poke fun at combining the "best" of shell scripting and JS. But the examples of using await with child processes and pipes are pretty nice, and less verbose than the Perl or Python equivalents. It does seem well designed for anything where you're orchestrating parallel runs of commands.
For Perl, there is now Future::AsyncAwait.[0] Mojolicious has already adopted it, so things are moving forward at the speed of magnetic pole shift. :)
[0] https://metacpan.org/pod/Future::AsyncAwait
What I don’t like about backticks in Ruby is that they “ignore” errors in commands you run. It’s up to the program author to remember to check $? for the last executed command’s exit status. And guess how many times the average Ruby script using this feature implements error handling? Usually it’s totally forgotten.
To be safe, abstractions that make it easy to shell out must also:
- escape all variable interpolations by default using “shellwords” or similar.
- throw exceptions on error by default.
Backticks in Ruby are very easy to use, but aren’t safe.
Zx is good if JavaScript is the only language you know. If you have the ability to learn other programming languages, Python with packages like python-shell and Plumbum might be a better choice for shell scripting tasks that outgrow shell.
- Python code with python-shell is easier to read and write
- No need to carry async/await keywords through the code
- Powerful plain text manipulation in the stdlib
https://pypi.org/project/python-shell/
https://plumbum.readthedocs.io/en/latest/
Maybe, but the main advantage of `zx` (at least for me) is that I don't have to manage another language environment. I can keep everything contained within Node :)
Maybe but Python is sloooow and its type annotation situation is pretty poor.
I would recommend using Deno. It has the following advantages:
* Uses Typescript which gives you a great static typing system.
* Runs via V8 so it's about a million times faster than Python.
* Single statically linked binary with no project setup files required (package.json/requirements.txt) makes it about a million times easier to deploy than either Node or Python, especially on Windows.
* You can still use third party libraries even without a project file and you get IDE integration.
It's the clear winner at this point. Someone has even made a version of zx for it: https://deno.land/x/[email protected]
I feel like the await keyword everywhere is a bit verbose. One might use something like the gpp preprocessor (hopefully the macros could be defined in a separate file):
https://gist.github.com/zandaqo/93004fb265146a95aadb28ec851a...
After reading about various vulnerabilities which are the direct result of backwards JS language features - most recently prototype pollution - I’d be hesitant to use JS for writing scripts.
Maybe I’m just being paranoid.
This looks horrible to me. The mixing of bash commands in JavaScript looks absolutely terrible, and running node itself for scripting is a total mess (node_modules, versions).
I usually go with bash + set -euxo pipefail and if it becomes too complicated I switch to Python.
Go/Rust are the perfect tools for distributing a single binary. Where in the world does JS fit in?
Would you feel better if they called version three v1.9 instead?
IMO, it is pretty much the best-of-all-worlds language that is still accessible. (Meaning not exotic like Haskell or Erlang..)
The concurrency is beautifully painless.
Oh yeah, right, because the first thing that comes to mind when writing bash scripts is "I sure wish there was more 'await' noise in all this code!"
Anyways: 'async' in the Python and Javascript sense was a mistake, future generations will think we were insane to adopt it.
https://journal.stuffwithstuff.com/2015/02/01/what-color-is-...
If you want the exact opposite of this (that is, to use JS from the shell), try https://bashojs.org
Not sure how I feel about needing Node.JS to run shell scripts…
That is pretty subjective
https://github.com/babashka/babashka