I'd rather that most devs don't touch that signal. Using that binding and having a GUI or CLI program keep hanging because the dev screwed up the cleanup is a real pain. And someone writing a Bash script is highly likely to do something "very clever" with that signal to make my life harder.
Or if you're going to do something with it, at least make it clear you're trolling me. Show me a text ad that forces me to choose my favorite Korean boy band before I can exit, or something in that vein.
I'm working on a book about Bash scripting which is currently in the review phase, and it includes most of these. For graceful exit I recommend `trap cleanup EXIT` rather than specifically trapping SIGINT, mostly because the special EXIT trap is triggered no matter why the script terminates. I wouldn't normally recommend pulling variables out into a separate file until those variables are used by more than one script. I'd be interested in the rationale for why that helps refactoring.
Yes... but do not take that as a "cargo cult script shebang".
If you're a sysadmin writing a script for a company with 2k Linux servers that has a policy of "we only use Linux version Foo X", and there is no bash on the system other than /bin/bash (no bash compiled by hand, no multiple versions of bash, etc.)... then portability via "env" does not make sense.
If you have two laptops and a Raspberry Pi at home, with Debian or Arch, and you write a script for yourself... then portability via "env" does not make sense.
And last but not least... using env is slower.
See:
strace -fc /bin/bash -c ':'
Vs
strace -fc /usr/bin/env bash -c ':'
On my system, that's 92 syscalls and 3 errors, Vs 152 syscalls and 8 errors.
Just to start processing.
Different levels of system bloat (environment, library paths, etc.) can give different results than my example.
And as others said... if you're not using GNU/bash syntax and the script is really simple, the best for portability is to go with /bin/sh.
strace -fc /bin/sh -c ':'
On my system, 41 syscalls and 1 error... (and less RAM, CPU and fewer page faults).
If you're not using associative arrays, array indexing, non-POSIX builtin options, or other bash extensions... if the script just joins a few commands and variables... it pays off to write it in plain sh, both for portability and for performance.
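For scripts in that "join a few commands and variables" category, common bashisms have straightforward POSIX equivalents; a few illustrative pairs (the values here are made up):

```shell
#!/bin/sh
x=foobar

# [[ $x == foo* ]] (bash)  ->  case globbing (POSIX)
case $x in
  foo*) echo "matches" ;;
esac

# (( i = 1 + 2 )) (bash)  ->  $(( )) arithmetic expansion (POSIX)
i=$((1 + 2))
echo "$i"

# ${x^^} (bash 4+)  ->  tr (POSIX)
printf '%s\n' "$x" | tr '[:lower:]' '[:upper:]'
```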
This is a great list. Also while reading about 'readonly' bash variables I ran across this amazing project which lets you call native functions from bash [0]. My mind is spinning from the possibilities...
Huge +1 to using long-form options in scripts, even if you're the sole maintainer of the script. Also, if you have a command that takes many flags, breaking them out onto new lines can help keep it readable.
1. The "if myfunc" problem -- error checking is skipped. This is specified by POSIX, but it's undesirable.
2. The f() { test -d /tmp && echo "exists" } problem. The exit code of the function is probably not what you expect!
3. the local x=$(false) problem. This happens in all shells because of the definition of $?. (local is a shell builtin with its own exit code, not part of the language) This one is already fixed with shopt -s more_errexit in Oil.
4. the x=$(false) problem. This is a bash problem with command substitution. For example, dash and zsh don't have this problem. Test case:
bash -c 'set -e; x=$(false); echo should not get here'
Newer versions of bash fix it with inherit_errexit. Oil also implements inherit_errexit and turns it on by default! (in Oil, not OSH)
-----
So 1 and 2 relate to the confusing combination of shell functions and errexit.
And 3 and 4 relate to command subs. Oil has fixed these with opt-in OSH shell options (on by default in Oil), but not 1 and 2.
If you know of any other problems, please chime in on the bug!
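Problem 1 can be reproduced in a few lines (the function name `myfunc` is just a placeholder):

```shell
#!/usr/bin/env bash
set -e

myfunc() {
  false               # would abort the script under plain 'set -e' ...
  echo "kept going"   # ... but runs, because the 'if' disables errexit here
}

if myfunc; then
  echo "branch taken"
fi
```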
> 2. The f() { test -d /tmp && echo "exists" } problem. The exit code of the function is probably not what you expect!
The exit code is 0 (assuming /tmp/, stdout, /bin/test and /bin/echo are all working correctly; with /tmp1 it's 1), as expected; is this referencing a bug in sh and/or bash that I've fixed locally and then forgotten about?
(Also, I'm pretty sure it should be:
f() { test -d /tmp && echo "exists"; }
unless the parse error for missing ';' was your point (I haven't bothered to fix that one, but maybe Oil has).)
If it's not on by default, there's a reason. Bash is literally running commands in a shell session. Think of a terminal session. When a command fails, would you want the terminal session to end? That'd be annoying.
Same theory for unset variables. Referencing an undefined variable shouldn't break your session. Why initialize it anyway? If you had to initialize every variable that might not be needed, that's more code to change when you don't end up using it. And you'd have to call the script with A= just to check that A wasn't defined, and in the process you now have A assigned to an empty string, instead of only defaulting to one when called, which uses more memory and execution time.
The pipeline doesn’t die because && and || and parens are seriously helpful for one-liners.
Don’t think of it as a script. Think of it as a script for a shell.
inherit_errexit
If set, command substitution inherits the value of the errexit option,
instead of unsetting it in the subshell environment. This option is
enabled when posix mode is enabled.
So must I presume older versions were already doing that without needing an option set?
Specifying PS4 when using the -x flag can be even more helpful while debugging. The variable PS4 holds the prompt printed before the command line is echoed when the -x option is set, and it defaults to + followed by a space.
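A small demonstration (the prompt string is arbitrary):

```shell
#!/usr/bin/env bash
PS4='+ line ${LINENO}: '
set -x
echo hello
```

The trace on stderr then reads `+ line 4: echo hello` instead of the default `+ echo hello`.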
I almost always write POSIX shell instead of bash for compatibility; it would be nice to see collections of tips and tricks specifically for POSIX shell. I know, for example, that -o pipefail doesn't exist in plain POSIX shell. I wonder what the best practices are when you can't use it.
I also write my scripts to stay /bin/sh compatible. If this is not enough, then a real scripting language should be used, not bash.
But I very much agree that lack of pipefail is painful. If I know that output on the left of the pipe is small, I read it into a variable and then use printf | right part. If the output can be big, I use a helper function to emulate it that I copy-paste.
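The small-output variant described above can be sketched in plain /bin/sh like this (the two commands are placeholders for a real producer and consumer):

```shell
#!/bin/sh
# No pipefail in POSIX sh: run the left side alone first so its exit
# status is visible, then feed the captured output to the right side.
out=$(printf 'b\na\n') || exit "$?"   # placeholder for the real producer
printf '%s\n' "$out" | sort           # the consumer sees the same stream
```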
The problem actually has more to do with the definition of $? than with the set -e behavior itself. And the fact that POSIX specifies that an error on the LHS of && is ignored (a fundamental confusion between true/false and success/error).
The exit code of the function is not what you expect, or the exit code of the subshell is not what you expect.
I made a note of it on the bug ... still thinking about what the solution to that one is.
(The other solutions are inherit_errexit, more_errexit, and a "catch" builtin.)
I generally use them and think that overall, they have more benefits than drawbacks, but the odd time where I run into one of the pitfalls, debugging usually takes a while.
ShellCheck already does a pretty good job of pointing out incorrect variable names, too.
I wonder how much Clippy is to blame for the visual motif used in this comic strip.
Common motifs, elsewhere, for a shell are a dollar sign and an underscore or a greater than sign and an underscore. (The latter is somewhat odd for a shell, given that it more resembles the prompts on Microsoft/IBM command interpreters, and not the PS1 prompts of Unix shells, which are commonly dollar symbols, hashes, or percent signs rather than greater than.)
> [...] it does something very different than sh -x: sh -x will just print out lines, while this stops *before* every single line and lets you confirm that you want to run *that line*
>> you can also customize the prompt with set -x
export PS4='+(${BASH_SOURCE}:${LINENO}) '
set -x
With a markdown_escape function, could this make for something like a notebook with ```bash fenced code blocks with syntax highlighting?
I like the notion of 'set -e' and at the same time I hate it.
First, because it behaves inconsistently across shells/versions [1] and second, because it doesn't always work as expected. For example, when you depend on the 'set -e' behavior in a function and call the function from within a condition, the 'set -e' has no effect at all. So you'd better not count on 'set -e'.
But don't expect me to follow my own advice, as not using 'set -e' isn't a good option either...
Try it out on your shell scripts and let me know what happens :) OSH has the "broken" POSIX/bash behavior to maintain compatibility, while Oil opts you in to the better error semantics.
I thought people shouldn't post anything about bash on HN? The minute you post something about bash, you immediately draw out a whole bunch of folks from the woodwork talking about how bash sucks and should never be used for anything more than 3 or 4 lines, and how they replaced bash with Python or something else, in turn immediately drawing out a bunch of other folks talking about how bash should be replaced with PowerShell and how you can parse objects better...
I had a really fun project earlier in the year prototyping a load testing tool for a blockchain in Bash while 4 other developers wrote a ‘better’ one in Haskell. Bash can get results quickly, although it’s not maintainable! Still, a decent kloc or two of bash with performance results within the sprint.
I had become annoyed by the Python bigots who will tell us about how easy to read their language is because its notation is clean, how all its functionality is "intuitive", how any combination of Python-based Rube-Goldberg Machine systems is the best.
This is useful. I know a guy who didn't guard against an unset variable, so when his script ran rm -rf /$dir-old-back, it removed all directories. The problem is that all his backups were on an external drive mounted in a directory, so he removed all his backups too. A hellish month for him.
I don't understand its popularity. Normally sane people who like unit testing, CICD, SOLID principles, quality tools end up with a bunch of crappy scripts holding everything together. Please avoid.
It's fantastic for simple tasks, which is why so many people use it. It also turns out that many complex tasks can be reduced to a collection of simple tasks, otherwise known as "a bunch of crappy scripts".
I encourage all my teams to avoid built-in CI/CD features and plugins and just script what they want in a Docker container. It ends up being easier to maintain, breaks less often, and is more portable.
I think the only reason people use the shell for scripting is that
ls -lsa /tmp
is simpler to write than
execute(["ls", "-lsa", "/tmp"])
or even
execute("ls -lsa /tmp")
> Normally sane people who like unit testing, CICD, SOLID principles, quality tools end up with a bunch of crappy scripts holding everything together. Please avoid.
[+] [-] exdsq|5 years ago|reply
- Use shellcheck (static analysis/linter) https://www.shellcheck.net/
- Use shunit2 (unit tests) https://github.com/kward/shunit2
- Use 'local' or 'readonly' to annotate your variables
- Trap ctrl+c to gracefully exit (details here https://www.tothenew.com/blog/foolproof-your-bash-script-som...)
- Stick to long-form options for readability (--delete over -d for example)
- #!/usr/bin/env bash > #!/bin/bash for portability
- Consider setting variable values in a file and importing it at the top of your script to improve refactoring
- You will eventually forget how your scripts work - seriously consider if Bash is your best option for anything that needs to last a while!
[+] [-] jancsika|5 years ago|reply
[+] [-] asicsp|5 years ago|reply
* https://mywiki.wooledge.org/BashFAQ
* https://mywiki.wooledge.org/BashGuide/Practices
* https://mywiki.wooledge.org/BashPitfalls
* https://devmanual.gentoo.org/tools-reference/bash/index.html
[+] [-] hivacruz|5 years ago|reply
[+] [-] l0b0|5 years ago|reply
[+] [-] txutxu|5 years ago|reply
[+] [-] gkfasdfasdf|5 years ago|reply
[0]: https://github.com/taviso/ctypes.sh
[+] [-] corytheboyd|5 years ago|reply
[+] [-] amarshall|5 years ago|reply
[+] [-] mehrdadn|5 years ago|reply
[+] [-] chubot|5 years ago|reply
https://github.com/oilshell/oil/issues/709
[+] [-] a1369209993|5 years ago|reply
[+] [-] asicsp|5 years ago|reply
* https://wizardzines.com/comics/environment-variables
* https://wizardzines.com/comics/brackets-cheatsheet
* https://wizardzines.com/comics/bash-quotes
* https://wizardzines.com/comics/bash-if-statements/
[+] [-] _where|5 years ago|reply
[+] [-] polyrand|5 years ago|reply
[+] [-] kasabali|5 years ago|reply
[+] [-] chinigo|5 years ago|reply
[+] [-] cryptonector|5 years ago|reply
From the bash manual page:
| If a compound command other than a subshell returns a non-zero status because a command failed while -e was being ignored, the shell does not exit.
POSIX says the same thing, so this is true of all POSIX-y shells.
This means you really have to check for errors you care about, and `set -e` is useless. Ugh!
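The quoted rule can be seen directly: under `set -e`, a failure inside a compound command being tested is ignored, and the shell carries on:

```shell
#!/usr/bin/env bash
set -e

# 'false' fails while -e is being ignored (it is part of an 'if' test),
# so neither the { } compound command nor the shell exits:
if { false; echo "ran past false"; }; then
  echo "condition reported success"
fi
echo "still alive"
```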
[+] [-] pzmarzly|5 years ago|reply
You can disable it again with set +x (same goes for +e, +u and afaik +o pipefail)
Also, please use shellcheck - https://www.shellcheck.net/
EDIT: Also, please don't modify scripts while they are running.
[+] [-] 0xmohit|5 years ago|reply
A number of useful debugging tips are listed at <https://wiki.bash-hackers.org/scripting/debuggingtips>.
[+] [-] aidenn0|5 years ago|reply
[+] [-] GolDDranks|5 years ago|reply
[+] [-] _0w8t|5 years ago|reply
[+] [-] sigzero|5 years ago|reply
https://www.shellscript.sh/
There is a Facebook group and I have emailed the author as well with questions. Nice guy.
[+] [-] asicsp|5 years ago|reply
Check out https://freebsdfrau.gitbook.io/serious-shell-programming/
[+] [-] still_grokking|5 years ago|reply
I think this changed lately[1]. No clue where it's implemented, though.
[1] https://www.austingroupbugs.net/view.php?id=789
[+] [-] peterwwillis|5 years ago|reply
[+] [-] m463|5 years ago|reply
cleanup="true"
later:
or, slightly uglier, I like one-line functions:
[+] [-] mehrdadn|5 years ago|reply
[+] [-] cryptonector|5 years ago|reply
[+] [-] Sebb767|5 years ago|reply
[+] [-] chubot|5 years ago|reply
https://news.ycombinator.com/item?id=24740842
[+] [-] loevborg|5 years ago|reply
[+] [-] bewuethr|5 years ago|reply
[+] [-] JdeBP|5 years ago|reply
* https://www.redbubble.com/i/sticker/zsh-by-zoerab/20363330.E...
* https://commons.wikimedia.org/wiki/File:Bash_Logo_black_and_...
* https://commons.wikimedia.org/wiki/File:PowerShell_5.0_icon....
* https://icon-library.net/icon/commands-icon-5.html
* https://icon-library.com/icon/bash-icon-10.html
* https://dribbble.com/shots/6101482-Bash-Automation
[+] [-] pferde|5 years ago|reply
http://redsymbol.net/articles/unofficial-bash-strict-mode/
EDIT: The comic strip would be better in three rows of two panels - a row for each set flag.
[+] [-] westurner|5 years ago|reply
> TIL that you can use the "DEBUG" trap to step through a bash script line by line
[+] [-] arendtio|5 years ago|reply
[1] https://www.in-ulm.de/~mascheck/various/set-e/
[+] [-] chubot|5 years ago|reply
[+] [-] harisund|5 years ago|reply
[+] [-] michaelcampbell|5 years ago|reply
[+] [-] exdsq|5 years ago|reply
[+] [-] heresie-dabord|5 years ago|reply
But then came the PowerShell people...
[+] [-] major505|5 years ago|reply
[+] [-] x87678r|5 years ago|reply
[+] [-] peterwwillis|5 years ago|reply
[+] [-] chubot|5 years ago|reply
[+] [-] amelius|5 years ago|reply
> Normally sane people who like unit testing, CICD, SOLID principles, quality tools end up with a bunch of crappy scripts holding everything together. Please avoid.

Totally agree!
[+] [-] tpoacher|5 years ago|reply
[+] [-] JeremyBanks|5 years ago|reply