I find it easier to understand in terms of the Unix syscall API. `2>&1` literally translates as `dup2(1, 2)`, and indeed that's exactly how it works. In the classic unix shells that's all that happens; in more modern shells there may be some additional internal bookkeeping to remember state. Understanding it as dup2 means it's easier to understand how successive redirections work, though you also have to know that redirection operators are executed left-to-right, and traditionally each operator was executed immediately as it was parsed, left-to-right. The pipe operator works similarly, though it's a combination of fork and dup'ing, with the command being forked off from the shell as a child before processing the remainder of the line.
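A quick way to see the left-to-right, dup2-at-parse-time behaviour (a sketch; `ls /nonexistent` just stands in for any command that writes to stderr):

```shell
# Redirections are processed left to right, each like an immediate dup2().
# Order 1: stderr is pointed at the *current* stdout first, so the error
# survives the later redirection of stdout:
ls /nonexistent 2>&1 >/dev/null   # error message still visible

# Order 2: stdout already points at /dev/null when stderr is duplicated
# from it, so both streams are discarded:
ls /nonexistent >/dev/null 2>&1   # silent
```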
Though, understanding it this way makes the direction of the angled bracket a little odd; at least for me it's more natural to understand dup2(2, 1) as 2<1, as in make fd 2 a duplicate of fd 1, but in terms of abstract I/O semantics that would be misleading.
Another fun consequence of this is that you can initialize otherwise-unset file descriptors this way:
$ cat foo.sh
#!/usr/bin/env bash
>&1 echo "will print on stdout"
>&2 echo "will print on stderr"
>&3 echo "will print on fd 3"
$ ./foo.sh 3>&1 1>/dev/null 2>/dev/null
will print on fd 3
It's a trick you can use if you've got a super chatty script or set of scripts, you want to silence or slurp up all of their output, but you still want to allow some mechanism for printing directly to the terminal.
The danger is that if you don't open it before running the script, you'll get an error:
$ ./foo.sh
will print on stdout
will print on stderr
./foo.sh: line 5: 3: Bad file descriptor
This is probably one of the reasons why many find POSIX shell languages to be unpleasant. There is too much syntactic sugar abstracting away the underlying mechanisms, to the point that we don't get it unless someone explains it. Compare this with Lisps, for example. There may be only one branching construct and one looping construct, yet they provide more options than regular programming languages by using macros. And this fact is not hidden from us: you know that all of them ultimately expand to a limited number of special forms.
The shell's syntactic sugar also has some weird gotchas. The &2>&1 question and its answer are a good example of that. You're just trading one complexity (low-level knowledge) for another (a long list of syntax rules). Shell languages break the rule of not letting abstractions get in the way of insight and intuition.
I know that people will argue that shell languages are not programming languages, and that terseness is important for the former. And yet we still have people complaining about it. This is the programmer ego and the sysadmin ego clashing with each other. After all, nobody is purely just one of those two.
And just like dup2 allows you to duplicate into a brand new file descriptor, shells also allow you to specify bigger numbers so you aren’t restricted to 1 and 2. This can be useful for things like communication between different parts of the same shell script.
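A minimal sketch of that technique, using fd 3 as a side channel so a function can still reach the original stdout while its regular output is captured (the function and variable names here are made up for illustration):

```shell
#!/usr/bin/env bash
# Save the script's original stdout on fd 3.
exec 3>&1

do_work() {
    echo "progress: still going" >&3   # side channel: the original stdout
    echo "final result"                # this is what the caller captures
}

result=$(do_work)          # capture stdout; fd 3 bypasses the capture
echo "captured: $result"
exec 3>&-                  # close the side channel when done
```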
> The pipe operator works similarly, though it's a combination of fork and dup'ing
Any time the shell executes a program it forks, not just for redirections. Redirections will use dup before exec on the child process. Piping will be two forks and obviously the `pipe` syscall, with one process having its stdout dup'd to the input end of the pipe, and the other process having its stdin dup'd to the output end.
Honestly, I find the Bash manual to be excellently written, and it's probably available on your system even without an internet connection. I'd always go there rather than rely on Stack Overflow or an LLM.
> Though, understanding it this way makes the direction of the angled bracket a little odd; at least for me it's more natural to understand dup2(2, 1) as 2<1, as in make fd 2 a duplicate of fd 1, but in terms of abstract I/O semantics that would be misleading.
Since they're both just `dup2(1, 2)`, `2>&1` and `2<&1` are the same. However, yes, `2<&1` would be misleading because it looks like you're treating stderr like an input.
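A sketch showing the two spellings really do behave identically:

```shell
# Both forms perform the same dup2(1, 2); the arrow is only a mnemonic:
ls /nonexistent 2>&1 | cat    # the error travels through the pipe
ls /nonexistent 2<&1 | cat    # identical behaviour, odd-looking spelling
```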
The comments on Stack Overflow take the words right out of my mouth, so I'll just copy and paste here:
> but then shouldn't it rather be &2>&1?
> & is only interpreted to mean "file descriptor" in the context of redirections. Writing command &2>&1 is parsed as command & and 2>&1
That's where all the confusion comes from. I believe most people can intuitively understand > is redirection, but the asymmetrical use of & throws them off.
Interestingly, PowerShell also uses 2>&1. Given a once-in-a-lifetime chance to redesign the shell, out of all the Unix relics, they chose to keep (borrow) this.
Although PowerShell borrows the syntax, it (as usual!) completely screws up the semantics. The examples in the docs [1] show first setting descriptor 2 to descriptor 1 and then setting descriptor 1 to a newly opened file, which of course is backwards and doesn't give the intended result in Unix; e.g. their example 1:
dir C:\, fakepath 2>&1 > .\dir.log
Also, according to the same docs, the operators "now preserve the byte-stream data when redirecting output from a native command" starting with PowerShell 7.4, i.e. they presumably corrupted data in all previous versions, including version 5.1 that is still bundled with Windows. And it apparently still does so, mysteriously, "when redirecting stderr output to stdout".
The way I read it, the prefix to the > indicates which file descriptor to redirect, and the default, when no file descriptor is indicated, is stdout.
So, >foo is the same as 1>foo
If you want to get really into the weeds: appending to a file descriptor makes no sense (or rather, truncating when duplicating a file descriptor makes no sense), so there is nothing sensible for 2>>&1 to mean; at least in Bash it's simply a syntax error. Why this is the case is probably down to a choice made 50 years ago in sh, although I'd be surprised if it was codified anywhere, or relied upon in scripts.
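The first point, that the fd prefix defaults to 1, is easy to check (file names made up for illustration):

```shell
cd "$(mktemp -d)"
echo hello >implicit.txt     # fd 1 implied
echo hello 1>explicit.txt    # fd 1 spelled out
cmp implicit.txt explicit.txt && echo "identical"
```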
I agree that it adds to the confusion, but note that `file1>file2` also wouldn’t work (in the sense of “send the output currently going to file1 to file2”) and isn’t symmetrical in that sense as well. Or take `/dev/stderr>/dev/stdout` as the more direct equivalent.
It's really jarring to see this wave of nostalgia for "the good old days" appear since ~2025. Suddenly these rose tinted glasses have dropped and everything before LLM usage became ubiquitous was a beautiful romantic era of human collaboration, understanding and craftsmanship.
I still acutely remember the gatekeeping and hostility of peak stack overflow, and the inanity of churning out jira tickets as fast as possible for misguided product initiatives. It's just wild yo
> It feels so much better to ask humans a question then the machine
I could not disagree more! With pesky humans, you have all sorts of things to worry about:
- is my question stupid? will they think badly of me if i ask it?
- what if they don't know the answer? did i just inadvertently make them look stupid?
- the question i have is related to their current work... i hope they don't see me as a threat!
and on and on. asking questions in such a manner as to elicit the answer, without negative externalities, is quite the art form, as i'm sure many stack overflow users will tell you. many word orderings trigger a 'latent space' which activates the "umm, why are you even doing this?", with the implication being "you really are stupid!", totally useless to the question-asker and a much more frustrating time-waster than even the most moralizing LLM.
with LLMs, you don't have to play these 'token games'. you throw your query at it, and irrespective of the word order, word choice, or the nature of the question - it gives you a perfectly neutral response, or at worst politely refuses to answer.
Great if you know where to look, but most people who ask themselves the question don't know they have to look up the bash manual in the "redirection" section.
The usual thing (before LLMs) is to Google the question, but for the question to appear in Google, someone has to ask it first, and here we are.
Also the Stackoverflow answers give different perspectives, context, etc... rather than just telling you what it does, which is useful to someone unfamiliar with how redirections work. As I said, someone who doesn't know about "2>&1" is unlikely to be an expert given how common the pattern is, so a little hand holding doesn't hurt.
> At least allow us to use names instead of numbers.
You can for the destination. That's the whole reason you need the "&": to tell the shell the destination is not a named file (which itself could be a pipe or socket). And by default you don't need to specify the source fd at all. The intent is that stdout is piped along but stderr goes directly to your tty. That's one reason they are separate.
And for those saying "<" would have been better: that is used to read from the RHS and feed it as input to the LHS so it was taken.
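A sketch of the file-named-1 pitfall that the "&" exists to avoid:

```shell
cd "$(mktemp -d)"
ls /nonexistent 2>1    # without '&', "1" is a filename here
cat 1                  # the error message landed in a file literally named "1"
```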
Which is about the same as `2>&1` but with a friendlier name for STDOUT. And this way `2> /dev/stdout`, with the space, also works, whereas `2> &1` doesn't, which confuses many. But its behavior isn't exactly the same and might not work in all situations.
And of course I wish you could use a friendlier name for STDERR instead of `2>`
I quite like how archaic it is. I am turned off by a lot of modern stuff. My shell is nice and predictable. My scripts from 15 years ago still work just fine. No, I don't want it to get all fancy, thanks.
They're more like capabilities or handles than pointers. There's a reason many systems in Rust land use handles (indices into a table of objects) in the absence of pointer arithmetic.
In the C API of course there's symbolic names for these. STDIN_FILENO, STDOUT_FILENO, etc for the defaults and variables for the dynamically assigned ones.
> At least allow us to use names instead of numbers
Many people probably think in terms of "fd 0" and "fd 1" instead of "standard in" and "standard out", but should you wish to use names at least on modern Linux/BSD systems do:
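For instance, on such systems the per-process device files can stand in for the numbers (a sketch, assuming Linux/BSD-style /dev/stdout and /dev/stderr):

```shell
echo "to stderr, by name" >/dev/stderr   # same effect as >&2
ls /nonexistent 2>/dev/stdout            # roughly 2>&1, spelled with a name
```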
Process substitution and calling it file redirect is a bit misleading because it is implemented with named pipes which becomes relevant when the command tries to seek in them which then fails.
Also the reason why Zsh has an additional =(command) construct which uses temporary files instead.
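A sketch of why seeking fails: the substitution is backed by a pipe, not a regular file (bash shown; zsh's =(command) would hand the command a regular, seekable temp file instead):

```shell
# <(cmd) expands to a /dev/fd/N path whose backing object is a pipe:
echo <(true)          # prints something like /dev/fd/63
[ -p <(true) ] && echo "it is a pipe, so seeking on it fails"
```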
It's a shame that unix tools don't support file descriptors better. The ability to pass a file (or stream, or socket etc) directly into a process is so powerful, but few commands actually support being used this way and require filenames (or hostnames, etc) instead. Shell is so limited in this regard too.
It would be great to be able to open a socket in bash[^1] and pass it to another program to read/write from without having an extra socat process and pipes running (and the buffering, odd flush behaviour, etc.). It would be great if programs expected to receive input file arguments as open fds, rather than providing filenames and having the process open them itself. Sandboxing would be trivial, as would understanding the inputs and outputs of any program.
It's frustrating to me because the underlying unix system supports this so well, it's just the conventions of userspace that get in the way.
[^1]: I know about /dev/tcp, but it's very limited.
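Still, /dev/fd gets you partway there today; a sketch with made-up file names, handing an already-open descriptor to a tool that only accepts filenames:

```shell
printf 'one\ntwo\n' > /tmp/demo-input.$$
exec 3< /tmp/demo-input.$$   # open the file on fd 3 in the shell
wc -l /dev/fd/3              # the "filename" is really our open descriptor
exec 3<&-                    # close fd 3
rm -f /tmp/demo-input.$$
```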
Claude’s answer, which is the only one that clicked for me:
Normally when you do something like command > file.txt, you’re only capturing the normal output — errors still go to your screen.
2>&1 is how you say: “send the error pipe into the same place as the normal output pipe.”
Breaking it down without jargon:
• 2 means “the error output”
• > means “send it to”
• &1 means “wherever the normal output is currently going” (the & just means “I’m referring to a pipe, not a file named 1”)
> • 2 means “the error output” • > means “send it to” • &1 means “wherever the normal output is currently going” (the & just means “I’m referring to a pipe, not a file named 1”)
If you want it with the correct terminology:
2 means "file descriptor 2", > means "redirect the former to the latter", and &1 means "file descriptor 1" (and not a file named "1")
As someone who uses LLMs to generate, among other things, Bash scripts, I recommend shellcheck too. Shellcheck catches lots of things and will really make your Bash scripts better. And if for whatever reason there's an idiom you use all the time that shellcheck doesn't like, you can simply configure shellcheck to ignore that one.
Somewhat off topic, but related: I worked at this place that made internet security software. It ran on Windows, and on various flavors of Unix.
One customer complained about our software corrupting files on their hard disk. Turns out they had modified their systems so that a newly-spawned program was not given a stderr. That is, it was not handed 0, 1, and 2 (file descriptors), but only 0 and 1. So whenever our program wrote something to stderr, it wrote to whatever file had been the first one opened by the program.
We talked about fixing this, briefly. Instead we decided to tell the customer to fix their broken environment.
Humans used this combination extensively for decades too. I'm not aware of any other simple way to grep both stdout and stderr from a process (grep, or save to file, or pipe in any other way).
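E.g., a self-contained sketch of the pattern:

```shell
# Merge stderr into stdout before the pipe so grep sees both streams:
{ echo "out: ok"; echo "err: failed" >&2; } 2>&1 | grep failed
```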
I found the explanation useful, about "why" it is that way. I didn't realize the & before the 1 means to tell it is the filedescriptor 1 and not a file named 1.
I've also found LLMs seem to love it when calling out to tools; I suppose for them, having stderr messages interspersed in their input doesn't make much difference.
I regularly refer to [the unix shell specification][1] to remember the specifics of ${foo%%bar} versus ${foo#bar}, ${parameter:+word} versus ${parameter:-word}, and so on.
It also teaches how && and || work, their relation to [output redirection][3] and [command piping][2], [(...) versus {...}][4], and tricky parts like [word expansion][5], even a full grammar. It's not exciting reading, but it's mostly all there, and works on all POSIXy shells, e.g. sh, bash, ksh, dash, ash, zsh.
This is why I dislike sites like Stack Overflow. If I needed a quick lookup, the v7 manpage explains it better; the v6 doesn't have it, but that's because Unix didn't have the Bourne shell until V7.
I understood the point of the question to be that how shells work seems very context-driven. An & here means something different from an & there.
IFS=\| read A B C <<< "first|second|third"
the read is executed and the IFS assignment is local to the one command
echo hello this
will print "hello this", even though in the assignment above the space was important
an & at the end of a line is run the task background and in the middle of the redirect isn't.
All these things can be learned, but it's hard to explain the patterns, I think.
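Putting the IFS example together as a runnable sketch (variable names made up):

```shell
IFS=\| read A B C <<< "first|second|third"
echo "$B"                   # "second": the IFS assignment applied only to read
read X Y <<< "alpha beta"   # default whitespace IFS applies again here
echo "$Y"                   # "beta": IFS was never changed for the script
```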
It means redirect file descriptor 2 to the same destination as file descriptor 1.
Which actually means that an underlying dup2 operation happens in this direction:
2 <- 1 // dup2(1, 2)
The file description at [1] is duplicated into [2], thereby [2] points to the same object. Anything written to stderr goes to the same device that stdout is sending to.
The notation follows I/O redirections: cmd > file actually means that a descriptor [n] is first created for the open file, and then that descriptor's description is duplicated into [1].
I enjoyed the commenter asking “Why did they pick such arcane stuff as this?” - I don’t think I touch more arcane stuff than shell, so asking why shell used something that is arcane relative to itself is to me arcane squared.
I love myself a little bit of C++. A good proprietary C++ codebase will remind you that people just want to be wizards, solving their key problem with a little bit of magic.
I've only ever been tricked into working on C++...
I know the underlying call, but I always see the redirect symbols as indicating that "everything" on the big side of the operator fits into a small bit of what is on the small side of the operator. Like a funnel for data. I don't know the origin, but I'm believing my fiction is right regardless. It makes <(...) make intuitive sense.
The comment about "why not &2>&1" is probably the best one on the page, with the answer essentially being that it would complicate the parser too much / add an unnecessary byte to scripts.
> I am thinking that they are using & like it is used in c style programming languages. As a pointer address-of operator. [...] 2>&1 would represent 'direct file 2 to the address of file 1'.
I had never made the connection of the & symbol in this context. I think I never really understood the operation before, treating it just as a magic incantation but reading this just made it click for me.
No, the shell author needed some way to distinguish file descriptor 1 from a file named "1" (note that 2>1 means to write stderr to the file named "1"), and '&' was one of the few available characters. It's not the address of anything.
To be consistent, it would be &2>&1, but that makes it more verbose than necessary and actually means something else -- the first & means that the command before it runs asynchronously.
Always wondered how the parser managed the ambiguity between & for file descriptors and & to start background tasks. (And without a good mental model, I kept forgetting where to put the & correctly in redirects)
Treating ">&" as a distinct operator actually makes an elegant solution here. I like the idea.
So if i happen to know the numbers of other file descriptors of the process (listed in /proc), i can redirect to other files opened in the current process? 2>&1234? Or is it restricted to 0/1/2 by the shell?
Would probably be hard to guess since the process may not have opened any file once it started.
I understand how this works, but wouldn’t a more clear syntax be:
command &2>&1
Since the use of & signifies a file descriptor. I get that what this ACTUALLY does is run command in the background and then run `2>&1` on its own, with stderr sent to stdout. That's completely not obvious, by the way.
I always wondered if there ever was a standard stream for stdlog which seems useful, and comes up in various places but usually just as an alias to stderr
While you're still thinking about it, make sure to bookmark the "redirections" section of the manual. [0] Also useful might be the "pipelines" section [1] to remind you of the "|&" operator.
This is one of those places where Bash diverges from POSIX. The standard says `echo &>/dev/null' is two commands, namely `echo &' and `>/dev/null', but Bash interprets it as redirect both stdout and stderr of `echo' to `/dev/null' both in normal and POSIX mode.
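A sketch of both bash-isms side by side (`|&` from the pipelines section is the pipeline analogue of `&>`):

```shell
ls /nonexistent &>/dev/null    # bash: both stdout and stderr to /dev/null
ls /nonexistent |& grep such   # bash: |& is shorthand for 2>&1 |
```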
Cool tip - never knew this. I always figured piping to `tee` is a must in order to view-and-save command output at the same time. Turns out I can do "command > >(tee file.txt)" instead!
emmelaich|4 days ago
Which is lost when using more modern languages, or ones foreign to Unix.
ontouchstart|4 days ago
https://man7.org/linux/man-pages/man2/dup.2.html
and
https://man.archlinux.org/man/dup2.2.en
A lot of bots are reading this. Amazing.
manbash|4 days ago
And I also disagree, your suggestion is not easier. The & operator is quite intuitive as it is, and conveys the intention.
[1] https://learn.microsoft.com/en-us/powershell/module/microsof...
xeyownt|4 days ago
You redirect stdout with ">" and stderr with "2>" (a two-letter operator).
If you want to redirect to stdout / stderr, you use "&1" or "&2" instead of putting a file name.
shevy-java|4 days ago
Google search is literally useless for these things these days, for the average Joe.
amelius|4 days ago
File descriptors are like handing pointers to the users of your software. At least allow us to use names instead of numbers.
And sh/bash's syntax is so weird because the programmer at the time thought it was convenient to do it like that. Nobody ever asked a user.
csours|4 days ago
Which means that reading someone else's shell script (or awk, or perl, or regex) is INCREDIBLY inconvenient.
Dylan16807|4 days ago
You can use /dev/stdin, /dev/stdout, /dev/stderr in most cases, but it's not perfect.
themafia|4 days ago
Sure. Here's what that looked like:
https://en.wikipedia.org/wiki/Job_Control_Language
kristopolous|4 days ago
I want to be able to route x independent input and y independent output trivially from the terminal
Proper i/o routing
It shouldn't be hard, it shouldn't be unsolved, and it shouldn't be esoteric
HackerThemAll|4 days ago
What should be the syntax according to contemporary IT people? JSON? YAML? Or just LLM prompt?
DonaldPShimoda|4 days ago
This response is essentially just the second answer to the linked question (the response by dbr) with a bunch of the important words taken out.
And all it cost you to get it was more water and electricity than simply clicking the link and scrolling down — to say nothing of the other costs.
csours|4 days ago
It's very, very easy to get shell scripts wrong; for instance the location of the file redirect operator in a pipeline is easy to get wrong.
vessenes|4 days ago
It redirects STDERR (2) to where STDOUT is piped already (&1). Good for dealing with random CLI tools if you're not a human.
[1]: https://pubs.opengroup.org/onlinepubs/7908799/xcu/chap2.html
[2]: https://pubs.opengroup.org/onlinepubs/7908799/xcu/chap2.html...
[3]: https://pubs.opengroup.org/onlinepubs/7908799/xcu/chap2.html...
[4]: https://pubs.opengroup.org/onlinepubs/7908799/xcu/chap2.html...
[5]: https://pubs.opengroup.org/onlinepubs/7908799/xcu/chap2.html...
lgeorget|4 days ago
The question was how to remember it's "2>&1" and not "2&>1". If you think of "&1" as the address/destination of the output, the syntax is quite natural.
casey2|4 days ago
https://man.cat-v.org/unix_7th/1/sh#:~:text=%3C%26digit%0A%2...
Seriously, when it comes to Unix: RTFM, RTFM, RTFM, and you'll get the top comment on SO and HN rolled into one.
time4tea|4 days ago
But also, | isn't a redirection; it takes stdout and pipes it to another program.
So, if you want stderr to go to stdout, so you can pipe it, you need to do it in order.
bob 2>&1 | prog
You usually don't want to do this though.
hugmynutus|4 days ago
It is not. You can use any arbitrary numbers provided they're initialized properly. These values are just file descriptors.
For Example -> https://gist.github.com/valarauca/71b99af82ccbb156e0601c5df8...
I've used (see: example) to handle applications that just dump pointless noise into stdout/stderr, which is only useful when the binary crashes/fails. Provided the error is marked by a non-zero return code, this will then correctly display the stdout/stderr (provided there is <64KiB of it).
viraptor|4 days ago
> Would probably be hard to guess since the process may not have opened any file once it started.
You need to not only inspect the current state, but also race the process before the assignments change.
dheera|4 days ago
command &stderr>&stdout
k3vinw|4 days ago
On the other hand, pipe “|” is brilliant!
[0] <https://www.gnu.org/software/bash/manual/bash.html#Redirecti...>
[1] <https://www.gnu.org/software/bash/manual/bash.html#Pipelines...>
hinkley|4 days ago
Look man, I didn’t invent this stupid shit, and I’m not telling you it’s brilliant, so don’t kill the messenger.
I thought I’d seen somewhere that zsh had a better way to do this but I must have imagined it. Or maybe I’m confusing it with fish.