Davertron|3 years ago
I know that I could obviously grep the output of the first command, and then use sed or awk to extract just the URL, but I'm not sure about the best way to go about the rest. In addition, I usually want to see all the output of the first command (in this case it's not done executing; it continues to run after printing out the URL), so maybe there's a way to do that with tee? But I usually ALSO don't want to intermix the 2 commands in the same shell, i.e. I don't want to just have one big series of pipes. Ideally I could run the 2 commands separately in their own terminals, but the 2nd command that needs the URL would effectively block until it received the URL output from the first command. I have a feeling maybe you could do this with named pipes or something, but that's pretty far out of my league... would love to hear if this is something other folks have done or have a need for.
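One way to wire the above together: tee keeps every line of the first command visible, while a grep feeds just the URL into a named pipe that the second command blocks on. Everything concrete here (fake_server, the URL pattern) is invented for the sketch; substitute your real commands.

```shell
#!/bin/sh
# Sketch of the tee + named-pipe idea. "fake_server" stands in for the real
# first command, and the URL pattern is made up.
pipe=$(mktemp -u)
mkfifo "$pipe"

fake_server() {
  echo "starting up..."
  echo "listening on http://localhost:8080/abc"
  sleep 1   # stands in for "continues to run after printing the url"
}

# "Terminal 1": all output stays visible (copied to stderr here), and a copy
# goes through a URL extractor into the pipe.
fake_server | tee /dev/stderr \
  | grep --line-buffered -o 'http[s]*://[^ ]*' > "$pipe" &

# "Terminal 2": this read blocks until the URL arrives through the pipe.
read -r url < "$pipe"
echo "second command runs with: $url"

wait
rm -f "$pipe"
```

In two real terminals you'd split this at the `&`: run the producer pipeline in one, and the `read`/consumer in the other; the FIFO makes them rendezvous.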
ufo|3 years ago
You can create a named pipe using "mkfifo", which creates a pipe "file" with the specified name. Then, you can tell your programs to read and write to the pipe the same way you'd tell them to read and write to a normal file. You can use "<" and ">" to redirect stdin/stdout, or you can pass the file name if it's a program that expects a file name.
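A minimal sketch of that (the message is made up; in practice you'd run each side in its own terminal rather than backgrounding the writer):

```shell
#!/bin/sh
# Minimal mkfifo demo: two commands rendezvous through the pipe "file".
p=$(mktemp -u)
mkfifo "$p"

# "Terminal 1": the write blocks until something opens the pipe for reading.
echo "hello through the fifo" > "$p" &

# "Terminal 2": the read blocks until something writes.
msg=$(cat "$p")
echo "$msg"

wait
rm -f "$p"
```

Note that opening a FIFO blocks until both a reader and a writer are attached, which is exactly the "second command waits for the first" behavior asked about above.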
nix0n|3 years ago
[0] https://core.tcl-lang.org/expect/index
teddyh|3 years ago
1. Run one command with output to a file, possibly in the background. Since you want to watch the output, run “tail --follow=name filename.log”.
2. In a second terminal, run a second tail --follow on the same log file but pipe the output to a command sequence to find and extract the URL, and then pipe that into a shell while loop; something like “while read -r url; do do-thing-with "$url"; done”.
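Collapsed into one runnable sketch (the fake long-running command and the URL pattern are invented; for a stream of URLs, replace the `-m1` capture with the `while read -r url` loop described above):

```shell
#!/bin/sh
# Sketch of the two-terminal log-file recipe, in one script for demo purposes.
log=$(mktemp)

# "Terminal 1": the long-running command, output to a log file. To watch it
# live you'd run: tail --follow=name "$log"
{ echo "booting"
  echo "listening on http://localhost:9090/x"
  sleep 1
  echo "still running"   # keeps the demo's tail moving so it can exit
} > "$log" &

# "Terminal 2": follow the same log, extract the first URL, act on it.
# (tail -f follows the descriptor; --follow=name also survives log rotation.)
url=$(tail -f "$log" | grep --line-buffered -m1 -o 'http[s]*://[^ ]*')
echo "do-thing-with $url"   # stand-in for the real second command

wait
rm -f "$log"
```

The log file doubles as a record of the first command's output, which the FIFO approach doesn't give you.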
sillysaurusx|3 years ago
… good luck, is my best advice. It’s not straightforward to handle edge cases.