top | item 39786378

salmo | 1 year ago

As an old Unix guy this is exactly how I see jq: a gateway to a fantastic library of text processing tools. I see a lot of complicated things done inside the language, which is a valid approach. But I don’t need it to be a programming language itself, just a transform to meet my next command after the pipe.

If I want logic beyond that, then I skip the shell and write “real” software.

I personally find both of those more readable and easier to fit in my head than long, complex jq expressions. But that's completely subjective, and others may find the jq expression language easier to read than shell or (choose your programming language).
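To make the "transform to meet my next command" style concrete, here's a tiny sketch with made-up data (the file name and fields are hypothetical): jq does one small extraction, and the familiar tools after the pipe do the rest.

```shell
# Made-up sample data: one JSON object per line.
printf '%s\n' '{"user":"ann","n":1}' \
              '{"user":"bob","n":2}' \
              '{"user":"ann","n":3}' > events.jsonl

# jq does one small transform; sort and uniq handle the counting.
jq -r '.user' events.jsonl | sort | uniq -c | sort -rn
```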

digdugdirk | 1 year ago

Your comment made me go look up jq (even more than the article did) and the first paragraph of the repo [0] feels like a secret club's secret language.

I'm very interested, but I'm not a Linux person. Do you know of any good resources for learning the Linux shell as a programming language?

[0] https://jqlang.github.io/jq/

salmo | 1 year ago

I'll say, I did shell scripting for years by copy/pasting, cribbing from smarter people, and reading online guides. But I didn't really understand it until I read The Unix Programming Environment by Brian Kernighan and Rob Pike.

It's a very old book, and its audience was using dumb terminals. But it made me understand the why and the how. I think I've read every Kernighan book at this point, and most of the books he was involved in, because he is so good at not just conveying facts but teaching you how to think idiomatically in the topic.

I also used awk for two decades, kind of like how I use jq now. But when I read his memoir I suddenly "got it." What I make with it now is intentional, not just me banging on the keyboard until it works. It's a great middle ground for something a little sophisticated but not worth writing a full program for.

Something else that helped me was to install a minimal distro... actually, a base FreeBSD install would be great... and read the man pages for all the commands. I didn't remember the details, but I learned what existed. There are man pages whose same options I look up every few months because I'm not positive I remember them right. Heck, I 'man test' all the time still. ('test' and '[' are the same thing.)
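On that aside: '[' really is just another name for 'test', which is why the closing ']' is required as its last argument. A quick demonstration:

```shell
# 'test' and '[' are the same command; '[' just demands a matching ']'
# as its final argument.
test -n "hello" && echo "non-empty (test)"
[ -n "hello" ] && echo "non-empty ([)"
```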

I also had an advantage of 2 great coworkers. They’d been working on Unix since the 80s and their feedback helped me be more efficient, clean, and avoid “useless use of cat” problems.

I also highly recommend using shellcheck. I sometimes disagree with it when I’m intentionally abusing shell behavior, but it’s a great way to train good habits and prevent bugs that only crop up with bad input, scale, etc. I get new devs to use it and it’s helped them “ramp up” quickly, with me explaining the “why” from time to time.
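The classic habit shellcheck trains into you is quoting your expansions (its warning SC2086). A sketch of why it matters, with a made-up filename:

```shell
f="file with spaces.txt"

# Unquoted: $f is split on whitespace into three separate words.
set -- $f
echo "$# arguments unquoted"    # prints: 3 arguments unquoted

# Quoted: "$f" stays one argument, which is almost always what you want.
set -- "$f"
echo "$# arguments quoted"      # prints: 1 arguments quoted
```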

But yeah. The biggest problem I see is that people think there is more syntax than there really is (like my test and [ comment). And remember it’s all text, processes, and files. Except when we pretend it’s not ;).

NortySpock | 1 year ago

So, grab yourself a Linux box (I suggest Debian), a large CSV file or JSON Lines file you need to slice up, and an hour of time, and start trying out some bash one-liners on your data. Set some goals like "find the Yahoo email addresses in the data and sort by frequency", "find error messages that look like X", or "find how many times Ben Franklin mentions his wife in his autobiography".
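The Yahoo-address goal, for example, might look like this (data.txt is a made-up sample, and the regex is a rough sketch, not a full RFC-compliant address pattern):

```shell
# Make a small sample file to practice on.
printf '%s\n' 'a@yahoo.com' 'b@gmail.com' 'a@yahoo.com' 'c@yahoo.com' > data.txt

# Pull out the Yahoo addresses, then count and rank them by frequency.
grep -Eo '[A-Za-z0-9._%+-]+@yahoo\.com' data.txt | sort | uniq -c | sort -rn
```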

Here's the thing: these tools have been used since the '70s to slice, dice, and filter log files, CSVs, and other semi-structured data. They can be chained together with the pipe operator. Sysadmins were grinding through 100 MB logs with these tools before CPUs hit a gigahertz.

These tools are blisteringly fast, and they are basically installed on every Linux machine.

https://github.com/onceupon/Bash-Oneliner

And for a different play-by-play example:

https://adamdrake.com/command-line-tools-can-be-235x-faster-...

coldtea | 1 year ago

>Your comment made me go look up jq (even more than the article did) and the first paragraph of the repo [0] feels like a secret club's secret language.

Or the open, standard language of one of computing's oldest and most widespread clubs :)

"jq is like sed for JSON data - you can use it to slice and filter and map and transform structured data with the same ease that sed, awk, grep and friends let you play with text."

Translation:

jq is like sed (a UNIX/POSIX staple command-line text-manipulation tool), but specialized for text structured as JSON. You can use it to extract parts of a JSON document (slice), keep nodes based on some criteria (filter), and transform each element in a list of structured data to get a new list of the transformed versions (map). And you can do all that as easily as you can with sed (basic command-line text manipulation), awk (text manipulation with a full-featured text-processing language), grep (searching text for strings), and the other assorted Unix userland programs.
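A tiny made-up demo of those three verbs:

```shell
json='[{"name":"ann","age":34},{"name":"bob","age":19}]'

# slice: take part of the document
echo "$json" | jq '.[0].name'              # "ann"

# filter: keep elements matching a condition
echo "$json" | jq -c '[.[] | select(.age >= 21)]'

# map: transform every element into something new
echo "$json" | jq -c 'map(.name)'          # ["ann","bob"]
```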