Seems like a portmanteau of UX (user experience) and XY (tabular format). The tool normalizes some Unix tool outputs into a table that can be manipulated.
Very cool, I've had a similar idea myself recently! Though, why not go with a simpler format like TSV (tab-separated values)? Then you don't have to worry about quoting and escaping anything but tabs and newlines (which are very rare in tabular data).
Tabs are a nightmare to deal with when you want to align the columns. Also, I don't consider tabs to be human readable: They are too easily confused with spaces. (Case in point: make)
dima55 | 6 years ago
This is becoming a really crowded space. Some other similar tools that make slightly different design choices and envision different use cases:
- https://github.com/dkogan/vnlog
- https://csvkit.readthedocs.io/
- https://github.com/johnkerl/miller
- https://github.com/BurntSushi/xsv
- https://github.com/eBay/tsv-utils-dlang
- https://stedolan.github.io/jq/
- http://harelba.github.io/q/
- https://github.com/BatchLabs/charlatan
- https://github.com/dinedal/textql
- https://github.com/dbohdan/sqawk
(disclaimer: vnlog is my tool)
vthriller | 6 years ago
I'd argue this is more about quacking like a PowerShell than manipulating xSV/JSON in the pipeline. So here's my quick bunch of links that show the demand for that.
Here people emulate formatted and filtered ps(1) using GVariants and a bunch of CLI tools:
https://blogs.gnome.org/alexl/2012/08/10/rethinking-the-shel...
Here people use SQL to query and format data right from the shell:
https://github.com/jhspetersson/fselect
https://github.com/facebook/osquery
Also, libxo is a library that allows tools like ls(1) in FreeBSD to generate data in various formats (e.g. JSON):
https://wiki.freebsd.org/LibXo
(edit: formatting)
majkinetor | 6 years ago
- https://github.com/adamwiggins/rush
- https://github.com/xonsh/xonsh
It's amazing that people still try this nowadays, when pwsh has already solved it for everyone.
nailer | 6 years ago
Those who fail to understand powershell are condemned to recreate it poorly.
It'd be great for GNU to create a standard for native structured output (as well as a converter tool like the one in this post), then have other tools be able to do it.
But realistically, pwsh is Open Source, runs just fine on Unix boxes and does this now.
jasonpeacock | 6 years ago
- https://github.com/benbernard/RecordStream
bayareanative | 6 years ago
This resource-wasting antipattern pops up over and over again.
Also, logs are message-oriented entries and serializing them as discrete, lengthy files is insane.
Structured data should stay structured, say a time-series / log-structured database. Destructuring should be a rare event.
xelxebar | 6 years ago
I think Plan 9 gives a nice distinction. We use files as both a persistent store and an interface, so it seems nice to separate those two concerns. That way you could have your logs as a UI into application state and only incur the overhead of serialization and persistence when you deem necessary.
Caveat, my Plan 9 experience is mostly theoretical.
jph | 6 years ago
I will donate $50 to you or your favorite charity to encourage a new feature: to-usv, which outputs Unicode separated values (USV) with unit separator U+241F and record separator U+241E.
Unicode separated values (USV) are much like comma separated values (CSV), tab separated values (TSV) a.k.a. tab delimited format (TDF), and ASCII separated values (ASV) a.k.a. DEL (Delimited ASCII).
The advantages of USV for me are that it handles text that happens to contain commas, tabs, or newlines, and that the separators have visible character representations.
For example, USV works well for me within typical source code, such as Unix scripts, because the characters are visible, easy to copy/paste, and easy to use within various kinds of editor search boxes.
Bonus: if the programming implementation of to-usv calls a more-flexible function that takes a unit separator string and a record separator string, then you can easily create similar commands for to-asv, to-csv, etc.
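A minimal sketch of what such a command's core could look like, in Python. The function names (`to_xsv`, `to_usv`) are hypothetical, not part of any existing tool; the point is the "bonus" above, that one separator-parameterized function yields all the variants:

```python
# Hypothetical sketch of a separator-parameterized "to-xsv" core.
# U+241F (SYMBOL FOR UNIT SEPARATOR) and U+241E (SYMBOL FOR RECORD
# SEPARATOR) are printable, so they survive copy/paste and show up
# visibly in editors.

UNIT_SEP = "\u241f"
RECORD_SEP = "\u241e"

def to_xsv(rows, unit_sep, record_sep):
    """Join fields and records with arbitrary separator strings."""
    return record_sep.join(unit_sep.join(fields) for fields in rows)

def to_usv(rows):
    """Unicode separated values, per the U+241F/U+241E proposal."""
    return to_xsv(rows, UNIT_SEP, RECORD_SEP)

def to_csv_naive(rows):
    """Comma-separated (naive: no quoting), from the same core."""
    return to_xsv(rows, ",", "\n")

rows = [["name", "note"], ["alice", "likes, commas"]]
print(to_usv(rows))
```

Note the `to_csv_naive` variant deliberately skips quoting; real CSV output would need escaping on top of the shared core.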
inimino | 6 years ago
Eventually you have to deal with content that contains your separator characters, however obscure. So essentially you have two choices:
A. use some "weird" separators and hope those don't appear in your input
B. bite the bullet and escape and parse properly
Option A is perfectly reasonable for one-offs, where you can handle exceptional cases or know they won't occur because you know what's in the data. However for reusable code, you need option B, which means not using `cut` to parse CSV files, for instance (since commas can occur inside double-quoted strings). In that case, what's the benefit of using USV over an existing, more common, format?
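Option B in practice just means reaching for a real CSV parser instead of `cut`. A minimal illustration in Python (standard library only):

```python
import csv
import io

# A quoted field containing a comma: `cut -d,` would split this in
# the wrong place, but a real CSV parser keeps it as one field.
data = 'name,address\nalice,"12 Main St, Springfield"\n'

for row in csv.reader(io.StringIO(data)):
    print(row)
# ['name', 'address']
# ['alice', '12 Main St, Springfield']
```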
driax | 6 years ago
U+241E is "SYMBOL FOR RECORD SEPARATOR". It seems a bit weird to use that as a separator instead of simply U+001E, the actual ASCII "record separator" control character.
dbro | 6 years ago
While not exactly what you asked for, I wrote something similar called csvquote ( https://github.com/dbro/csvquote ) which transforms "typical" CSV or TSV data to use the ASCII characters for field separators and record separators, and also allows for a reverse transform back to regular CSV or TSV files.
It is handy for pipelining UNIX commands so that they can handle data that includes commas and newlines inside fields. In this example, csvquote is used twice in the pipeline, first at the beginning to make the transformation to ASCII separators and then at the end to undo the transformation so that the separators are human-readable.
> csvquote foobar.csv | cut -d ',' -f 5 | sort | uniq -c | csvquote -u
It doesn't yet have any built-in awareness of UTF or multi-byte characters, but I'd be happy to receive a pull request if it's something you're able to offer.
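The core trick, as I understand it (a rough sketch of the idea, not the actual csvquote implementation), is a reversible substitution: while inside a double-quoted field, swap commas and newlines for the non-printing ASCII separator characters, then swap them back at the end of the pipeline:

```python
# Rough sketch of the csvquote idea (NOT the real implementation):
# inside double-quoted fields, replace commas and newlines with the
# ASCII unit/record separator control characters, so that downstream
# tools like cut(1) and sort(1) see one record per line. The reverse
# transform restores the original characters.

FIELD_SEP = "\x1f"   # ASCII unit separator
REC_SEP = "\x1e"     # ASCII record separator

def sanitize(text):
    """Replace commas/newlines inside quoted fields with ASCII seps."""
    out, in_quotes = [], False
    for ch in text:
        if ch == '"':
            in_quotes = not in_quotes
        elif in_quotes and ch == ',':
            ch = FIELD_SEP
        elif in_quotes and ch == '\n':
            ch = REC_SEP
        out.append(ch)
    return ''.join(out)

def restore(text):
    """Undo the substitution (csvquote's -u step)."""
    return text.replace(FIELD_SEP, ',').replace(REC_SEP, '\n')
```

The round trip is lossless as long as the input doesn't already contain the ASCII separator characters, which is what makes the pipeline pattern above safe.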
rabidrat | 6 years ago
Also, is your offer available for other tabular data tools? :)
kragen | 6 years ago
I think you're going to need a bigger budget to establish your new proposed standard through consulting fees. Do you remember what happened to GCC's CHILL frontend?
nerdponx | 6 years ago
Seems a lot like the PowerShell model, which I have mixed feelings about. It's nice for shell scripts, but it makes day-to-day usage cumbersome. I think you can actually use PowerShell on Linux, but I'm interested to see where this tool goes.
nailer | 6 years ago
> It's nice for shell scripts, but it makes day-to-day usage cumbersome.
How? `ps | kill node`. No pgrep hack, because `ps` outputs a list of processes, not lines of text. As a Unix person, Windows Terminal and pwsh are where I spend most of my day.
adrianratnapala | 6 years ago
In the PowerShell model, I thought things stayed structured objects in reality, although the UI was ready to render them as text. This seems to be about continuing to use text, but being disciplined about formatting.
If the above characterisation is right, it is a middle-ground between Powershell and traditional methods.
Also, this is not introducing a new shell language.
koolba | 6 years ago
> Aligns the data with the headers. This is done by resizing the columns so that even the longest value fits into the column.
> ...
> This command doesn't work with infinite streams.
Does this do nothing with infinite streams or does it do a "rolling" alignment?
Even with an infinite stream you can keep track of the max width seen thus far and align all future output to those levels. It'll still have some jank to the initial alignment but assuming a consistent distribution of the lengths over time it'd be good enough for eyeballing the results.
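That rolling scheme is easy to sketch (a hypothetical illustration of the suggestion above, not how the tool currently works): track the maximum width seen per column and pad every subsequent line to that level.

```python
def rolling_align(records):
    """Align an unbounded stream of records column by column.
    Columns widen as larger values arrive; earlier lines keep their
    old, narrower alignment (the "initial jank" mentioned above)."""
    widths = []
    for fields in records:
        # Grow the width list if this record has more columns.
        widths.extend([0] * (len(fields) - len(widths)))
        for i, f in enumerate(fields):
            widths[i] = max(widths[i], len(f))
        yield "  ".join(
            f.ljust(widths[i]) for i, f in enumerate(fields)
        ).rstrip()

for line in rolling_align([["pid", "cmd"], ["1", "init"], ["4242", "bash"]]):
    print(line)
```

Because it is a generator, each record is emitted as soon as it arrives, so it works on an infinite stream with O(columns) memory.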
rumcajz | 6 years ago
Currently it uses the alignment of the headers as the default. It's only when a field exceeds the size of the header that the output is misaligned. The next record returns to the default alignment, though.
I was thinking about adding a 'trim' command that would trim long fields to fit into the default field size.
no_gravity | 6 years ago
adrianratnapala | 6 years ago
> * any other escape sequence MUST be interpreted as ? (question mark) character.
Isn't it better to forbid them? Presumably you are reserving the space for future extensions, but as written the rule makes readers interpret them as '?'.
Similarly, what is the rationale for interpreting control characters as '?'? You could instead ban them, with the possible exception of treating tabs as spaces.
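For comparison, the two policies are both one-liners to implement; a sketch with hypothetical names (`permissive` is the rule as quoted, `strict` is the alternative suggested above):

```python
def permissive(field):
    """The rule as written: render control characters as '?'."""
    return ''.join('?' if ord(c) < 0x20 else c for c in field)

def strict(field):
    """The alternative: reject control characters outright,
    except tabs, which are treated as spaces."""
    for c in field:
        if ord(c) < 0x20 and c != '\t':
            raise ValueError(f"control character {c!r} not allowed")
    return field.replace('\t', ' ')
```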
rumcajz | 6 years ago
vram22 | 6 years ago
For anyone interested in learning how to create their own Unix command-line tools (not just use them), feel free to check out these links to content by me (about doing such work in C and Python):
1) Developing a Linux command-line utility: an article I wrote for IBM developerWorks:
https://jugad2.blogspot.com/2014/09/my-ibm-developerworks-ar...
Follow links in the article to go to the source code of the tool described in the tutorial, and the PDF of the IBM dW article.
2) My comment, here:
https://news.ycombinator.com/item?id=19564706
on this HN thread:
Ask HN: Looking for a series on implementing classic Unix tools from scratch:
https://news.ycombinator.com/item?id=19560418
dharmatech | 6 years ago
Have you considered having a way to render output in a graphical toolkit?
See for example:
https://github.com/dharmatech/PsReplWpf
which renders PowerShell output in WPF presentations.
dima55 | 6 years ago
mijoharas | 6 years ago
imglorp | 6 years ago
rabidrat | 6 years ago
rumcajz | 6 years ago