Hacker News item 19954195

Show HN: UXY – adding structure to Unix tools

126 points | rumcajz | 6 years ago | github.com | reply

39 comments

[+] dima55|6 years ago|reply
[+] vthriller|6 years ago|reply
I'd argue this is more about quacking like a PowerShell than manipulating xSV/JSON in the pipeline. So here's my quick bunch of links that show the demand for that.

Here people emulate formatted and filtered ps(1) using GVariants and a bunch of CLI tools:

https://blogs.gnome.org/alexl/2012/08/10/rethinking-the-shel...

Here people use SQL to query and format data right from the shell:

https://github.com/jhspetersson/fselect

https://github.com/facebook/osquery

Also, libxo is a library that allows tools like ls(1) in FreeBSD to generate data in various formats (e.g. JSON):

https://wiki.freebsd.org/LibXo

(edit: formatting)

[+] nailer|6 years ago|reply
> This is becoming a really crowded space.

Those who fail to understand PowerShell are condemned to recreate it poorly.

It'd be great for GNU to create a standard for native structured output (as well as a converter tool like the one in this post), then have other tools be able to do it.

But realistically, pwsh is Open Source, runs just fine on Unix boxes and does this now.

[+] bayareanative|6 years ago|reply
A related problem is the constant churn of logging: taking structured data, destructuring it with a string serialization, and then parsing it again.

This resource-wasting antipattern pops up over and over again.

Also, logs are message-oriented entries and serializing them as discrete, lengthy files is insane.

Structured data should stay structured, say in a time-series or log-structured database. Destructuring should be a rare event.

[+] xelxebar|6 years ago|reply
I think Plan 9 gives a nice distinction. We use files as both a persistent store and an interface, so it seems nice to separate those two concerns out. That way you could have your logs as a UI into application state and only incur the overhead of serialization and persistence when you deem it necessary.

Caveat, my Plan 9 experience is mostly theoretical.

[+] jph|6 years ago|reply
Excellent, thank you for creating UXY!

I will donate $50 to you or your favorite charity to encourage a new feature: to-usv, which outputs Unicode separated values (USV) with unit separator U+241F and record separator U+241E.

Unicode separated values (USV) are much like comma separated values (CSV), tab separated values (TSV) a.k.a. tab delimited format (TDF), and ASCII separated values (ASV) a.k.a. DEL (Delimited ASCII).

The advantages of USV for me are that USV handles text that happens to contain commas and/or tabs and/or newlines, and that its separators have visible character representations.

For example, USV is great for me within typical source code, such as Unix scripts, because the separator characters show up visibly, are easy to copy/paste, and are easy to use within various kinds of editor search boxes.

Bonus: if the programming implementation of to-usv calls a more-flexible function that takes a unit separator string and a record separator string, then you can easily create similar commands for to-asv, to-csv, etc.
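
The parameterized design suggested above could be sketched like this (the function names `to_sv`, `to_usv`, and `to_asv` are hypothetical illustrations, not part of uxy):

```python
def to_sv(records, unit_sep, record_sep):
    """Join each record's fields with unit_sep, then join records with record_sep."""
    return record_sep.join(unit_sep.join(fields) for fields in records)

def to_usv(records):
    # USV: U+241F (unit) and U+241E (record), both visible glyphs.
    return to_sv(records, "\u241f", "\u241e")

def to_asv(records):
    # ASV: the original ASCII control characters U+001F and U+001E.
    return to_sv(records, "\x1f", "\x1e")
```

With the separators factored out, `to_csv`- or `to_tsv`-style variants become one-liners over the same core function.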

[+] inimino|6 years ago|reply
Eventually you have to deal with content that contains your separator characters, however obscure. So essentially you have two choices:

A. use some "weird" separators and hope those don't appear in your input

B. bite the bullet and escape and parse properly

Option A is perfectly reasonable for one-offs, where you can handle exceptional cases or know they won't occur because you know what's in the data. However for reusable code, you need option B, which means not using `cut` to parse CSV files, for instance (since commas can occur inside double-quoted strings). In that case, what's the benefit of using USV over an existing, more common, format?
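
A quick illustration of why option B matters for CSV specifically (Python's `csv` module stands in here for any proper parser):

```python
# `cut -d,` splits on every comma, including commas inside double-quoted
# fields. A real CSV parser (option B) respects the quoting.
import csv, io

line = 'id,"Smith, Jane",42\n'

# Naive split -- the `cut` approach -- yields the wrong field count.
naive = line.rstrip("\n").split(",")
assert naive == ['id', '"Smith', ' Jane"', '42']   # 4 fields, quotes broken

# Proper parsing keeps the quoted comma inside one field.
parsed = next(csv.reader(io.StringIO(line)))
assert parsed == ['id', 'Smith, Jane', '42']       # 3 fields, as intended
```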

[+] driax|6 years ago|reply
U+241E is "SYMBOL FOR RECORD SEPARATOR". It seems a bit weird to use that as a separator instead of simply U+001E, the actual ASCII "record separator" character.
[+] dbro|6 years ago|reply
While not exactly what you asked for, I wrote something similar called csvquote ( https://github.com/dbro/csvquote ) which transforms "typical" CSV or TSV data to use the ASCII characters for field separators and record separators, and also allows for a reverse transform back to regular CSV or TSV files.

It is handy for pipelining UNIX commands so that they can handle data that includes commas and newlines inside fields. In this example, csvquote is used twice in the pipeline, first at the beginning to make the transformation to ASCII separators and then at the end to undo the transformation so that the separators are human-readable.

    csvquote foobar.csv | cut -d ',' -f 5 | sort | uniq -c | csvquote -u

It doesn't yet have any built-in awareness of UTF or multi-byte characters, but I'd be happy to receive a pull request if it's something you're able to offer.
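
The core transform can be sketched roughly like this (a simplified illustration only, not csvquote's actual implementation; among other things it ignores doubled-quote escaping):

```python
# While inside double quotes, swap commas and newlines for the ASCII
# field/record separators so downstream line- and comma-oriented tools
# work; the reverse pass restores them.
FS, RS = "\x1f", "\x1e"

def encode(text):
    out, in_quotes = [], False
    for ch in text:
        if ch == '"':
            in_quotes = not in_quotes
        elif in_quotes and ch == ",":
            ch = FS
        elif in_quotes and ch == "\n":
            ch = RS
        out.append(ch)
    return "".join(out)

def decode(text):
    # Reverse transform: restore the human-readable separators.
    return text.replace(FS, ",").replace(RS, "\n")
```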

[+] rabidrat|6 years ago|reply
How is USV better than ASV, which would use U+001E and U+001F?

Also, is your offer available for other tabular data tools? :)

[+] kragen|6 years ago|reply
I think you're going to need a bigger budget to establish your new proposed standard through consulting fees. Do you remember what happened to GCC's CHILL frontend?
[+] nerdponx|6 years ago|reply
Seems a lot like the Powershell model, which I have mixed feelings about. It's nice for shell scripts, but it makes day-to-day usage cumbersome. I think you can actually use Powershell on Linux, but I'm interested to see where this tool goes.
[+] nailer|6 years ago|reply
> It's nice for shell scripts, but it makes day-to-day usage cumbersome.

How? `ps | kill node`. No pgrep hack, because ps outputs a list of processes, not lines of text. As a Unix person, Windows Terminal and pwsh are where I spend most of my day.

[+] adrianratnapala|6 years ago|reply
In the PowerShell model, I thought things stayed as structured objects in reality, although the UI was ready to render them as text. This seems to be about continuing to use text, but being disciplined about formatting.

If the above characterisation is right, it is a middle-ground between Powershell and traditional methods.

Also, this is not introducing a new shell language.

[+] koolba|6 years ago|reply
> uxy align

> Aligns the data with the headers. This is done by resizing the columns so that even the longest value fits into the column.

> ...

> This command doesn't work with infinite streams.

Does this do nothing with infinite streams or does it do a "rolling" alignment?

Even with an infinite stream you can keep track of the max width seen thus far and align all future output to those levels. It'll still have some jank to the initial alignment but assuming a consistent distribution of the lengths over time it'd be good enough for eyeballing the results.
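
The rolling scheme described above could be sketched as follows (a hypothetical illustration, not uxy's actual behavior):

```python
# Track the maximum width seen so far per column and pad every subsequent
# row to those widths. Early rows may come out narrower, but alignment
# stabilizes as the stream runs -- no buffering of the whole input needed.
def rolling_align(rows):
    widths = []
    for row in rows:
        # Grow the width table if this row has more columns than seen before.
        widths.extend([0] * (len(row) - len(widths)))
        for i, field in enumerate(row):
            widths[i] = max(widths[i], len(field))
        yield "  ".join(f.ljust(widths[i]) for i, f in enumerate(row)).rstrip()
```

Because it is a generator, it can consume an infinite stream and emit each line as soon as its row arrives.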

[+] rumcajz|6 years ago|reply
Currently it uses the alignment of the headers as the default. It's only when a field exceeds the size of the header that the output becomes misaligned. The next record returns to the default alignment, though.

I was thinking about adding a 'trim' command that would trim long fields to fit into the default field size.

[+] no_gravity|6 years ago|reply
I think this is putting too many different functions into a single command.

    uxy ls
This looks like it "tabifies" the output of a given command, i.e. it turns the output of the given command into a tab-separated format.

    uxy reformat "NAME SIZE"
This seems to collide with the above since "reformat" is not a command which will be tabified. Instead it filters stdin for two columns.

    uxy align
This seems to do the same as "column -t".
[+] adrianratnapala|6 years ago|reply
> * any other escape sequence MUST be interpreted as ? (question mark) character.

Isn't it better to forbid them? Presumably you are saving the space for further extensions, but this is allowing readers to interpret them as '?'

Similarly, what is the rationale for interpreting control characters as '?'? Instead, you could ban them, with the possible exception of treating tabs as spaces.

[+] rumcajz|6 years ago|reply
Postel's principle: Be liberal in what you accept... It means that the tool won't crash just because there's weird input.
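
A sketch of such a lenient decoder (the escape table `KNOWN` is an assumption for illustration and is not taken from the uxy spec):

```python
# Known escapes decode normally; unknown escapes and bare control
# characters become '?' instead of aborting -- liberal in what we accept.
KNOWN = {'\\n': '\n', '\\t': '\t', '\\\\': '\\', '\\"': '"'}

def lenient_decode(s):
    out, i = [], 0
    while i < len(s):
        if s[i] == '\\' and i + 1 < len(s):
            # Two-character escape: decode it, or fall back to '?'.
            out.append(KNOWN.get(s[i:i + 2], '?'))
            i += 2
        elif ord(s[i]) < 0x20:
            # Bare control character in the input: replace, don't crash.
            out.append('?')
            i += 1
        else:
            out.append(s[i])
            i += 1
    return ''.join(out)
```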
[+] vram22|6 years ago|reply
For anyone interested in learning how to create their own Unix command-line tools (not just use them), feel free to check out these links to content by me (about doing such work in C and Python):

1) Developing a Linux command-line utility: an article I wrote for IBM developerWorks:

https://jugad2.blogspot.com/2014/09/my-ibm-developerworks-ar...

Follow links in the article to go to the source code of the tool described in the tutorial, and the PDF of the IBM dW article.

2) My comment, here:

https://news.ycombinator.com/item?id=19564706

on this HN thread:

Ask HN: Looking for a series on implementing classic Unix tools from scratch:

https://news.ycombinator.com/item?id=19560418

[+] mijoharas|6 years ago|reply
Can anyone elaborate on why the tool is named UXY? I couldn't find anything in the repo, and there is no wiki.
[+] imglorp|6 years ago|reply
Seems like a portmanteau of UX (user experience) and XY (tabular format). The tool normalizes some of the Unix tool outputs into a table which can be manipulated.
[+] rabidrat|6 years ago|reply
Very cool, I've had a similar idea myself recently! Though, why not go with a simpler format like TSV (tab-separated values)? Then you don't have to worry about quoting and escaping anything but tabs and newlines (which are very rare in tabular data).
[+] rumcajz|6 years ago|reply
Tabs are a nightmare to deal with when you want to align the columns. Also, I don't consider tabs to be human-readable: they are too easily confused with spaces. (Case in point: make.)