If you need only a subset of full SQL (say joins, counting/aggregation, and date manipulation, as in several of these examples), I've found it fairly easy to work with a mixture of the standard Unix join(1), found on nearly all systems, and some of the additions from Google's crush-tools (https://code.google.com/p/crush-tools/), mainly 'aggregate', 'grepfield', 'funiq', and 'convdate'. I find chaining them together a bit easier than writing monolithic SQL statements, but there's probably some crossover point at which that stops being true.
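A minimal sketch of that chaining style, using awk in place of crush-tools' 'aggregate' so the example needs only standard tools (the file names and data are made up):

```shell
# Two toy "tables", pre-sorted on the join key as join(1) requires.
printf '1,alice\n2,bob\n' > /tmp/users.csv
printf '1,10\n1,25\n2,5\n' > /tmp/orders.csv

# Natural join on column 1, then sum the amount per user name
# (awk stands in for crush-tools' aggregate here).
join -t, /tmp/users.csv /tmp/orders.csv |
    awk -F, '{sum[$2] += $3} END {for (n in sum) print n "," sum[n]}' |
    sort
```

This prints one total per user (alice,35 and bob,5), i.e. the equivalent of a join plus GROUP BY/SUM.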
It'd be interesting to compare runtimes as well. I would guess that there's some overhead in loading into the DB up front, but that you might gain some speedup by converting longer chains of Unix pipes into one SQL query. On the other hand you might lose some parallelism. Would take some testing on different kinds of queries and data sets to get an idea of the pros/cons I think.
Another interesting alternative could be using or writing a PostgreSQL foreign data wrapper.
There is one that supports tabular data (file_fdw) and another for JSON files (json_fdw). If you have files in other formats, you can also write your own FDW for them. This way, you get complete SQL coverage.
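For illustration, a sketch of a file_fdw setup; the table layout and file path here are hypothetical:

```sql
-- file_fdw ships with PostgreSQL's contrib modules
CREATE EXTENSION file_fdw;
CREATE SERVER files FOREIGN DATA WRAPPER file_fdw;

-- Expose a CSV file as a table; full SQL then works against it
CREATE FOREIGN TABLE access_log (ts timestamptz, user_name text, bytes bigint)
    SERVER files
    OPTIONS (filename '/var/log/access.csv', format 'csv');

SELECT user_name, sum(bytes) FROM access_log GROUP BY user_name;
```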
Rather than making a custom tool to issue SQL, the idea is that regular CLI tools map well to the traditional relational algebra operations. sed is like selection, cat is like union, etc.
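A rough sketch of that mapping, using grep for the selection step and cut for projection (data made up):

```shell
printf 'alice,30\nbob,25\n' > /tmp/r.csv
printf 'carol,41\nbob,25\n' > /tmp/s.csv

# selection (restrict rows), like WHERE
grep -v '^bob,' /tmp/r.csv
# projection (keep columns), like SELECT col
cut -d, -f1 /tmp/r.csv
# union with set semantics, like UNION
cat /tmp/r.csv /tmp/s.csv | sort -u
```

cat alone gives bag (UNION ALL) semantics; piping through sort -u deduplicates to get the set-style UNION.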
This seems like a cool project, but Q is already a well-used JS promise library so there's a bit of a name conflict there even if the libs are aimed at very different tasks...
And the JS promise library in turn collided with the name of Q, the array-processing language, which itself collided with the name of another programming language named Q (http://q-lang.sourceforge.net). If you're naming a tech-related thing after about 1980, the single-letter names are all taken...
Any library with a really short name (<= 2 letters) should be prepared for name collisions. If they wanted to avoid name collisions, they should have chosen a slightly longer, more descriptive name in the first place. I think in this context (i.e. really short name) it doesn't really matter who was first.
It's also a single letter in the alphabet. Anyone who names their project a single letter (or two) is expecting some amount of collision. The real question is whether or not the collision is worth it.
In this case, it might be... they are trying to make a command-line tool. So in theory, you'll be typing the command often, meaning that a short name is preferable.
But honestly, it would have probably been a better idea to use a more descriptive name.
Eh, q as a name is going to be a serious problem as q is the language for programming kdb, a column-oriented database. As both are related to databases, it's difficult to defend this name.
In what way would you write regexes verbosely? I'm actually quite interested in the idea because regexes can be confusing to write at times, and it's difficult to remember which form to use where, if you use them in many languages/interfaces.
I'm still not seeing anything easier than ETLing into a regular old database, at which point you have the power and flexibility of a regular old database. Which is pretty spiffy, actually.
Are people really so bad at databases that they'll gladly suffer hacks like this to avoid using one?
I have ended up with MySQL and PostgreSQL on nearly every Linux system I've installed. It is baffling to me that someone would decline these tools in favor of text-file drivers or SQLite, with the exception of using SQLite as an embedded config database or something.
Yeah, I do the same: import the CSV file (it doesn't have to be comma-separated) into PostgreSQL and analyze it using SQL. Alternatively, I load it into IPython/pandas and work with it there.
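For reference, the import step is only a couple of lines in psql (table and file names here are hypothetical; DELIMITER can be set for non-comma files):

```sql
CREATE TABLE events (ts timestamptz, user_name text, bytes bigint);
-- client-side copy; HEADER skips the first row
\copy events FROM 'events.csv' WITH (FORMAT csv, HEADER true)
```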
This is useful, however, for one-off, throwaway queries in familiar SQL syntax, if you don't want to use awk, that is.
The credit should go to SQLite, which is extremely versatile. A while back I wrote Thredis (thredis.org), which lets you use SQL commands inside Redis (also using SQLite, of course).
ozgune | 12 years ago:
Also, if you don't want to pay the overhead of parsing the file every time, you can use the new materialized view feature for caching: http://www.postgresql.org/docs/9.3/static/rules-materialized...
(Disclaimer: Enthused Postgres user.)
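As a sketch, assuming a hypothetical foreign table access_log defined over a file:

```sql
-- Parse the file once and cache the rows
CREATE MATERIALIZED VIEW access_cached AS
    SELECT * FROM access_log;

-- Queries now hit the cache; refresh re-reads the file on demand
SELECT count(*) FROM access_cached;
REFRESH MATERIALIZED VIEW access_cached;
```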
unwind | 12 years ago:
I guess it just applies tokenization and throws the text into a temporary database.
Quite similar to the Go project https://github.com/dinedal/textql, at least superficially.
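The idea can be sketched in a line of awk (an illustration of the concept, not q's actual implementation):

```shell
# Tokenize each CSV line and emit INSERTs for a throwaway table;
# a real tool would feed these to an in-memory SQLite database.
# (39 is the ASCII code for a single quote.)
printf 'alice,100\nbob,250\n' |
    awk -F, '{printf "INSERT INTO t VALUES (%c%s%c, %s);\n", 39, $1, 39, $2}'
```

This emits one INSERT statement per input row, e.g. INSERT INTO t VALUES ('alice', 100);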
j_s | 12 years ago:
https://news.ycombinator.com/item?id=7175830
borntyping | 12 years ago:
[1] https://pypi.python.org/pypi/q
mseebach | 12 years ago:
It looks like a neat API, but it's not really similar to this project.
harelba | 12 years ago:
The Linux toolset is really great, and I use it extensively. The whole idea of the tool is not to replace any of the existing tools, but to extend the toolset with concepts that treat text as data. In a way, it's a metatool which provides an easy and familiar way to add more data-processing concepts to the Linux toolset. There are many cases where I use 'wc -l' to count rows in a file, but if I need to count only the rows where a specific column is larger than some value X, or to get the sum of some column per group, then q is a simple and readable way to do it properly, without any need for "tricks".
My rationale for creating it is also explained in the README of the GitHub project.
Any more comments are most welcome.
Harel
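To make those two examples concrete (the q invocations are approximate and shown as comments; c1/c2 are q's positional column names; the runnable awk lines are the "tricks" being replaced):

```shell
printf '7\n12\n25\n' > /tmp/vals.csv
printf 'alice,100\nbob,250\nalice,50\n' > /tmp/traffic.csv

# q versions (roughly):
#   q "select count(*) from /tmp/vals.csv where c1 > 10"
#   q -d , "select c1, sum(c2) from /tmp/traffic.csv group by c1"

# awk equivalents:
awk '$1 > 10 {n++} END {print n}' /tmp/vals.csv
awk -F, '{s[$1] += $2} END {for (k in s) print k "," s[k]}' /tmp/traffic.csv | sort
```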
draegtun | 12 years ago:
NB. Follow the link to the original post to compare against the standard regex version.
There are also some nice grammar parsers available in some languages which make this even easier. For examples, see Perl 6 Rules/Grammars, Perl 5's Regexp::Grammars, or (for something which doesn't use regex at all) Rebol's Parse.
For example, here is my Rebol version of the HN post above: http://www.reddit.com/r/programming/comments/1smpa1/why_rebo...
And here is a presentation which shows a great example using Perl6 grammars: http://jnthn.net/papers/2014-fosdem-perl6-today.pdf
Refs:
- http://en.wikibooks.org/wiki/Perl_6_Programming/Grammars
- http://en.wikipedia.org/wiki/Perl_6_rules
- https://metacpan.org/pod/Regexp::Grammars
- http://www.rebol.com/docs/core23/rebolcore-15.html
- http://blog.hostilefork.com/why-rebol-red-parse-cool/
PS. Alternatively, if you're looking for something interactive, then check out tools like these: http://rebol.informe.com/blog/2013/07/01/parse-aid/ | https://metacpan.org/pod/Regexp::Debugger
AUmrysh | 12 years ago:
There are tools like Regexper[1] that let you visualize the regex as an automata graph, and there are tools like text2re[2] which will allow you to put in text and visually generate a regex to match it.
I feel like better regex tools should exist on the command line, and it's potentially a great place for such tools to be rapidly developed and adopted. There are GUI tools for this like poirot[3], but the command line still exists because of its accessibility, uniformity, and extensibility.
links:
[1] http://www.regexper.com/
[2] http://txt2re.com/index.php3?s=24%3AFeb%3A2014+%22This+is+an...
[3] http://www.espgraphics.com/poirot/