top | item 46667366

RobinL | 1 month ago

Worse in some ways, better in others. DuckDB is often an excellent tool for this kind of task. Since it can run parallelized reads, I imagine it's often faster than a command-line tool, and with easier-to-understand syntax.
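For what it's worth, a sketch of what that might look like (the file name and columns here are made up for illustration):

```shell
# Tiny stand-in for a large log file (hypothetical columns).
printf 'status,bytes\n200,512\n200,1024\n404,0\n' > access.csv

# DuckDB scans the CSV in parallel across cores on real-sized files,
# and the query itself is plain SQL.
duckdb -c "
  SELECT status, COUNT(*) AS n
  FROM read_csv_auto('access.csv')
  GROUP BY status
  ORDER BY n DESC;
"
```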

briHass | 1 month ago

More importantly, you have your data in a structured format that can be easily inspected at any stage of the pipeline using a familiar tool: SQL.

I've been using this pattern (scripts or code that execute commands against DuckDB) to process data more recently, and the ability to do deep investigations on the data as you're designing the pipeline (or when things go wrong) is very useful. With a code-based solution (reading data into objects in memory), it's much harder to view the data. Debugging tools for inspecting objects on the heap are painful compared to being able to JOIN/WHERE/GROUP BY your data.
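A minimal sketch of that pattern, assuming a hypothetical two-stage pipeline where each stage is materialized as a table so every intermediate result stays queryable:

```shell
# Made-up input data for illustration.
printf 'user_id,event\n1,click\n,click\n2,view\n' > events.csv

# Each pipeline stage writes a table into a persistent DuckDB database.
duckdb pipeline.db <<'SQL'
CREATE OR REPLACE TABLE raw AS
  SELECT * FROM read_csv_auto('events.csv');

CREATE OR REPLACE TABLE cleaned AS
  SELECT * FROM raw WHERE user_id IS NOT NULL;
SQL

# When something looks wrong, inspect any stage with plain SQL:
duckdb pipeline.db "SELECT event, COUNT(*) FROM cleaned GROUP BY event;"
```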

groundzeros2015 | 1 month ago

Yep. It’s literally what SQL was designed for: your business website can be running it… then you write a shell script to also pull some data on a cron. It’s beautiful.

mrgoldenbrown | 1 month ago

IMHO the main point of the article is that the typical unix command pipeline IS parallelized already.

The bottleneck in the example was maxing out disk IO, which I don't think DuckDB can help with.
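That built-in parallelism is visible even in a toy pipeline (stages made up for illustration): each stage is its own process, and the kernel schedules them concurrently while streaming data through pipe buffers.

```shell
# Three separate processes run at once; stage 1 keeps producing while
# stage 3 is still consuming, with pipe buffers in between.
seq 1 100000 | awk '{print $1 % 10}' | sort -n | uniq -c
# every residue 0-9 appears exactly 10000 times
```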

chuckadams | 1 month ago

Pipes are parallelized when you have unidirectional data flow between stages. They really kind of suck for fan-out and joining though. I do love a good long pipeline of do-one-thing-well utilities, but that design still has major limits. To me, the main advantage of pipelines is not so much the parallelism, but being streams that process "lazily".

On the other hand, unix sockets combined with socat can perform some real wizardry, but I never quite got the hang of that style.