top | item 46436422

carbocation | 2 months ago

That repo is throwing up a 404 for me.

Question - did you consider tradeoffs between duckdb (or other columnar stores) and SQLite?

keepamovin|2 months ago

No, I just went straight to sqlite. What is duckdb?

simonw|2 months ago

One interesting feature of DuckDB is that it can run queries against HTTP ranges of a static file hosted via HTTPS, and there's an official WebAssembly build of it that can do that same trick.

So you can dump e.g. all of Hacker News in a single multi-GB Parquet file somewhere and build a client-side JavaScript application that can run queries against that without having to fetch the whole thing.

You can run searches on https://lil.law.harvard.edu/data-gov-archive/ and watch the network panel to see DuckDB in action.

fsiefken|2 months ago

DuckDB is an open-source column-oriented Relational Database Management System (RDBMS). It's designed to provide high performance on complex queries against large databases in an embedded configuration.

It has transparent compression built in and supports natural language queries. https://buckenhofer.com/2025/11/agentic-ai-with-duckdb-and-s...

"DICT FSST (Dictionary FSST) represents a hybrid compression technique that combines the benefits of Dictionary Encoding with the string-level compression capabilities of FSST. This approach was implemented and integrated into DuckDB as part of ongoing efforts to optimize string storage and processing performance." https://homepages.cwi.nl/~boncz/msc/2025-YanLannaAlexandre.p...

cess11|2 months ago

It is very similar to SQLite in that it can run in-process and store its data as a file.

It's different in that it's tailored to analytics: among other things, storage is columnar, and it can run directly off some common data-analytics file formats.

1vuio0pswjnm7|2 months ago

"What is duckdb?"

duckdb is a 45M dynamically-linked binary (amd64)

sqlite3 is a 1.7M statically-linked binary (amd64)

DuckDB is a 6yr-old project

SQLite is a 25yr-old project

jacquesm|2 months ago

Maybe it got nuked by MS? The rest of their repos are up.

keepamovin|2 months ago

Hey jacquesm! No, I just forgot to make it public.

BUT I did try to push the entire 10GB of shards to GitHub (no LFS, no thanks, money), and after the 20 minutes of compressing objects etc., got "the remote end hung up unexpectedly".

To be expected, I guess. I did not think GH Pages would be able to handle this. So I have been repeating:

  wrangler pages deploy docs --project-name static-news --commit-dirty=true
on changes. First-time CF Pages user here, and much impressed!

3eb7988a1663|2 months ago

While I suspect DuckDB would compress better, given the ubiquity of SQLite, it seems a fine standard choice.

peheje|2 months ago

The data is dominated by big, unique TEXT columns; I'm unsure how much better that could compress when grouped by column, but it would be interesting to know.

linhns|2 months ago

Not the author here. I’m not sure about DuckDB, but SQLite allows you to simply use a file as a database and for archiving, it’s really helpful. One file, that’s it.
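
The one-file property looks like this with Python's built-in sqlite3 module (the file and table names are just for the example):

```python
import sqlite3

# The single file on disk *is* the database; copying it archives everything.
con = sqlite3.connect("archive.db")
con.execute("CREATE TABLE IF NOT EXISTS items (id INTEGER PRIMARY KEY, title TEXT)")
con.execute("INSERT INTO items (title) VALUES (?)", ("hello",))
con.commit()

title = con.execute("SELECT title FROM items WHERE id = 1").fetchone()[0]
print(title)  # hello
con.close()
```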

cobolcomesback|2 months ago

DuckDB does as well. A super-simplified explanation of DuckDB is that it's SQLite but columnar, and so better suited to analytics on large datasets.

keepamovin|2 months ago

I forgot to set the repo to public. Fixed now.