lukekim | 1 year ago
Originally, we only supported DuckDB in our cloud product, Spice Firecache, but we actually lost a customer because their use case was better served by an OLTP database. Now you get a choice of engine, down to the dataset level, and can still join across them in a single query. With Spice, you can load both SQLite and DuckDB together in the same process for local materialization and acceleration.
Finally, Spice OSS does more than just query data. You can read about the vision to power AI-driven applications by co-locating data with models at https://docs.spiceai.org/intelligent-applications.
riku_iki | 1 year ago
My understanding is that if you run some SQL in DuckDB against Postgres using the postgres extension, say `SELECT * FROM t WHERE id = 2;`, it will perform the actual lookup on the Postgres server, but the results will be accessible in DuckDB.
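For reference, the DuckDB flow being described looks roughly like this with the postgres extension; the connection string and table name here are placeholders, and it assumes a reachable Postgres server (the filter can be pushed down to Postgres):

```sql
INSTALL postgres;
LOAD postgres;

-- Attach a remote Postgres database (placeholder connection string)
ATTACH 'dbname=mydb host=localhost user=me' AS pg (TYPE postgres);

-- The lookup runs against Postgres; results land in DuckDB
SELECT * FROM pg.public.t WHERE id = 2;
```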
> With Spice, you can load both SQLite and DuckDB together in the same process for local materialization and acceleration.
You can do this in any Python, Java, C++, or whatever program...
lukekim | 1 year ago
## Using DuckDB:
app -> duckdb -> network -> remote postgres (data) | local postgres (materialization)
## Using Spice:
app -> localhost gRPC/HTTP -> [Spice <duckdb|sqlite>] -> network -> [postgres|S3|snowflake|etc]
In addition, Spice manages the materialization for you. In the DuckDB-only case, you'd have to do a COPY FROM [remote postgres] to [local postgres] manually every time and manage the data lifecycle yourself. That gets even more complicated if you want append or incremental updates to your local materialization.
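To make the manual-lifecycle point concrete, here's a minimal sketch of that copy-and-refresh loop. Both sides are stdlib `sqlite3` connections so the sketch actually runs; in the DuckDB-only setup the "remote" side would be Postgres, and the table, columns, and `id`-based watermark are invented for illustration:

```python
import sqlite3

# Stand-ins: "remote" plays the source database, "local" the materialization.
remote = sqlite3.connect(":memory:")
local = sqlite3.connect(":memory:")

remote.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, v TEXT)")
remote.executemany("INSERT INTO t VALUES (?, ?)", [(1, "a"), (2, "b")])

local.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, v TEXT)")

def refresh(remote, local):
    """Incremental append: copy only rows newer than the local watermark."""
    (watermark,) = local.execute("SELECT COALESCE(MAX(id), 0) FROM t").fetchone()
    rows = remote.execute(
        "SELECT id, v FROM t WHERE id > ?", (watermark,)
    ).fetchall()
    local.executemany("INSERT INTO t VALUES (?, ?)", rows)
    local.commit()
    return len(rows)

print(refresh(remote, local))  # first run copies both existing rows -> 2
remote.execute("INSERT INTO t VALUES (3, 'c')")
print(refresh(remote, local))  # next run copies only the new row -> 1
```

Even this toy version has to track a watermark, handle commits, and be re-run on a schedule; that's the lifecycle management Spice is taking off your hands.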