top | item 42468828


Upitor | 1 year ago

In my experience, NWP data is big. Like, really big! Serving data over HTTP calls seems to limit the use case a bit; have you considered making it possible to mount storage directly (fsspec) and use e.g. the zarr format? That way, querying with xarray would be much more flexible.



ElPeque | 1 year ago

The point of GribStream is that you don't need to download the gridded data (because, as you say, it is huge, and it grows by many GB every single hour, going back years).

This is an API that does streaming extraction, so you only download what you actually need.

When you make the HTTP request to the API, the API may be processing up to terabytes of data only to respond to you with maybe a few KB of CSV.
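To make the "terabytes in, kilobytes out" idea concrete, here is a minimal client-side sketch. The query field names and the sample CSV are purely illustrative assumptions, not GribStream's actual API schema or response:

```python
import csv
import io
import json

# Hypothetical query payload -- field names are illustrative,
# NOT the real GribStream request schema.
query = {
    "start": "2024-01-01T00:00:00Z",
    "end": "2024-01-02T00:00:00Z",
    "coordinates": [{"lat": 40.75, "lon": -73.99}],
    "variables": ["TMP"],
}
body = json.dumps(query).encode()  # what you'd POST to the API

# The server stream-extracts from the GRIB2 archive and responds
# with only the cells you asked for, e.g. a few KB of CSV:
sample_response = (
    "time,lat,lon,TMP\n"
    "2024-01-01T00:00:00Z,40.75,-73.99,272.4\n"
)
rows = list(csv.DictReader(io.StringIO(sample_response)))
print(rows[0]["TMP"])  # the only data actually downloaded
```

The client never sees the multi-GB grids; it only parses the tiny extracted slice.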

westurner | 1 year ago

Why is weather data stored in netcdf instead of tensors or sparse tensors?

Also, SQLite supports virtual tables that can be backed by Content Range requests; https://www.sqlite.org/vtab.html

sqlite-wasm-http, sql.js-httpvfs; HTTP VFS: https://www.npmjs.com/package/sqlite-wasm-http

sqlite-parquet-vtable: https://github.com/cldellow/sqlite-parquet-vtable

Could there be a sqlite-netcdf-vtable or a sqlite-gribs-vtable, or is the dimensionality too much for SQLite?
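One way to see the dimensionality question: a dense n-dimensional variable flattens to one SQLite row per cell, which explodes quickly. A stdlib-only sketch (a real vtable would read the netCDF/GRIB lazily instead of materializing rows; this just illustrates the row count):

```python
import itertools
import sqlite3

# Flatten a tiny dense 3-D variable into one row per (lon, lat, t) cell.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE tmp (lon REAL, lat REAL, t INTEGER, value REAL)")

lons, lats, times = [0.0, 0.25], [50.0, 50.25, 50.5], range(4)
con.executemany(
    "INSERT INTO tmp VALUES (?, ?, ?, ?)",
    ((lon, lat, t, lon + lat + t)  # dummy value per cell
     for lon, lat, t in itertools.product(lons, lats, times)),
)

# 2 * 3 * 4 = 24 rows for this toy grid; a 0.25-degree global hourly
# dataset is ~1e6 cells per variable per time step.
(n,) = con.execute("SELECT COUNT(*) FROM tmp").fetchone()
print(n)  # 24

# Point queries by coordinate still work fine at this scale:
(val,) = con.execute(
    "SELECT value FROM tmp WHERE lon = 0.25 AND lat = 50.5 AND t = 3"
).fetchone()
print(val)  # 53.75
```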

From https://news.ycombinator.com/item?id=31824578 :

> It looks like e.g. sqlite-parquet-vtable implements shadow tables to memoize row group filters. How does JOIN performance vary amongst sqlite virtual table implementations?

https://news.ycombinator.com/item?id=42264274

SpatiaLite does geo vector search with SQLite.

datasette can JOIN across multiple SQLite databases.

Perhaps datasette and datasette-lite could support xarray, and thus NetCDF-style multidimensional arrays, in WASM in the browser, with HTTP Content Range requests to fetch and cache just the data requested.

"The NetCDF header": https://climateestimate.net/content/netcdfs-and-basic-coding... :

> The header can also be used to verify the order of dimensions that a variable is saved in (which you will have to know to use, unless you’re using a tool like xarray that lets you refer to dimensions by name) - for a 3-dimensional variable, `lon,lat,time` is common, but some files will have the `time` variable first.

"Loading a subset of a NetCDF file": https://climateestimate.net/content/netcdfs-and-basic-coding...

From https://news.ycombinator.com/item?id=42260094 :

> xeus-sqlite-kernel > "Loading SQLite databases from a remote URL" https://github.com/jupyterlite/xeus-sqlite-kernel/issues/6#i...

  %FETCH <url> <filename>

tomnicholas1 | 1 year ago

> Why is weather data stored in netcdf instead of tensors or sparse tensors?

NetCDF is a "tensor", at least in the sense of being a self-describing multi-dimensional array format. The bigger problem is that it's not a Cloud-Optimized format, which is why Zarr has become popular.
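The cloud-optimized part comes from Zarr's chunking: each fixed-size chunk is a separate object whose key encodes its grid position, so a reader fetches only the chunks covering a query instead of the whole file. A small sketch of the Zarr-v2-style key scheme:

```python
# Each chunk of a Zarr array is stored under a key like "4.1.3",
# i.e. the chunk's index along each dimension joined by dots.
def chunk_key(index, chunk_shape):
    """Map an element index to the key of the chunk containing it."""
    return ".".join(str(i // c) for i, c in zip(index, chunk_shape))

# A (time=100, lat=721, lon=1440) array stored in (10, 256, 256) chunks:
print(chunk_key((42, 300, 1000), (10, 256, 256)))  # -> "4.1.3"
```

A point query touches one chunk key, and a cloud object store (or a plain HTTP server) can serve each key independently; no seeking through a monolithic NetCDF header.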

> Also, SQLite supports virtual tables that can be backed by Content Range requests

The multi-dimensional equivalent of this is "virtual Zarr". I made this library to create virtual Zarr stores pointing at archival data (e.g. netCDF and GRIB)

https://github.com/zarr-developers/VirtualiZarr
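A rough sketch of the virtual-store idea: a manifest maps each chunk key to a byte range inside the archival file, so chunk reads become HTTP Range requests without rewriting any data. The manifest entries and paths below are made up for illustration, not VirtualiZarr's actual on-disk format:

```python
# Hypothetical manifest: chunk key -> (path, byte offset, byte length)
# inside an archival netCDF/GRIB file. Values are illustrative only.
manifest = {
    "0.0.0": ("s3://bucket/era5.nc", 4096, 131072),
    "0.0.1": ("s3://bucket/era5.nc", 135168, 131072),
}

def range_request(key):
    """Resolve one virtual chunk to the HTTP Range header that fetches it."""
    path, offset, length = manifest[key]
    return path, f"bytes={offset}-{offset + length - 1}"

print(range_request("0.0.1"))
```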

> xarray and thus NetCDF-style multidimensional arrays in WASM in the browser with HTTP Content Range requests to fetch and cache just the data requested

Pretty sure you can do this today already using Xarray and fsspec.

ElPeque | 1 year ago

In theory it could be done. It is sort of analogous to what GribStream is doing already.

The grib2 files are the storage. They are sorted by time in the path, so the path works like a primary index. And then grib2 is just a binary format to decode in order to extract what you want.
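The "path as primary index" trick can be sketched in a few lines: a timestamp resolves directly to a file path, with no lookup table in between. The template here is an illustrative NOMADS-style layout, not GribStream's actual one:

```python
from datetime import datetime, timezone

# Hypothetical time-partitioned layout (NOMADS-style, for illustration).
TEMPLATE = "hrrr.{d:%Y%m%d}/conus/hrrr.t{d:%H}z.wrfsfcf00.grib2"

def grib_path(ts: datetime) -> str:
    """Resolve a forecast timestamp straight to its GRIB2 file path,
    using the path itself as the primary index (no database lookup)."""
    return TEMPLATE.format(d=ts)

print(grib_path(datetime(2024, 1, 2, 6, tzinfo=timezone.utc)))
# -> "hrrr.20240102/conus/hrrr.t06z.wrfsfcf00.grib2"
```

A time-range query then enumerates exactly the paths it needs and decodes only those files.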

I originally was going to write this as a plugin for ClickHouse, but in the end I made it a Golang API because then I'm less constrained. For example, I'd like to create an endpoint to live-encode the GRIB files into MP4 so the data can be served as video. Then, with any video player, you would be able to play back, jump to times, etc.

I might still write a ClickHouse integration though, because it would be amazing to join and combine with other datasets on the fly.