Everyone asking why this exists when DuckDB, PostGIS, or the JVM-based Sedona already exist clearly has not run into the painful experience of working on large geospatial workloads when the legacy options are either not viable or not an option for other reasons, which happens more often than you might expect! And the CRS awareness!!! Incredible! This is such a huge source of error when you throw in folks who are doing their best but don't have a lot of experience with GIS workloads. Very expensive queries have had to be rerun, with drastic changes to the results, because someone got their CRS mixed up.
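To make that failure mode concrete, here is a minimal stdlib-only sketch (not SedonaDB code) of why mixing up a geographic CRS (degrees, EPSG:4326-style) with a projected one (metres, EPSG:3857-style) silently distorts results. The `web_mercator` helper is hand-rolled for illustration, not a library call:

```python
import math

# Spherical Web Mercator (EPSG:3857) forward transform.
EARTH_RADIUS_M = 6378137.0  # WGS 84 semi-major axis, as used by Web Mercator

def web_mercator(lon_deg: float, lat_deg: float) -> tuple[float, float]:
    """Project geographic degrees to Web Mercator metres."""
    x = EARTH_RADIUS_M * math.radians(lon_deg)
    y = EARTH_RADIUS_M * math.log(math.tan(math.pi / 4 + math.radians(lat_deg) / 2))
    return x, y

# A point near Berlin, in degrees...
lon, lat = 13.4, 52.5
x, y = web_mercator(lon, lat)
print(x, y)  # coordinates in the millions of metres, nothing like (13.4, 52.5)

# ...so a buffer of "5" means ~550 km (5 degrees) in one CRS and
# 5 metres in the other: same query text, wildly different answer.
```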
I don't get to do geospatial work as much anymore, but I would have killed for this just a year ago.
I usually start with PostGIS for single-node workloads and then switch to Exasol when I get to truly massive datasets (Exasol has a more limited set of spatial operators, but scales effortlessly across multiple nodes).
It will be great to have some more options in this space, especially if they make for a smooth transition from single-node/local interactions to multi-node scale-out.
Somehow I don't see this as applicable to 90% of current spatial needs, where PostGIS is just right, and IMHO the same goes for DuckDB. There is perhaps the 10% of businesses where the data is so immense that you want to hit it with Rust and whatnot, but all the others do just fine in Postgres.
My bet is that most of the actually useful spatial ST_ functions are not implemented in this one, as they are not in the DuckDB offering.
I wrote a book on PostGIS and used it for years, and these single-node analytical tools make sense when PostGIS performance starts to break down. For many tasks PostGIS works great, but again you are limited by the fact that your tables have to live in the DB and can only scale as far as the computing resources you have allocated.
In terms of the number of functions, PostGIS is still the leader, but for analytical functions (spatial relationships, distances, etc.) having those in place in these systems is important. DuckDB started this, but SedonaDB has a spatially focused engine. You can use the two together: PostGIS for transactional processing and queries, and SedonaDB for analytical processing and data prep.
A combination of tools makes a lot of sense here especially as the data starts to grow.
SedonaDB can decode PROJJSON and authority:code CRSes at the moment, although the underlying representation is just a string. In this case you might want something like CZBOND:999 or a custom PROJJSON definition.
I thought Apache Sedona is implemented in Java/Scala for distributed runtimes like Spark and Flink. Wouldn't Rust tooling for interactive use be built atop a completely different stack?
It's built on a separate stack, but conceptually it's very similar (DataFusion shares a number of idioms with Spark and has a number of projects implementing various levels of Spark compatibility)...I think the idea was to bring the successful pieces of Sedona Spark to a wider audience.
Currently: lazier GeoParquet reads, a K-nearest-neighbours join, coordinate reference system (CRS) tracking, and built-in GeoPandas IO. These aren't things that DuckDB spatial can't or won't do, but they are things that DuckDB hasn't prioritized over the last year that are essential to a lot of spatial pipelines.
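For readers unfamiliar with the operation: a K-nearest-neighbours join pairs each row of one table with its k closest rows from another. A stdlib-only brute-force sketch of the semantics (real engines like SedonaDB use spatial indexes rather than this O(n·m) scan; the names here are illustrative):

```python
import math
from typing import Sequence

def knn_join(left: Sequence[tuple[float, float]],
             right: Sequence[tuple[float, float]],
             k: int) -> list[tuple[int, list[int]]]:
    """For each left point, return the indices of its k nearest right points."""
    result = []
    for i, (ax, ay) in enumerate(left):
        # Rank every right-side point by Euclidean distance to (ax, ay).
        ranked = sorted(range(len(right)),
                        key=lambda j: math.hypot(right[j][0] - ax, right[j][1] - ay))
        result.append((i, ranked[:k]))
    return result

stores = [(0.0, 0.0), (10.0, 10.0)]
customers = [(1.0, 1.0), (9.0, 9.0), (0.5, 0.0)]
print(knn_join(stores, customers, k=2))
# each store index paired with the indices of its 2 closest customers
```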
While DuckDB is excellent, I've found the spatial extension still has some rough edges compared to more mature solutions like PostGIS.
1. The latitude/longitude ordering for points differs from PostGIS and most standard geospatial libraries, which creates friction due to muscle memory.
2. Anecdotal: spatial joins haven't matched PostGIS performance for similar operations, though this may vary by use case and data size.
3. The spatial extension has a backlog of long-standing GitHub issues.
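The axis-order point is worth spelling out: OGC-style conventions, including PostGIS's ST_Point, take (x, y) = (longitude, latitude), and swapping them silently produces a valid-looking but wrong geometry. A stdlib sketch of the trap (the `wkt_point` helper is illustrative, not any library's API):

```python
def wkt_point(x: float, y: float) -> str:
    """WKT puts x (longitude/easting) first, then y (latitude/northing)."""
    return f"POINT ({x} {y})"

lon, lat = 13.4, 52.5  # Berlin

correct = wkt_point(lon, lat)  # "POINT (13.4 52.5)"
swapped = wkt_point(lat, lon)  # still valid WKT, but thousands of km away

print(correct, swapped)
# Both parse fine, so nothing errors out; the bug only shows up later
# as subtly (or wildly) wrong query results.
```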
I’ve been out of the geo loop for a while, and I’m struggling to understand why I’d use this over PostGIS. There used to be the argument that installing extensions was painful, but now that Docker exists, pulling the postgis image is just as easy as plain Postgres. And RDS has supported it for a while.
PostGIS is great when your data is already in a Postgres table! SedonaDB and DuckDB are much faster when your data starts elsewhere (e.g., GeoParquet files).
Agreed that the polars interface is far superior to SQL! There are a few ways to do this if there's interest...polars wasn't an option because we needed Arrow extension types (https://github.com/pola-rs/polars/issues/9112).
You’re absolutely asking the right question. As we noted in the Future Work section of the SpatialBench results (https://sedona.apache.org/spatialbench/single-node-benchmark...), this benchmark is focused on geospatial analytical queries. For these workloads, features like columnar layout, vectorized execution, zero-copy data sharing, and zero SerDe overhead provide huge performance benefits.
While PostGIS is often used for spatial analytics because of its rich spatial function coverage, it is fundamentally a transactional database. This design makes it less suited for analytical query performance, and including it directly in SpatialBench would risk claims of being an “apples-to-oranges” comparison. That’s why we exclude PostGIS from the published benchmark results.
That said, we do continuously validate against PostGIS. For every single function in SedonaDB, we maintain an automated PyTest benchmark framework (https://github.com/apache/sedona-db/tree/main/benchmarks) that compares both speed and correctness against DuckDB and PostGIS. This ensures we catch regressions early and guarantees correctness. You can even run these benchmarks yourself to see how SedonaDB performs. It is often extremely fast in practice.
Rust is a good language for performant computing in general, but especially for data projects because there are so many great OSS data libraries like DataFusion and Arrow.
SedonaDB currently supports SQL, Python, R, and Rust APIs. We can support APIs for other languages in the future. That's another nice part about Rust. There are lots of libraries to expose other language bindings to Rust projects.
From the README:
> Update (August 2024): GeoPolars is blocked on Polars supporting Arrow extension types, which would allow GeoPolars to persist geometry type information and coordinate reference system (CRS) metadata. It's not feasible to create a geopolars.GeoDataFrame as a subclass of a polars.DataFrame (similar to how the geopandas.GeoDataFrame is a subclass of pandas.DataFrame) because Polars explicitly does not support subclassing of core data types.
SedonaDB builds on libraries in the Rust ecosystem, like Apache DataFusion, to provide users with a nice geospatial DataFrame experience. It has functions like ST_Intersects that are common in spatial libraries, but not standard in most DataFrame implementations.
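To illustrate what a predicate like ST_Intersects does in DataFrame terms, here is a stdlib-only sketch restricted to axis-aligned bounding boxes (real engines test full geometries, typically with GEOS-style algorithms; this rectangle version only shows the filtering semantics, and all names are made up):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class BBox:
    xmin: float
    ymin: float
    xmax: float
    ymax: float

def bbox_intersects(a: BBox, b: BBox) -> bool:
    """ST_Intersects-style predicate, restricted to bounding boxes.

    Two boxes intersect unless one lies strictly to one side of the other;
    touching edges count as intersecting, matching ST_Intersects semantics.
    """
    return not (a.xmax < b.xmin or b.xmax < a.xmin or
                a.ymax < b.ymin or b.ymax < a.ymin)

parcels = {"p1": BBox(0, 0, 2, 2), "p2": BBox(5, 5, 6, 6)}
flood_zone = BBox(1, 1, 4, 4)

# The DataFrame-style filter: keep rows whose geometry intersects the zone.
hits = [name for name, box in parcels.items() if bbox_intersects(box, flood_zone)]
print(hits)  # ['p1']
```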
There are other good alternatives, such as GeoPandas and DuckDB Spatial. SedonaDB has Python/SQL APIs and is very fast. New features like full raster support and compatibility with lakehouse formats are coming soon!
czbond|5 months ago
For example, what if I wanted to define a 4D region called (fish, towel, mouse, alien), where there were floats for each of fish/towel/mouse/alien?
paleolimbot|5 months ago
{
  "type": "EngineeringCRS",
  "name": "Fish, Towel, Mouse",
  "datum": {"name": "Wet Kitty + Mouse In Peril"},
  "coordinate_system": {
    "subtype": "Cartesian",
    "axis": [
      {"name": "Fish", "abbreviation": "F", "direction": "east"},
      {"name": "Towel", "abbreviation": "T", "direction": "north"},
      {"name": "Mouse", "abbreviation": "M", "direction": "up"}
    ]
  }
}
(Subject to the limitations of PROJJSON, such as a 4D CRS having a temporal axis and a limited set of acceptable "direction" values)
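Since the underlying representation is just a string, a consumer can pull the axis metadata back out with nothing but the stdlib json module. A sketch (note that strict JSON forbids trailing commas, so the string below uses the comma-free form of the CRS above):

```python
import json

# The EngineeringCRS from the comment above, as a strict-JSON string.
crs_json = """
{
  "type": "EngineeringCRS",
  "name": "Fish, Towel, Mouse",
  "datum": {"name": "Wet Kitty + Mouse In Peril"},
  "coordinate_system": {
    "subtype": "Cartesian",
    "axis": [
      {"name": "Fish", "abbreviation": "F", "direction": "east"},
      {"name": "Towel", "abbreviation": "T", "direction": "north"},
      {"name": "Mouse", "abbreviation": "M", "direction": "up"}
    ]
  }
}
"""

crs = json.loads(crs_json)
axes = [(a["name"], a["direction"]) for a in crs["coordinate_system"]["axis"]]
print(crs["type"], axes)
# EngineeringCRS [('Fish', 'east'), ('Towel', 'north'), ('Mouse', 'up')]
```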
WD-42|5 months ago
What am I missing? The API even looks the same.
0x9e3779b6|5 months ago
It comes as a disappointment to me that SedonaDB hasn't adopted a similar approach.
The Apache stack provides everything needed, but for small things I would prefer not to use SQL.
jedisct1|5 months ago
What does it do besides being written in Rust?
tomtom1337|5 months ago
As someone who has had to use geopandas a lot, having something which is up to an order of magnitude faster is a real dream come true.