We do something similar for limited geospatial search using Elasticsearch. We generate a set of H3 indexes for each of the hundreds of millions of GPS recordings on our service and store them in Elasticsearch. Geospatial queries become full-text search queries, where a point is on the line if the set of H3 indexes contains the point. You can query on how many cells overlap, which lets you match geospatial tracks on the same paths, and with ES coverage queries you can tune how much overlap you want.
Instead of using integer IDs for the hexes, we created an encoded version of the ID with the property that removing a character gets you the containing parent of the cell. This means we can do basic containment queries by using a low-resolution hex (a short string) as a prefix query. If a GPS track goes through this larger parent cell, the track will have hexes with the same prefix. You don't get perfect control of distances because hexes have varying diameters (or rather varying approximations of one, since they aren't circles, they're hexes), but in practice, and at scale, for a product that doesn't require high precision, it's very effective.
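The prefix trick can be sketched in a few lines. This is a hypothetical toy encoding, not the commenter's actual scheme: each cell ID is a string of child digits, so truncating one character yields the parent cell, and containment becomes a string-prefix test (which Elasticsearch serves as a prefix query).

```python
# Toy prefix-encoded cell hierarchy (illustrative scheme, assumed here):
# each character appends one level of resolution, so the parent of a cell
# is the same string with the last character dropped.

def parent(cell: str) -> str:
    """One-level-coarser ancestor: drop the last child digit."""
    return cell[:-1]

def track_intersects(track_cells: set[str], query_cell: str) -> bool:
    """True if any cell of the track lies inside query_cell.
    In Elasticsearch this maps to a prefix query on the cell field."""
    return any(c.startswith(query_cell) for c in track_cells)

# A GPS track covered by three fine-resolution cells:
track = {"0361245", "0361246", "0361301"}
assert parent("0361245") == "036124"
assert track_intersects(track, "03612")     # coarse parent cell: hit
assert not track_intersects(track, "0442")  # different region: miss
```

The same idea works in any store that supports prefix or range scans over sorted string keys.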
I think by the end of this year we'll have about 6 TB of these hex sets in a four-node, eight-process ES cluster. Performance is pretty good. It also acts as our full-text search; half the time we want a geo search, we also want keyword search, filtering, etc. on the metadata of these trips.
Pretty fun system to build, and the concept works with a wide variety of data stores. It felt like a total hack job, but it has stood the test of time. Thanks Uber, H3 is a great library!
Elasticsearch and OpenSearch have a built-in geo_shape type that is better optimized for queries like this.
Before that existed (pre-1.0, actually), I did something similar with geohashes, which are similar to H3 but based on simple string-encoded quadtrees. I indexed all the street segments in OpenStreetMap that way (~800 million at the time) and implemented a simple reverse geocoder. It worked shockingly well.
The geo_shape type uses a BKD tree in a binary format. It's heavily optimized for these kinds of intersects/overlaps queries at scale. It basically does the same thing but uses a lot less disk space and memory, and it's similar to what you would find in proper GIS databases. Elasticsearch/OpenSearch also support H3 and geohash grid aggregations on top of geo_shape or geo_point types.
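For concreteness, an intersects query against a geo_shape field looks roughly like the following, written as a Python dict. The field name "geometry" and the envelope coordinates are placeholders; the general shape of the body follows the Elasticsearch geo_shape query DSL.

```python
# Sketch of an Elasticsearch geo_shape query body (field name "geometry"
# is an assumed placeholder). "relation" may be intersects, within,
# contains, or disjoint.
query = {
    "query": {
        "geo_shape": {
            "geometry": {
                "shape": {
                    "type": "envelope",
                    # envelope = [[min_lon, max_lat], [max_lon, min_lat]]
                    "coordinates": [[-122.5, 37.9], [-122.3, 37.7]],
                },
                "relation": "intersects",
            }
        }
    }
}
```

The engine answers this from the BKD tree directly, without materializing cell sets per document.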
I'm guessing the author is using something like PostgreSQL, which of course has similar geospatial indexing support via PostGIS.
Awesome comment, thanks for sharing the details. I love this kind of pragmatic optimization. Also, one dev's "total hack* job" [e.g. yourself, in the past] is another's stroke of genius.
* I'd frame it as "kludge", reserving "hack" for the positive HN sense. :)
There is a lot of literature on join operations using discrete global grid systems (DGGS). H3 is a widely used DGGS optimized for visualization.
If joins are a critical, performance-sensitive operation, the most important property of a DGGS is congruency. H3 is not congruent; it was optimized for visualization, where congruency doesn't matter, rather than for analytical computation. For example, the article talks about deduplication, which isn't even necessary with a congruent DGGS. You can do joins with H3, but as a general rule it isn't recommended unless the data is small enough that you can afford to brute-force it to some extent.
H3 is great for doing point geometry aggregates. It shines at that. Not so much geospatial joins, though. DGGS optimized for analytical computation (and joins by implication) exist; they just aren't optimal for trivial visualization.
I agree that the lack of congruency in H3 hexagons can cause weird overlaps and gaps if you plot mixed resolutions naively, but there are some workarounds that work pretty well in practice. For example, if you have mixed resolutions from compacted H3 cells but a single “logical” target resolution underneath, you can plot the coarser cells not with their native geometry, but using the outline of their children. When you do that, there are no gaps. (Totally unrelated but fun: that shape is a fractal sometimes called a "flowsnake" or a "Gosper Island" (https://en.wikipedia.org/wiki/Gosper_curve), which predates H3 by decades.)
That said, this feels like an issue with rendering geometry rather than with the index itself. I’m curious to hear more about why you think the lack of congruency affects H3’s performance for spatial joins. Under the hood, it’s still a parent–child hierarchy very similar to S2’s — H3 children are topological rather than geometric children (even though they still mostly overlap).
Not familiar with geo stuff / DGGS. Is H3 not congruent because hexagons, unlike squares or triangles, do not tile the plane perfectly?
I mean: could a system using hexagons ever be congruent?
There are some competing grid systems with similar features and benefits as well.
Notably A5 (https://a5geo.org/), which has the property that each cell covers exactly the same area, even when stretched towards the north and south poles. It's useful for spatial analysis where you need every cell to have the same size.
Z-order-based indexes avoid the resolution problem. Basically:
- Generate z-values for spatial objects: points map to a single z-value at the highest resolution of the space; non-points map to multiple z-values. Each z-value is represented by a single integer. (I use 64-bit z-values, which provide for a space resolution of 56 bits.) Each integer represents a 1-d range; e.g., 0x123 would represent 0x123000 through 0x123fff.
- Spatial join is basically a merge of these z-values. If you are joining one spatial object with a collection of N spatial objects, the time is O(log N). If you are joining two collections, it's more of a linear-time merge.
For more information: PROBE Spatial Data Modeling and Query Processing in an Image Database Application. IEEE Trans. Software Eng. 14(5): 611-629 (1988)
An open source Java implementation: https://github.com/geophile/geophile. (The documentation includes a number of corrections to the published algorithm.)
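The scheme above can be sketched in a few lines. This is a minimal illustration under assumed small bit widths (8 bits per axis rather than the comment's 64-bit z-values); the interleaving and the prefix-to-range mapping are the standard Morton/z-order construction.

```python
# Morton/z-order sketch: interleave x and y bits so nearby points share
# z-value prefixes, and a prefix denotes a contiguous 1-d range.

def interleave(x: int, y: int, bits: int = 8) -> int:
    """Bit i of x goes to output bit 2i+1, bit i of y to output bit 2i."""
    z = 0
    for i in range(bits):
        z |= ((x >> i) & 1) << (2 * i + 1)
        z |= ((y >> i) & 1) << (2 * i)
    return z

def prefix_range(z_prefix: int, prefix_bits: int, total_bits: int = 16):
    """A z-value prefix covers a contiguous range of the curve,
    matching the 0x123 -> [0x123000, 0x123fff] example above."""
    shift = total_bits - prefix_bits
    lo = z_prefix << shift
    hi = lo | ((1 << shift) - 1)
    return lo, hi

# The text's own example, with 24 total bits:
assert prefix_range(0x123, 12, 24) == (0x123000, 0x123FFF)

# Two nearby points end up numerically close in z-order:
z1 = interleave(3, 5)   # 27
z2 = interleave(3, 4)   # 26
lo, hi = prefix_range(z1 >> 4, 12)  # coarsen by dropping 4 bits
assert lo <= z1 <= hi
```

The join then reduces to merging sorted lists of such ranges, which is where the log-time probe and linear-time merge costs come from.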
Ohh, every geo-join/spatial thing with a picture of those small cells over a map is such a pet peeve of mine: Facebook Marketplace, Craigslist, Tinder, any app with "proximity search".
No, this city isn’t 4 miles from my city. There is a literal lake between us. It’s 10+ miles.
Please, invent something, do some precomputation, but avoid naive-ish searches.
I wrote a post about using H3, or any DGGS for that matter. Yes, it speeds things up, but you lose accuracy. If search is the primary concern it can help, but if any level of accuracy matters, I would just use a better engine with GeoParquet to handle it. https://sedona.apache.org/latest/blog/2025/09/05/should-you-...
You can do this with S2 (https://s2geometry.io/) as well, which has the very nice property that parent cells always contain their children, and sorting the cell IDs puts them in in-order traversal order. Uber publishes a comparison: https://h3geo.org/docs/comparisons/s2/
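The containment property can be illustrated with a toy range model (this is not the real 64-bit S2 cell ID encoding): if every parent's ID range exactly covers its children's ranges, a sorted list of cells supports containment checks with a plain binary search.

```python
import bisect

# Model a cell as (lo, hi): the range of curve positions it covers.
def children(lo: int, hi: int, fanout: int = 4):
    """Split a parent's range into `fanout` equal child ranges."""
    step = (hi - lo + 1) // fanout
    return [(lo + i * step, lo + (i + 1) * step - 1) for i in range(fanout)]

root = (0, 63)
kids = children(*root)
# Every child range is contained in the parent's range:
assert all(root[0] <= lo and hi <= root[1] for lo, hi in kids)

# Containment query: which indexed cell holds curve position 37?
cells = sorted(kids)                      # sorted by range start
starts = [lo for lo, _ in cells]
i = bisect.bisect_right(starts, 37) - 1   # rightmost cell starting <= 37
assert cells[i][0] <= 37 <= cells[i][1]   # found: (32, 47)
```

Real S2 gets the same effect by encoding the Hilbert-curve position and level into one integer, so sorted IDs give you this structure for free.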
The big reason is that H3 is data-independent. You put your data in predefined bins and then join on them, whereas k-d trees and R-trees depend on the data, and building those trees may become prohibitive or very hard (especially in distributed systems).
Why does H3 use a non-overlapping set of hexagons? A square grid would make it even simpler and faster to calculate. I'm perfectly happy to believe that a hex grid works better for some reason, but what is that reason?
The main design goal was to make the distance between neighbours constant. With squares, you have four side neighbours and four corner neighbours, at two different distances. With hexagons, it's easier to interpolate paths and analyse distances.
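This is quick to verify numerically, using the usual axial-coordinate model for hex grids (assumed here, not taken from the comment):

```python
import math

# Square grid: 8 neighbours at two distinct centre distances.
square_nbrs = [(dx, dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1)
               if (dx, dy) != (0, 0)]
square_dists = {round(math.hypot(dx, dy), 6) for dx, dy in square_nbrs}
assert len(square_dists) == 2  # 1.0 (sides) and sqrt(2) (corners)

# Hex grid (axial coordinates mapped to the plane): 6 equidistant neighbours.
def axial_to_xy(q, r):
    return q + r / 2, r * math.sqrt(3) / 2

hex_nbrs = [(1, 0), (-1, 0), (0, 1), (0, -1), (1, -1), (-1, 1)]
hex_dists = {round(math.hypot(*axial_to_xy(q, r)), 6) for q, r in hex_nbrs}
assert len(hex_dists) == 1     # a single distance for every neighbour
```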
Nice writeup; terrible website, though, garbling the page history.
I wonder how it compares with geohashing. I know geohashing is not as efficient in terms of partitioning, and queries end up weird since you need to manage decoding neighbour cells, but finding all elements of a cell is a "starts with" query, which lets you store data effectively on most NoSQL databases with some sort of text sorting.