I agree that the lack of congruency in H3 hexagons can cause weird overlaps and gaps if you plot mixed resolutions naively, but there are workarounds that work well in practice. For example, if you have mixed resolutions from compacted H3 cells but a single "logical" target resolution underneath, you can plot the coarser cells not with their native geometry, but with the merged outline of their children at that target resolution (see the sketch at the end of this comment). When you do that, there are no gaps. (Totally unrelated but fun: that outline is a fractal sometimes called a "flowsnake" or "Gosper Island" (https://en.wikipedia.org/wiki/Gosper_curve), which predates H3 by decades.)

That said, this feels like an issue with rendering geometry rather than with the index itself. I'm curious to hear more about why you think the lack of congruency affects H3's performance for spatial joins. Under the hood, it's still a parent–child hierarchy very similar to S2's: H3 children are topological rather than geometric children (even though they still mostly overlap their parent).
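Concretely, the children-outline trick looks something like this, assuming the h3-py v4 API (the sample disk, resolutions, and helper name are just for illustration):

    import h3

    def gapless_outline(cell, target_res):
        # Instead of the cell's native hexagon, take the union outline of
        # its descendants at target_res; neighboring cells of mixed
        # resolutions then tile with no gaps or overlaps (the boundary is
        # the Gosper-island fractal mentioned above).
        children = h3.cell_to_children(cell, target_res)
        return h3.cells_to_h3shape(children)  # multipolygon-like H3Shape

    # Example: a compacted set with logical resolution 9.
    disk = h3.grid_disk(h3.latlng_to_cell(37.77, -122.42, 9), 10)
    shapes = [gapless_outline(c, 9) for c in h3.compact_cells(disk)]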
jandrewrogers|24 days ago
Congruency allows for much more efficient join schedules and maximizes selectivity. This minimizes data motion, which is particularly important as data volumes grow. Congruent shards also tend to be generally more computationally efficient, which adds up.
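To make the join-schedule point concrete, here is a toy model (illustrative ranges, not real S2 arithmetic): in a congruent hierarchy every cell owns a contiguous ID range covering exactly its descendants, so ranges from two datasets either nest or are disjoint, and a mixed-resolution join reduces to a single linear merge over two range-sorted lists, with no geometry tests and no false positives:

    def range_merge_join(a, b):
        # a, b: lists of (lo, hi, payload) half-open ID ranges, each list
        # sorted by lo with disjoint ranges (e.g. a compacted cell set).
        # Yields every pair whose ranges intersect; with congruent cells,
        # intersection implies one cell contains the other.
        i = j = 0
        while i < len(a) and j < len(b):
            if max(a[i][0], b[j][0]) < min(a[i][1], b[j][1]):
                yield a[i][2], b[j][2]
            # Advance whichever range ends first.
            if a[i][1] <= b[j][1]:
                i += 1
            else:
                j += 1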
The other important aspect not raised here is that congruent DGGSes have much more scalable performance when you use them to build online indexes during ingestion. This follows from them being much more concurrency-friendly.
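A rough sketch of why (toy IDs, not real cell arithmetic): because congruent top-level cells own disjoint, contiguous ID ranges covering all their descendants, each ingest worker can own one range and build its piece of the index without cross-worker coordination:

    from bisect import bisect_right

    # Four hypothetical top-level cells, each owning a contiguous ID range.
    RANGE_STARTS = [0, 1 << 60, 2 << 60, 3 << 60]

    def route(cell_id):
        # Pick the worker whose range contains cell_id (binary search).
        return bisect_right(RANGE_STARTS, cell_id) - 1

    incoming = [5, (1 << 60) + 7, (3 << 60) + 1, 9]   # fake cell IDs
    runs = [[] for _ in RANGE_STARTS]
    for cid in incoming:
        runs[route(cid)].append(cid)   # each run is touched by one worker only

    # Disjoint, ordered ranges: the global sorted index is just the
    # concatenation of independently sorted per-worker runs.
    index = [c for run in runs for c in sorted(run)]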
ajfriend|24 days ago
To me, the big selling point of H3 is that once you’re "in the H3 system", many operations don’t need to worry about geometry at all. Everything is discrete. H3 cells are nodes in a tree with prefixes that can be exploited, and geometry or congruency never really enter the picture at this layer.
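For instance, ancestry tests are pure 64-bit index arithmetic; a small sketch assuming the h3-py v4 API (the helper name is mine):

    import h3

    def is_ancestor(parent, child):
        # True if parent is an ancestor of child -- no coordinates or
        # polygons involved, just truncating the child's index.
        res = h3.get_resolution(parent)
        return (res <= h3.get_resolution(child)
                and h3.cell_to_parent(child, res) == parent)

    cell = h3.latlng_to_cell(37.7749, -122.4194, 9)  # geometry enters only here
    assert is_ancestor(h3.cell_to_parent(cell, 5), cell)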
Where geometry and congruency do come in is when you translate continuous data (points, polygons, and so on) into H3. In that scenario, I can totally see that congruency is a useful property for speed, and that H3 is probably slower than systems optimized for that conversion step.
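For reference, that conversion step looks something like this with the h3-py v4 API (the polygon and resolution are made up):

    import h3

    # A GeoJSON polygon (lng/lat order) rasterized into resolution-8 cells;
    # this is the step where cell geometry actually matters.
    geojson = {"type": "Polygon",
               "coordinates": [[[-122.45, 37.75], [-122.40, 37.75],
                                [-122.40, 37.80], [-122.45, 37.80],
                                [-122.45, 37.75]]]}
    cells = h3.h3shape_to_cells(h3.geo_to_h3shape(geojson), 8)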
However, in most applications I’ve seen, the continuous-to-H3 conversion happens upstream, or at least isn’t the bottleneck. The primary task is usually operating on already "hexagonified" data, such as joins or other set operations on discrete cell IDs.
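As a hypothetical sketch of what I mean (h3-py v4; assumes `res` is at or below the coarsest resolution present in either input):

    import h3
    from collections import defaultdict

    def join_on_cells(left, right, res):
        # left/right: dicts mapping H3 cell ID -> payload. Normalize every
        # key to the common resolution by truncating to its parent, then
        # do a plain hash join on the resulting IDs -- no geometry at all.
        buckets = defaultdict(lambda: ([], []))
        for cell, v in left.items():
            buckets[h3.cell_to_parent(cell, res)][0].append(v)
        for cell, v in right.items():
            buckets[h3.cell_to_parent(cell, res)][1].append(v)
        return {k: (l, r) for k, (l, r) in buckets.items() if l and r}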
Am I understanding the bottleneck correctly?