(no title)
gregw2 | 7 days ago
This at a minimum often involved adding a shard key back to the physical data, or partitioning and/or physically sorting the data in the "OLAP" layer. And a surprising number of CDC and ETL toolkits don't make it easy to parameterize a single code/configuration base, nor do they handle situations like shards being down for maintenance at different times, fetching data from each shard at a time of day set by its end-of-day, or dealing with retransmissions, reconciliation, gaps, and per-shard data quality once you're back in an unsharded landscape. SQL UNION ALL to reunite shards works, until it doesn't.
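For concreteness, the "works until it doesn't" setup I mean usually looks something like this: each shard mounted as a foreign table via postgres_fdw and glued back together with a view. A minimal sketch; the host names, credentials, table, and columns are all hypothetical:

    -- Mount each shard with postgres_fdw, then UNION ALL them back together.
    CREATE EXTENSION IF NOT EXISTS postgres_fdw;

    CREATE SERVER shard1 FOREIGN DATA WRAPPER postgres_fdw
        OPTIONS (host 'shard1.internal', dbname 'app');
    CREATE SERVER shard2 FOREIGN DATA WRAPPER postgres_fdw
        OPTIONS (host 'shard2.internal', dbname 'app');

    CREATE USER MAPPING FOR CURRENT_USER SERVER shard1
        OPTIONS (user 'report', password 'hypothetical');
    CREATE USER MAPPING FOR CURRENT_USER SERVER shard2
        OPTIONS (user 'report', password 'hypothetical');

    CREATE FOREIGN TABLE orders_s1 (id bigint, total numeric, created_at timestamptz)
        SERVER shard1 OPTIONS (table_name 'orders');
    CREATE FOREIGN TABLE orders_s2 (id bigint, total numeric, created_at timestamptz)
        SERVER shard2 OPTIONS (table_name 'orders');

    -- The "reunited" view: fine for small scans, painful once you need
    -- cross-shard joins or sorts, or once one shard is down.
    CREATE VIEW orders_all AS
        SELECT 'shard1' AS shard, * FROM orders_s1
        UNION ALL
        SELECT 'shard2' AS shard, * FROM orders_s2;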
YMMV, but I'd be curious if you have a story/solution/thoughts along these lines. It's easier if you shard with unified analytics/reporting in mind on day one of a sharded system design, but in the worlds I've lived in, nobody ever does. But maybe you could.
levkk | 7 days ago
1. Replicate shards into one beefy database and use that. Replication is cheaper than individual statements, so this can work for a while. The sink can be Postgres or another database like ClickHouse. At Instacart, we used Snowflake with an in-house CDC pipeline. It worked well, but Snowflake was only usable for offline analytics, like BI / batch ML, and it was quite expensive. We'll add support for this eventually; we're getting pretty good at managing logical replication, including DDL changes. (Rough sketch after this list.)
2. Use the shards themselves and build a decent query engine on top. This is the Citus way, and we know it's possible. (Second sketch below.) Some queries could be expensive, but that's expected and can be solved with more compute.
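For illustration, a minimal sketch of option 1 using stock Postgres logical replication; the host names, tables, and subscription names are hypothetical, not our actual pipeline:

    -- On each shard (publisher): publish the tables the sink should receive.
    CREATE PUBLICATION analytics_pub FOR TABLE orders, customers;

    -- On the beefy sink: one subscription per shard. The tables must already
    -- exist here with matching schemas, and this assumes primary keys are
    -- globally unique across shards so the rows don't collide.
    CREATE SUBSCRIPTION shard1_sub
        CONNECTION 'host=shard1.internal dbname=app user=replicator'
        PUBLICATION analytics_pub;
    CREATE SUBSCRIPTION shard2_sub
        CONNECTION 'host=shard2.internal dbname=app user=replicator'
        PUBLICATION analytics_pub;

    -- Plain logical replication does not carry DDL, so schema changes on the
    -- shards have to be applied to the sink separately -- the hard part.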
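And a sketch of option 2 as Citus itself does it; the table and shard key are again hypothetical:

    -- On the coordinator: distribute a table by its shard key. Queries then
    -- fan out to the worker shards and are combined on the coordinator.
    CREATE EXTENSION citus;

    CREATE TABLE orders (
        id bigint,
        customer_id bigint,
        total numeric,
        created_at timestamptz
    );
    SELECT create_distributed_table('orders', 'customer_id');

    -- A cross-shard aggregate: per-shard partials are computed on the workers
    -- and merged centrally -- the "decent query engine on top" part.
    SELECT date_trunc('day', created_at) AS day, sum(total)
    FROM orders
    GROUP BY 1
    ORDER BY 1;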
In our architecture, a shard going down for maintenance is an incident-level event: we expect shards to be up at all times, and we fail over to a standby if there is an issue. These days, most maintenance tasks can be done online in-place, or with blue/green, which we'll support as well. Zero downtime is the name of the game.