Re: scale – we do handle billions of rows! As you can imagine, exact throughput depends on the source and destination databases (as well as the width of the rows), but to give you a rough sense: on most warehouses, we can sync on the order of 5M rows per minute for each customer you want to send data to. In practice, for a source with billions of rows, the initial backfill might take a few hours, and each incremental sync thereafter will be much faster. We can hook you up with a sandbox account if you want to run your own speed test!

Re: configuration – you would create a config file for each "source" table that you want to make available to customers, including which columns should be sent over. Then, at the destination level, you specify the subset of tables you'd like to sync. This could be a single common schema for all customers, or different schemas based on the products each customer uses.
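To make the two points above concrete, here's a rough sketch in Python. The config shapes, key names, and the backfill helper are all hypothetical, just illustrating the idea of a per-source-table config plus a per-destination table subset, not the product's actual schema:

```python
# Hypothetical illustration -- the key names and helper below are
# assumptions for the sake of example, not the real config format.

# Per-source-table config: which table to expose, and which columns to send.
source_config = {
    "table": "public.orders",
    "columns": ["order_id", "customer_id", "amount", "created_at"],
}

# Per-destination config: the subset of source tables this customer receives.
destination_config = {
    "customer": "acme_corp",
    "tables": ["public.orders", "public.invoices"],
}

def estimate_backfill_minutes(row_count: int, rows_per_minute: int = 5_000_000) -> float:
    """Rough initial-backfill estimate at the quoted ~5M rows/minute."""
    return row_count / rows_per_minute

# At that rate, a 2-billion-row source backfills in about 400 minutes (~6.7 hours).
print(estimate_backfill_minutes(2_000_000_000))  # 400.0
```

Incremental syncs after the backfill only move new or changed rows, which is why they run much faster than that first pass.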