(no title)
dangets | 9 months ago
Another tangentially related project is https://steampipe.io/, though it focuses on exposing APIs as Postgres tables; the clients are written in Go and shared through a marketplace.
maxgrinev | 9 months ago
* Built-in rate limiting controls at the source level (requests per second/minute/hour): each http_request operation refers to an http source defined separately.
* Automatic backoff and retry logic with delays.
* There is an option for per-endpoint rate limit configuration, since different API calls can have different limits.
* Because it is at the source level, it works properly even for parallel requests to the same source.
The key idea is that rate limits are handled by the engine - there is no need for the user to handle them explicitly.
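To make the idea concrete, here is a rough Python sketch of source-level rate limiting with backoff and retry (this is not the tool's actual implementation; HttpSource, http_request_with_retry, and their parameters are hypothetical names). The point it illustrates is that the limiter lives on the source object, so parallel requests against the same source share one throttle, and retries back off with increasing delays:

```python
import time
import threading

class HttpSource:
    """Hypothetical source object: the rate limit is configured once here,
    so every http_request against this source shares the same limiter."""
    def __init__(self, base_url, requests_per_second):
        self.base_url = base_url
        self.min_interval = 1.0 / requests_per_second
        self._lock = threading.Lock()  # makes the limiter safe for parallel requests
        self._next_slot = 0.0

    def acquire(self):
        """Block until this caller may send a request."""
        with self._lock:
            now = time.monotonic()
            wait = max(0.0, self._next_slot - now)
            # reserve the next send slot before releasing the lock
            self._next_slot = max(now, self._next_slot) + self.min_interval
        if wait > 0:
            time.sleep(wait)

def http_request_with_retry(source, send, max_retries=3, base_delay=0.1):
    """Engine-level retry with exponential backoff; `send` does the actual call."""
    for attempt in range(max_retries + 1):
        source.acquire()  # throttled per source, not per call site
        try:
            return send()
        except Exception:
            if attempt == max_retries:
                raise
            time.sleep(base_delay * (2 ** attempt))  # backoff with delay
```

Because acquire() reserves a send slot under a lock, two operations hitting the same source in parallel cannot exceed the configured rate between them, which is the property described above.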
Concurrency is controlled explicitly by the user:

* Inter-operation parallelism is activated by adding begin_parallel_block and end_parallel_block; all the operations between the two are executed at once in parallel.
* Intra-operation parallelism: many operations have parameters to partition input data and run in parallel on the partitions. For example, http_request takes an input table containing the data to be updated via the API, and you can partition that table by key columns into a specified number of partitions.
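A small Python sketch of the intra-operation case, under stated assumptions (partition_rows and parallel_http_update are hypothetical names, not the tool's API): the input rows are hash-partitioned by a key, and each partition is processed concurrently, roughly as described for http_request above:

```python
from concurrent.futures import ThreadPoolExecutor

def partition_rows(rows, key_fn, num_partitions):
    """Hash-partition input rows by key into a fixed number of partitions."""
    parts = [[] for _ in range(num_partitions)]
    for row in rows:
        parts[hash(key_fn(row)) % num_partitions].append(row)
    return parts

def parallel_http_update(rows, key_fn, update_fn, num_partitions=4):
    """Apply update_fn to every row, one worker per partition, in parallel."""
    parts = partition_rows(rows, key_fn, num_partitions)
    with ThreadPoolExecutor(max_workers=num_partitions) as pool:
        results = pool.map(lambda part: [update_fn(r) for r in part], parts)
    # flatten the per-partition result lists back into one list
    return [r for batch in results for r in batch]
```

Hash-partitioning by key also means all rows with the same key land in the same partition, which is handy when per-key ordering of API calls matters.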
Thanks for the Steampipe reference! That's a really interesting approach - exposing APIs as Postgres tables is clever, and I'm definitely going to play with it.