top | item 47131730

Ask HN: How do you handle API rate limits in production?

3 points | rjpruitt16 | 7 days ago

I'm building data pipelines that sync data from various third-party APIs. We constantly hit 429 rate-limit responses, and our janky retry system fails regularly. For those running production data syncs, or microservices calling external APIs heavily:

How do you handle rate limiting across multiple workers? Do you use circuit breakers, retry libraries, or something custom? How do you prevent retry storms when 100 workers all hit the same rate limit?

Curious what's working at scale.
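[Editor's note: the standard answer to the retry-storm part of this question is exponential backoff with full jitter, so that workers that fail at the same moment spread their retries out instead of retrying in lockstep. A minimal sketch (the `RuntimeError` here is a hypothetical stand-in for an HTTP 429 from your client library):]

```python
import random
import time

def backoff_delay(attempt, base=0.5, cap=30.0):
    """Full-jitter exponential backoff: pick a random delay in
    [0, min(cap, base * 2**attempt)], so a fleet of workers that
    all hit the same 429 retries at spread-out times."""
    return random.uniform(0, min(cap, base * (2 ** attempt)))

def call_with_retries(fn, max_attempts=5, base=0.5):
    """Retry fn() on RuntimeError (standing in for a 429 response),
    sleeping a jittered delay between attempts."""
    for attempt in range(max_attempts):
        try:
            return fn()
        except RuntimeError:
            if attempt == max_attempts - 1:
                raise  # out of attempts; surface the failure
            time.sleep(backoff_delay(attempt, base=base))
```

[This is per-worker only; it does not coordinate across workers, which is what the comments below address.]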

2 comments


toomuchtodo | 7 days ago

Use Redis as a shared data store to coordinate backoff across the fleet and to track collective throughput (and the delta between your functional baseline and the point where you exceed counterparty limits). Make workers aware of the allowance state, and responsive to it and to the limits.

Via this mechanism, you should be able to pause your worker fleet as it scales out, as well as regulate its request rate while monitoring the health of the steady-state interface between your workers and other systems.

rjpruitt16 | 6 days ago

Interesting. What kind of features did this enable? What was it like maintaining Redis? How many queues did you have?