Fun weekend project but definitely not production-ready (no tests, no error handling, concurrent requests will cause a race condition, etc.). If readers are looking for something production-ready to use, consider https://github.com/go-redis/redis_rate (which implements GCRA/leaky bucket), or https://github.com/ulule/limiter (which uses a much simpler algorithm, but has good middleware).
> It is capable of handling distributed workloads with its redis database support
Sounds like this is limited by redis. For many organizations, this is fine. At my last gig, we used redis for deduplication and it required over 150 redis nodes with deterministic routing and consensus. Redis reportedly could support 200k rps per node, but in our case, we wouldn't see it get past around 50k rps no matter how we tuned it.
An interesting addition to this library would be an interface for the backing datastore, allowing teams to use redis, zookeeper, an in-mem Go instance of the same library, sql, etc.
A fun exercise would be to figure out how to make the rate limiting itself distributed so you don't need a single store keeping everything in sync. Maybe a combo of deterministic request routing in a ring topology.
> An interesting addition to this library would be an interface for the backing datastore, allowing teams to use redis, zookeeper, an in-mem Go instance of the same library, sql, etc.
Thanks for the feedback. I'm gonna implement an in-mem Go instance for local dev, but I'm not sure that will be enough to use in prod. Also, in the next release, I will make redis optional.
RPS meaning reads or writes? What's the distribution of message sizes, and how large is your total dataset? What specs (core count, NIC) did each node have?
I'm asking because without this info, RPS is not a particularly useful metric. As an extreme example, if your dataset is <1MB, you could likely serve read-heavy requests from your SmartNIC's dcache at close to line rate.
Given most backends use round robin for load balancing, having an in-memory counter should be enough. Removing redis as a downstream dependency is a big win.
For the redis implementation, there should be a fallback to in-memory counting instead of blocking altogether. Currently redis is a SPOF for the entire service.
If you're round-robining clients w/o sticky assignment then you're going to get nodes*limit consumption. Not correct.
Also if you give limit/nodes per node and randomly assign a connection, you get correct answers on average, but a really janky pattern at the edge (a user gets a 429, retries and succeeds, then gets a 429 again as they consume those last few requests).
> Given most backends use round robin for load balancing, having an in-memory counter should be enough. Removing redis as a downstream dependency is a big win.
Thanks for the feedback. Planning to make redis optional in the next release.
1. some tests, over the wire preferably, would be nice
2. redis.go does not seem to be necessary; it just changes the signature of the redis client constructor without much difference, might as well inline its contents
3. using fmt too much; if you don't need runtime variable encoding, you can do something simpler, like writing to w.Write([]byte) directly. fmt uses reflect and runtime type detection, better avoided if not needed.
4. code comments do not follow godoc conventions; they should start with the symbol name. did you run gofmt and some basic linters?
I really like the idea of getting code reviews from Hacker News for personal projects. There's so much knowledge and passion on here, it could be really useful. It would also help for me, as a bystander, to understand the context of these recommendations. I've done a bit of Go, and some of these sound useful to know.
Good for you doing a thing! Please understand the community is likely very wary of single-maintainer, weekend-project, high-risk dependencies right now.
https://stanza.systems is a managed thing that offers this functionality (and a bit more) if y'all are looking for an escape valve away from redis as the coordination mechanism.
Thank you to everyone who provided me with suggestions to make it better. I got a lot of awesome feedback, and I will build/fix things. You guys are awesome.
One more point for the code review above: the mutex is not used, and it should be a pointer.
If folks are looking for a serious rate limiting package for Go, limiter is a good start: https://github.com/ulule/limiter