cle | 1 month ago
This is just not true; there are plenty of scenarios where 83/sec would be the limit. That number by itself is almost meaningless, much like benchmarks, which also bake in a bunch of assumptions about workloads and runtime environments.
As a simple example: if your queue has a large backlog, you have a large worker fleet aggressively pulling work to minimize latency, your payloads are large, you haven't optimized indexing, and/or you have many jobs scheduled for the future, then every acquire can become an expensive table scan.
(This is a specific example because it's one of many failure scenarios I've encountered with Graphile that can cause your DB to melt down. The same workload barely causes a blip in Redis CPU, without having to fiddle with indexes, autovacuum, and worker backoffs.)
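To make the acquire-as-table-scan point concrete, here's a minimal sketch (using Python's sqlite3 as a stand-in for Postgres; the table and index names are made up for illustration). Without an index matching the acquire predicate, the planner scans the whole table on every poll; a partial index over the runnable rows turns it into a seek:

```python
import sqlite3

# Hypothetical minimal job table for a SQL-backed queue.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE jobs (
        id INTEGER PRIMARY KEY,
        run_at REAL NOT NULL,   -- jobs scheduled for the future
        locked_at REAL,         -- NULL = available for a worker
        payload TEXT            -- large payloads bloat every scanned row
    )
""")

# Typical "acquire next runnable job" query shape.
acquire = """
    SELECT id FROM jobs
    WHERE locked_at IS NULL AND run_at <= ?
    ORDER BY run_at LIMIT 1
"""

# With no supporting index, the plan is a full table scan.
plan_before = conn.execute("EXPLAIN QUERY PLAN " + acquire, (0,)).fetchall()
print(plan_before)  # detail column reads something like "SCAN jobs"

# A partial index matching the acquire predicate changes the plan.
conn.execute("""
    CREATE INDEX jobs_runnable
    ON jobs (run_at) WHERE locked_at IS NULL
""")
plan_after = conn.execute("EXPLAIN QUERY PLAN " + acquire, (0,)).fetchall()
print(plan_after)  # now a SEARCH using index jobs_runnable
```

In real Postgres the same idea applies (a partial index like `WHERE locked_at IS NULL`), but it only helps if autovacuum keeps up: a huge backlog of dead or future-scheduled rows can still make each poll expensive, which is the meltdown mode described above.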