wh0knows | 1 year ago
> Efficiency: Taking a lock saves you from unnecessarily doing the same work twice (e.g. some expensive computation). If the lock fails and two nodes end up doing the same piece of work, the result is a minor increase in cost (you end up paying 5 cents more to AWS than you otherwise would have) or a minor inconvenience (e.g. a user ends up getting the same email notification twice).
I think multiple nodes doing the same work is actually much worse than what's listed: if duplicate work is common, adding nodes no longer reduces the total work per node, which inhibits you from having any kind of scalable distributed processing.
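To illustrate the "efficiency" case from the quote, here is a minimal sketch of a best-effort lock. The `FakeStore` class and its `set_nx_ex` method are hypothetical stand-ins for a Redis-like store's atomic "set if not exists, with expiry" operation; the point is that if the lock fails, the worst outcome is duplicate work, not corrupted data.

```python
import time

class FakeStore:
    """Hypothetical stand-in for a shared store (e.g. Redis SET NX EX)."""
    def __init__(self):
        self._data = {}

    def set_nx_ex(self, key, value, ttl_seconds):
        # Atomically set the key only if it is absent or expired.
        now = time.monotonic()
        entry = self._data.get(key)
        if entry is None or entry[1] <= now:
            self._data[key] = (value, now + ttl_seconds)
            return True
        return False

def maybe_do_expensive_work(store, job_id, worker_id, work):
    # Best effort: if two workers race past an expired lock, the worst
    # case is duplicate work (wasted cost), not incorrect results.
    if store.set_nx_ex(f"lock:{job_id}", worker_id, ttl_seconds=30):
        return work()
    return None  # someone else is (probably) doing it

store = FakeStore()
results = [maybe_do_expensive_work(store, "report-42", w, lambda: "done")
           for w in ("worker-a", "worker-b")]
print(results)  # only one worker ends up running the job
```

The scalability concern above is exactly what happens when this kind of lock fails often: every node pays for the expensive work, so extra nodes buy you nothing.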
jmull | 1 year ago
To be clear, my point is don't use distributed locking for correctness. There are much better options.
Now, the atomicity I mention implies some kind of internal synchronization mechanism for multiple requests, which could be based on locks, but those would be real, non-distributed ones.
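One common alternative for correctness is an atomic conditional write (optimistic concurrency) instead of a distributed lock. The sketch below is a hypothetical in-memory stand-in: `VersionedStore` uses a real, non-distributed `threading.Lock` internally, the way a database uses its own internal synchronization to make `UPDATE ... WHERE version = ?` atomic.

```python
import threading

class VersionedStore:
    """Hypothetical stand-in for a store with atomic compare-and-set."""
    def __init__(self):
        self._lock = threading.Lock()  # real, local lock inside the store
        self._rows = {}  # key -> (value, version)

    def read(self, key):
        with self._lock:
            return self._rows.get(key, (None, 0))

    def compare_and_set(self, key, new_value, expected_version):
        # Atomic: succeeds only if nobody has written since our read.
        with self._lock:
            _, version = self._rows.get(key, (None, 0))
            if version != expected_version:
                return False
            self._rows[key] = (new_value, version + 1)
            return True

store = VersionedStore()
value, v = store.read("balance")
assert store.compare_and_set("balance", 100, v)      # first writer wins
assert not store.compare_and_set("balance", 200, v)  # stale writer rejected
```

A client holding a stale view simply has its write rejected and retries; correctness never depends on a remote lock staying held.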
jmull | 1 year ago
Efficiency is one, as you say.
The other main one that comes to mind is implementing other "business rules" (hate that term, but that's what people use). For example, in an online shopping app, the stock needed to fulfill an order might be reserved for a limited time once the user starts the checkout process.
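That checkout scenario can be sketched as a time-limited reservation rather than a lock. The `Inventory` class and its hold duration below are hypothetical; the idea is that expired reservations release their stock back automatically.

```python
import time

class Inventory:
    """Hypothetical sketch: stock reserved for a limited time at checkout."""
    def __init__(self, stock):
        self.stock = stock
        self.reservations = {}  # order_id -> (qty, expires_at)

    def _release_expired(self):
        now = time.monotonic()
        for order_id, (qty, expires) in list(self.reservations.items()):
            if expires <= now:
                self.stock += qty  # abandoned checkout returns the stock
                del self.reservations[order_id]

    def reserve(self, order_id, qty, hold_seconds=600):
        self._release_expired()
        if qty > self.stock:
            return False
        self.stock -= qty
        self.reservations[order_id] = (qty, time.monotonic() + hold_seconds)
        return True

    def confirm(self, order_id):
        # Payment completed: the reservation becomes a sale.
        self._release_expired()
        return self.reservations.pop(order_id, None) is not None

inv = Inventory(stock=1)
assert inv.reserve("order-1", 1)      # shopper starts checkout, stock held
assert not inv.reserve("order-2", 1)  # second shopper can't oversell
assert inv.confirm("order-1")
```

The reservation plays the role a lock would, but it is ordinary application state with a timeout, not a distributed lock that correctness hinges on.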