
jpalomaki | 2 months ago

Learned once the hard way that it makes sense to use "flock" to prevent overlapping executions of frequently running jobs. The server started to slow down, my monitoring jobs started piling up, and that caused the server to slow down even more.

  */5 * * * * flock -n /var/lock/myjob.lock /usr/local/bin/myjob.sh
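A minimal sketch of what that crontab line buys you: with `-n` (non-blocking), a second invocation that finds the lock already held exits immediately instead of queueing behind the first. The lock path and timings below are arbitrary placeholders, not from the original comment.

```shell
#!/bin/sh
LOCK=/tmp/myjob.demo.lock

# First "cron run": hold the lock in the background for a few seconds,
# standing in for a long-running /usr/local/bin/myjob.sh.
flock -n "$LOCK" sleep 3 &

sleep 1  # give the background job time to acquire the lock

# Second "cron run": -n makes flock fail fast (exit status 1)
# instead of blocking until the first run finishes.
if flock -n "$LOCK" true; then
    echo "acquired"
else
    echo "skipped: previous run still holds the lock"
fi

wait  # let the background holder finish
```

Without `-n`, overlapping runs would instead stack up waiting on the lock, which is exactly the pile-up the parent comment describes.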


cluckindan | 2 months ago

Have you tested how this behaves on eventually consistent cloud storage?

atherton94027 | 2 months ago

I'm confused: is EBS eventually consistent? I assume it's strongly consistent, since otherwise a lot of other Linux things would break.

If you're thinking about NFS, why would you want to distribute your locks across other machines?

garganzol | 2 months ago

If a file system implements the lock/unlock operations precisely to spec, it should be fully consistent for the file or directory being locked. It doesn't matter whether the file system is local or remote.

In other words, it's not the author's problem; it's the problem of a particular storage system that decides to throw the spec out the window. And even for an eventually consistent file system, the vendor is better off ensuring that the locking semantics are fully consistent per the spec.