top | item 35893582

siebenmann | 2 years ago

We used to use this sort of locking in (frequently running) system cron jobs and the like. Then these jobs started getting killed off by Linux OOM on some systems, and we ran into the downsides of locks that don't automatically clear if something goes wrong, so we switched to flock(1)-based locks (fortunately on the local system, so we're not affected by NFS issues there).

(I'm the author of the linked-to entry.)
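The key property of flock(1) locks is that the kernel releases the lock whenever the holding process exits, for any reason (including being killed), so a dead job never leaves a stale lock behind. A minimal sketch of that behavior; the lock path here is hypothetical, not from the original setup:

```shell
#!/bin/sh
# Hypothetical lock file; in a real cron wrapper this would live
# somewhere like /var/lock.
LOCK=/tmp/myjob.lock

# Simulate a running job: hold an exclusive lock on $LOCK for the
# duration of 'sleep 2'. The kernel drops the lock when this
# process exits, even if it is killed.
flock "$LOCK" sleep 2 &

sleep 0.5
# A second, non-blocking (-n) attempt fails immediately instead of
# queueing behind the running instance.
if flock -n "$LOCK" true; then
    echo "lock acquired"
else
    echo "lock busy, another instance is running"
fi
wait
```

In a cron job this usually collapses to a one-liner such as `flock -n /path/to/lock real-job`, so overlapping runs simply skip instead of piling up.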


chasil | 2 years ago

Switch the shell from bash to dash (the default /bin/sh on Ubuntu) if the shell is part of the memory problem.

Dash is nearly 10x smaller than bash, and rumored to be 4x faster.

Otherwise, have you compiled any object code with -Os?

siebenmann | 2 years ago

Oops, my fallible memory bit me. We weren't specifically running into OOM, but into strict overcommit (which we had enabled on some machines). OOM will only kill big things, so it would be weird for bash (as /bin/sh) or small Python programs to get killed off. But strict overcommit favors already-running processes (which have already claimed their memory) over newly started cron jobs.
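For reference, the overcommit policy in play here is a Linux sysctl, and its effect can be inspected from /proc; a sketch (the interpretation comments summarize the kernel's documented modes, not anything specific to the systems above):

```shell
# 0 = heuristic overcommit (the default), 1 = always allow,
# 2 = strict accounting: total commitments are capped at roughly
#     swap + overcommit_ratio% of RAM, so new allocations (and thus
#     freshly started cron jobs) can fail up front.
cat /proc/sys/vm/overcommit_memory

# The current commit limit and how much address space is already
# committed; under mode 2, Committed_AS approaching CommitLimit is
# what starves newly started processes.
grep -E 'CommitLimit|Committed_AS' /proc/meminfo
```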

(You could criticize the shell for needing to do dynamic allocation in failure paths and so being exposed to malloc() failing, but this is a hard area and lots of code assumes it can malloc() on demand, some of it in the C library.)