item 30297349

etbe | 4 years ago

zram is a good thing. However in my investigations in 2014 "my conclusion was that swap wouldn’t be a problem as no normally operating systems that I run had swap using any significant fraction of total disk writes". During the last 8 years the amount of RAM in all my systems has increased significantly so swap is even less of an issue.
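For anyone wanting to repeat that kind of measurement, the kernel exposes cumulative swap I/O counters; a quick sketch (counter names are from the mainline Linux /proc interface, units are pages):

```shell
# Cumulative pages swapped in/out since boot (multiply by page size for bytes)
grep -E '^pswp(in|out) ' /proc/vmstat

# Live view: the si/so columns report swap-in/swap-out per interval (KiB/s)
vmstat 5 3
```

Comparing pswpout against total write traffic from a tool like iostat gives swap's share of disk writes.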

tpetry | 4 years ago

Sure, but setting up swap is still recommended. I guess some internal Linux memory algorithms prefer to have a safety net? Setting up a 1 GB zram swap was effective for me; it's not much wasted memory, as servers have so much memory these days, and because of compression it can fit more than 1 GB.
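A minimal setup along these lines might look like the following (a sketch, assuming the zram kernel module and util-linux's zramctl are available, and that the kernel supports zstd; needs root):

```shell
# Load the zram module (creates /dev/zram0 by default) and size it at 1 GB
modprobe zram
zramctl /dev/zram0 --algorithm zstd --size 1G

# Format and enable it as swap, at higher priority than any disk-backed swap
mkswap /dev/zram0
swapon --priority 100 /dev/zram0
```

With compression ratios of 2-3x typical for zram, the 1 GB device often holds well over 1 GB of swapped-out pages.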

wahern | 4 years ago

What I've seen is that under memory pressure kernel tasks trying to evict or free pages to satisfy an allocation request can race with other tasks dirtying and filling pages even faster, especially via the buffer cache. This can induce patterns of lock contention on low-level VM data structures and flushing procedures that effectively behave like a deadlock. Eventually various loop limits and lock timeouts will help unstick things, but in the worst cases the system gets caught in higher order loops and I've seen systems lock up for minutes. The systems I've seen this on never had any swap.

Some of the heuristics designed to minimize pathological contention latency seem to implicitly assume that the swapping subsystem--both in its ability to help free space, and the latency it introduces when loading and evicting pages--will help mitigate the chance tasks will get caught in a tight contention loop. IOW, the I/O latency of swap effectively induces back pressure on load, helping operations freeing pages to progress faster than operations consuming pages. (Predictably, the faster your swap, the less well this works. When people began putting swap on SSDs, heuristics had to be retuned.)

Arguably the root of the problem is the legacy of overcommit. Even though it can be nominally disabled, many aspects of the kernel were designed with the notion that the only direction to move under memory pressure is forward, relying on the promise of the OOM killer eventually freeing up enough memory to maintain forward progress on the current operation, rather than unwinding and returning a failure condition. The dynamic seems similar to buffer bloat.
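For context, the "nominal" disabling mentioned here is the strict-accounting sysctl mode (a sketch; the ratio value is illustrative, and as the comment notes, much of the kernel still assumes forward progress regardless):

```shell
# Mode 2 = strict accounting: allocations beyond the commit limit fail
# with ENOMEM up front instead of being overcommitted and OOM-killed later
sysctl vm.overcommit_memory=2

# Commit limit = swap + overcommit_ratio% of RAM
sysctl vm.overcommit_ratio=80
```

Few distributions ship with mode 2 because many common programs assume overcommit and break under strict accounting.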

_0w8t | 4 years ago

A fast swap like zram or a modern SSD is very beneficial on a development machine, as it allows keeping the active file cache around much longer, so grep and git on huge trees work instantly even if one has a lot of memory hogs like VMs or language servers around.