item 46897506


FooBarWidget | 25 days ago

One pet peeve I have with virtual memory management on Linux is that, as memory usage approaches 100%, the kernel starts evicting executable pages because, technically, they're read-only and can be reloaded from disk. The entire system then grinds to a halt in a behavior that looks like swapping: every program that wants to execute instructions has to load them from disk again, only to have those instruction pages evicted once more when the kernel context-switches to another program. This behavior is especially counterintuitive because disabling swap does not prevent it, and there is no convenient administrator setting to prevent it either.

It's good that we have better swapping now, but I wish they'd address the above. I'd rather have programs get OOM-killed or throw errors before the system grinds to a halt to the point where I can't even ssh in and run 'ps'.


Rygian|25 days ago

I suffer from the same behavior, ever since I moved from Ubuntu to Debian.

An interactive system that does not interact (terminal unresponsive, can't ssh in, screen does not refresh) is broken. I don't understand why this is not treated as a kernel bug.

On my system, to add insult to injury, when the system does come back twenty minutes later, I get a "helpful" pop-up from the Linux Kernel saying "Memory Shortage Avoided". Which is just plain wrong. The pop-up should say "sorry, the kernel bricked your system for a solid twenty minutes for no good reason, please file a report".

man8alexd|25 days ago

Actively used executable pages are explicitly excluded from reclaim. And if they are not used, why should they stay in memory when the memory is constrained? It is not the first time I have heard complaints about executable pages, but it seems to be some kind of common misunderstanding.

https://news.ycombinator.com/item?id=45369516

FooBarWidget|25 days ago

What is "actively used"? The bash session that I was using 2 seconds before the system ground to a halt sure didn't count.

robinsonb5|25 days ago

Indeed. I think what's really needed is some way to mark pages as "required for interactivity" so that nothing related to the user interface gets paged out, ever. That, I think, would go at least some way towards restoring the feeling of "having a computer's full attention" that we had thirty years ago.

FooBarWidget|25 days ago

There is: mlock() or mlockall(), but it requires developer support. I wish there were an administrator knob that let me mark whole processes without needing to modify them.

akdev1l|25 days ago

Seems applications can call mlockall() to do this.

direwolf20|25 days ago

An Electron app would mark its entire 2 GB as required for interactivity. Run 4 Electron apps on an 8 GB system and you run out of memory.

nolist_policy|25 days ago

Linux swap has been fixed on Chromebooks for years thanks to MGLRU. It's been upstream since Linux 6.1, and you can try it with

  echo y >/sys/kernel/mm/lru_gen/enabled

M95D|25 days ago

I've had nothing but problems since that was introduced in 6.1. It seems that the kernel prefers to compact/defrag memory, repeatedly, freezing everything for 1-2 seconds each time, rather than releasing some disk cache or swapping out.

112233|25 days ago

Is there a way to make the Linux kernel schedule in a "batch friendly" way? Say I do "make -j" and get 200 gcc processes doing a jobserver LTO link with 2GB RSS each. In my head, the optimal way through such a mess is to get as many processes as can fit into RAM without swapping, run them to completion, and schedule additional processes as resources become available. A depth-first, "infinite latency" mode.

Any combination of cgroups, /proc flags and other forbidden knobs to get such behaviour?

Neywiny|25 days ago

"make -j" has OOM'd me more times than it's worth. If it's a big project I just put in how many threads I want. I hear your point, but that is a solved problem.

direwolf20|25 days ago

It's not possible for the kernel to predict the memory needs of a process, unfortunately.

worldsavior|25 days ago

Program instructions take up little space and thus load fast, so there's no need to worry about that too much. I'd look at different things first.

twic|25 days ago

Have you measured this, or is this just an opinion?