Zero. My office workstation has 48 GB of RAM, my home computer has 64 (I went a bit overboard). I have very bad memories of swap thrashing and the computer becoming totally unresponsive until I forced a reset; if I manage to fill up so much RAM, I very much prefer the offending process to die instead of killing the whole computer.
It's funny how people think they're disabling swapping just because they don't have a swap file. Where do you think mmap()-ed file pages go? Your machine can still reclaim resident file-backed pages (either by discarding them if they're clean, or writing them to their backing file if dirty) and reload them later. That's... swap.
Instead of achieving responsiveness by disabling swap entirely (which is silly, because everyone has some very cold pages that don't deserve to be stuck in memory), people should mlockall essential processes, adjust the kernel's swappiness (vm.swappiness), and so on.
Also, I wish we'd just do away with the separation between the anonymous-memory and file-backed memory subsystems entirely. The only special thing about MAP_ANONYMOUS should be that its backing file is the swap file.
I'm not an expert, but aren't you just reducing the choice of what pages can be offloaded from RAM? Without swap space, only file-backed pages can be written out to reclaim RAM for other uses (including caching). With swap space, rarely used anonymous memory can be written out as well.
Swap space is not just for overcommitting memory (in fact, I suspect nowadays it rarely ever is), but also for improving performance by maximizing efficient usage of RAM.
With 48GB, you're probably fine, but run a few VMs or large programs, and you're backing your kernel into a corner in terms of making RAM available for efficient caching.
I have 64GB of RAM and 16GB of swap. Swap is small enough it can't get really out of hand.
I have memories from like 20 years ago that even when I had plenty of RAM, and plenty of it was free, I would get random OOM killer events relatively regularly. Adding just a tiny bit of swap made that stop happening.
I'm like 90% sure at this point it's just a stupid superstition I carry. But I'm not gonna stop doing it even though it is stupid.
Luckily we're not in the spinning HDDs thrashing a working set in and out of 128 MB of primary memory days anymore. We have laptops that ship with SSDs that read/write at 6 GB/s.
I was experimenting with some graphics algorithm and had a memory leak where it would leak the uncompressed 12 MP image with every iteration. I was browsing the web when waiting for it to process when I wondered why it was taking so long. That's when I noticed it was using 80+ GB of swap just holding onto all those dead frames. It finished and meanwhile it had no noticeable performance impact on whatever else I was doing.
I ran with a setup like this for a bit, but I experienced far worse thrashing (and far more sudden onset) than I did with swap enabled. You need to take some extra steps to get a quick and graceful failure on RAM exhaustion.
I did similar with my 32GB laptop, but it was fairly flaky for ~4 years and I just recently put 48GB of swap on and it's been so much better. It's using over 20GB of the swap. There are cases in Linux where running without swap results in situations very similar to swapping too much.
On systems with 32/64/128 GB of ram, I'll typically have a 1GB or 2GB swap. Just so that the system can page out here and there to run optimally. Depending on the system, swap is typically either empty or just has a couple hundred MB kicking around.
On what OS are you using these settings? I found that Windows will refuse to allocate more virtual memory when the commit charge hits the max RAM size, even if there is plenty of physical memory left to use.
I have 64 GiB of RAM and programs would start to crash at only 25 GiB of physical memory usage in some workloads because of high commit charge. I had to re-enable a 64 GiB swap file again just to be able to actually use my RAM.
My understanding is that Linux will not fail the allocation, and will instead OOM-kill a process when too much of that virtual memory is actually touched. Not sure how Mac handles it.
Windows: I set min size to whatever is necessary to make RAM+swap add up to ~2 GBytes per CPU thread, to avoid problems with parallel Visual Studio builds. (See, e.g., https://devblogs.microsoft.com/cppblog/precompiled-header-pc...) Performance is typically fine with ~0.75+ GBytes RAM per job, but if the swapfile isn't preconfigured then Windows can seemingly sometimes end up refusing to grow it fast enough. Safest to configure it first.
macOS: never found a reason not to just let it do whatever it does. There's a hard limit of ~100 GBytes swap anyway, for some reason, so, either you'll never run out, or macOS is not for you
Linux: I've always gone for 1x physical RAM, though with modern RAM sizes I don't really know why any more
My work laptop currently has 96GB of RAM. 32 of it is allocated to the graphics portion of the APU. I have 128GB (2x) of swap allocated, since I sometimes do big FPGA synthesis runs, which take up 50GB of RAM on their own. Add another two IDEs and a browser, and my 64GB of remaining RAM is full.
Fwiw you’ll see technical reasons for swap being a bad idea on servers. These are valid. Virtualised servers don’t really have great ways to make swap work.
On a personal setup though there’s no reason not to have swap space. Your main ram gets to cache more files if you let the os have some space to place allocated but never actually used objects.
As in ‘I don’t use swap because i don’t use all my ram’ isn’t valid since free ram caches files on all major OS’s. You pretty much always end up using all your ram. Having swap is purely a win, it lets you cache even more.
But then you're putting data that used to be on RAM on storage, in order to keep copies of stored data on RAM. Without any advance knowledge of access patterns, it doesn't seem like it buys you anything.
The contents of swap could be read after a power cut.
Edit: oh, and I don't have a personal system with a swap configuration on it anymore to give my own answer either.
people are too negative these days :|