How are the memory overheads of ZFS these days? In the old days, I remember balking at the extra memory required to run ZFS on the little ARM board I was using for a NAS.
That was always FUD, more or less. ZFS uses RAM as its primary cache… like every other filesystem, so if you have very little RAM for caching, performance will degrade… like every other filesystem.
But if you have a single board computer with 1 GB of RAM and several TB of ZFS, will it just be slow, or actually not run? Granted, my use case was abnormal, and I was evaluating in the early days when there were both license and quality concerns with ZFS on Linux. However, my understanding at the time was that it wouldn't actually work to have several TB in a ZFS pool with 1 GB of RAM.
KMag|1 year ago
My understanding is that ZFS has its own cache apart from the page cache, and the minimum cache size scales with the storage size. Did I misunderstand, or is my information outdated?
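For what it's worth, on OpenZFS on Linux the ARC cap is just a module parameter you can lower for small machines. A sketch, assuming the Linux sysfs/modprobe interface (the 256 MiB value is illustrative, not a recommendation):

```shell
# Current ARC cap in bytes (0 means OpenZFS picks a default based on RAM)
cat /sys/module/zfs/parameters/zfs_arc_max

# Lower it at runtime to 256 MiB, e.g. for a 1 GB board
echo 268435456 > /sys/module/zfs/parameters/zfs_arc_max

# Make it persistent across reboots
echo "options zfs zfs_arc_max=268435456" > /etc/modprobe.d/zfs.conf
```

Requires root, and obviously a smaller ARC just means less caching, not a broken pool.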
magicalhippo|1 year ago
To give some context: ZFS supports de-duplication, and until fairly recently the de-duplication data structures had to be resident in memory.
So if you used de-duplication earlier, then yes, you absolutely did need a certain amount of memory per byte stored.
However, there is absolutely no requirement to use de-duplication, and without it the memory requirements are just a small, fairly fixed amount.
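To put rough numbers on the old dedup requirement: the in-core dedup table needs one entry per unique block, commonly estimated at around 320 bytes each (an approximation often cited for classic DDT entries, not an exact OpenZFS constant). A back-of-envelope sketch:

```shell
# Rough RAM for the dedup table: one entry per unique block,
# ~320 bytes each (commonly cited approximation).
ddt_ram_bytes() {   # usage: ddt_ram_bytes <pool_bytes> <avg_block_bytes>
    echo $(( ($1 / $2) * 320 ))
}

# 1 TiB of unique data at 128 KiB records:
ddt_ram_bytes $((1024**4)) $((128*1024))
# -> 2684354560 bytes, i.e. about 2.5 GiB of RAM per TiB
```

With smaller average block sizes the figure balloons fast, which is where the old "GBs of RAM per TB" rule of thumb came from. Without dedup, none of this applies.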
It'll store writes in memory until it commits them in a so-called transaction group, so you need to have room for that. But the limits on a transaction group are configurable, so you can lower the defaults.
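The relevant knobs on OpenZFS/Linux are module parameters bounding how much dirty (uncommitted) write data accumulates and how often transaction groups are synced. Illustrative values, not recommendations:

```shell
# Cap dirty write data held in memory at 128 MiB (default scales with RAM)
echo 134217728 > /sys/module/zfs/parameters/zfs_dirty_data_max

# Commit a transaction group at least every 5 seconds
echo 5 > /sys/module/zfs/parameters/zfs_txg_timeout
```

Lowering these trades some write throughput for a smaller memory footprint.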
BSDobelix|1 year ago
Thank you, thank you, exactly this! And additionally, that cache is compressed. In the days of 4 GB machines ZFS was overkill, but today... no problem.
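You can see the compressed ARC in action on OpenZFS/Linux via the kstat file, which reports both the compressed and logical sizes of cached data (field names as exposed by current OpenZFS; paths assume the Linux port):

```shell
# Compare compressed vs. uncompressed ARC data size, in bytes
awk '/^(compressed_size|uncompressed_size)/ {print $1, $3}' \
    /proc/spl/kstat/zfs/arcstats
```

The gap between the two numbers is RAM the compression is winning back for you.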