I simply use SQLite for this. You can store the cache blocks in the SQLite database as blobs. One file, no sparse files. I don't think the "sparse file with separate metadata" approach is necessary here, and sparse files have hidden performance costs that grow with the number of populated extents. A sparse file is not all that different from a directory full of files. It might look like you're avoiding a filesystem lookup, but you're not; you've just moved it into a sparse-extent lookup that you'll pay for on every seek/read/write, not just once on open. You can simply use a regular file and let SQLite manage it entirely at the application level; this is no worse in performance and better for ops in a bunch of ways. Sparse files have a habit of becoming dense when they leave the filesystem they were created on.
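A minimal sketch of what I mean (table and column names are made up, and I'm assuming fixed 4 KiB blocks keyed by block index):

```python
import sqlite3

# Illustrative sketch: store fixed-size cache blocks as blobs keyed by
# block number. Only populated blocks occupy rows; absent rows play the
# role of the holes in a sparse file.
BLOCK_SIZE = 4096

conn = sqlite3.connect(":memory:")  # use a file path in practice
conn.execute(
    "CREATE TABLE IF NOT EXISTS cache_blocks ("
    "  block_no INTEGER PRIMARY KEY,"  # rowid alias: fast point lookups
    "  data     BLOB NOT NULL"
    ")"
)

def write_block(block_no: int, data: bytes) -> None:
    conn.execute(
        "INSERT OR REPLACE INTO cache_blocks (block_no, data) VALUES (?, ?)",
        (block_no, data),
    )

def read_block(block_no: int) -> bytes:
    row = conn.execute(
        "SELECT data FROM cache_blocks WHERE block_no = ?", (block_no,)
    ).fetchone()
    # A missing row reads back as zeros, like a hole in a sparse file.
    return row[0] if row else b"\x00" * BLOCK_SIZE

write_block(7, b"x" * BLOCK_SIZE)
```

The integer primary key means SQLite does the offset-to-location mapping in its B-tree, in user space, instead of in the filesystem's extent tree.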
I don't think the author could even use SQLite for this. NULL in SQLite is stored very compactly, not as pre-filled zeros. They must be talking about a columnar store.
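This is easy to check: NULL costs essentially nothing on disk (a zero-byte serial type in the record header), while an explicit zero blob is stored at full length. A quick demo (row count and blob size are arbitrary):

```python
import os
import sqlite3
import tempfile

def db_size(path: str, value_expr: str, rows: int = 100) -> int:
    # Create a one-column table and insert `rows` copies of `value_expr`
    # (a trusted SQL literal here, not user input), then return file size.
    con = sqlite3.connect(path)
    con.execute("CREATE TABLE t (v BLOB)")
    for _ in range(rows):
        con.execute(f"INSERT INTO t VALUES ({value_expr})")
    con.commit()
    con.close()
    return os.path.getsize(path)

tmp = tempfile.mkdtemp()
null_size = db_size(os.path.join(tmp, "nulls.db"), "NULL")
zero_size = db_size(os.path.join(tmp, "zeros.db"), "zeroblob(4096)")
# 100 NULL rows fit in a page or two; 100 zero-filled 4 KiB blobs do not.
```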
I wonder if attaching a temporary db on fast storage, filled with results of the dense queries, would work without the big assumptions.
I’ve used this technique in the past, and the problem is that the way some file systems perform the file‑offset‑to‑disk‑location mapping is not scalable. It might always be fine with 512 MB files, but I worked with large files and millions of extents, and it ran into issues, including out‑of‑memory errors on Linux with XFS.
The XFS issue has since been fixed (though you often have no control over which Linux version your program runs on), but in general I’d say it’s better to do such mapping in user space. In this case, there is a RocksDB present anyway, so this would come at no performance cost.
I'm guessing the choice here is whether you want the kernel to handle this, and whether that's more performant than just managing a bunch of regular empty files with a home-grown file allocation table.
Or even just a bunch of little files representing segments of larger files.
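A minimal sketch of that segment-file idea (segment size and file naming are made up; the directory itself acts as the allocation table, and a missing segment file reads back as zeros):

```python
import os
import tempfile

SEGMENT_SIZE = 1 << 20  # 1 MiB segments, chosen arbitrarily

class SegmentedFile:
    """Emulate one large sparse file with one small file per populated segment."""

    def __init__(self, directory: str):
        self.dir = directory
        os.makedirs(directory, exist_ok=True)

    def _path(self, seg_no: int) -> str:
        return os.path.join(self.dir, f"{seg_no:08x}.seg")

    def write(self, offset: int, data: bytes) -> None:
        while data:
            seg_no, seg_off = divmod(offset, SEGMENT_SIZE)
            chunk = data[: SEGMENT_SIZE - seg_off]
            path = self._path(seg_no)
            # Open for update, creating a full-size zeroed segment if absent.
            mode = "r+b" if os.path.exists(path) else "w+b"
            with open(path, mode) as f:
                if mode == "w+b":
                    f.truncate(SEGMENT_SIZE)
                f.seek(seg_off)
                f.write(chunk)
            data, offset = data[len(chunk):], offset + len(chunk)

    def read(self, offset: int, length: int) -> bytes:
        out = bytearray()
        while length:
            seg_no, seg_off = divmod(offset, SEGMENT_SIZE)
            n = min(length, SEGMENT_SIZE - seg_off)
            path = self._path(seg_no)
            if os.path.exists(path):
                with open(path, "rb") as f:
                    f.seek(seg_off)
                    out += f.read(n)
            else:
                out += b"\x00" * n  # unpopulated segment: read as zeros
            offset, length = offset + n, length - n
        return bytes(out)

sf = SegmentedFile(tempfile.mkdtemp())
sf.write(SEGMENT_SIZE - 10, b"a" * 20)  # straddles a segment boundary
```

This keeps the filesystem's per-file extent count small, at the cost of an extra open per segment touched.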
Microsoft MS-DOS and Windows supported this in the 90s with DriveSpace, and modern file systems like btrfs and zfs also support transparent compression.