
Freqfs: In-memory filesystem cache for Rust

142 points | haydnv | 4 years ago | docs.rs

76 comments


munro|4 years ago

Musing: sometimes I wish filesystems & databases were unified. I'm imagining just a single fast DB engine sitting on my storage, and your traditional filesystem structure would just be tables in there. I kinda just treat SQLite like that, but it's not as transparently optimized as it could be for large files. Why? I don't want to mentally jump around technologies. I want to query my FS like a DB, and I want to store files in my DB like a FS. The reality, though, is that there isn't a one-size-fits-all DB.

And more on topic: tokio-uring is really fast [1], and I'm really loving tokio in general.

[1] https://gist.github.com/munro/14219f9a671484a8fe820eb35d26bb...

weinzierl|4 years ago

We've been there. Before we had filesystems as we know them today, there were many different ways of persistent data storage. Roughly these could be grouped into two camps: The files camp and the records camp.

The record based approach had many properties we know from modern databases. It was a first class citizen on the mainframe and IBM was its champion.

In my opinion hierarchical filesystems won as everyday data storage because of their simplicity and not despite it. I think the idea of a file being just a series of bytes and leaving the interpretation to the application is ingenious. That doesn't mean there is no room for standardized OS-level database-like storage. In fact I'd love to see that.

arghwhat|4 years ago

Your filesystem is a database. It's just a document-oriented database, rather than relational SQL.

pure_simplicity|4 years ago

I have the exact same wish. On top of that, I'd wish for application data to be stored in the system database by default, neatly namespaced and permissioned, so that you can allow for greater interoperability if desired and manually query and combine data across different applications.

There was some research being done on the concept of a db as a filesystem: https://youtu.be/wN6IwNriwHc

sanketsarang|4 years ago

We actually did work on this a few years ago but did not get enough takers for it. We created a one-size-fits-all database that leverages the full capability of the filesystem.

Try it here: https://github.com/blobcity/db

PS: I am the chief architect of the DB, and the project is no longer being actively maintained by us. But if you make a contribution, we will happily review and merge the PR.

Bottom line, nothing you do can make your database faster than the filesystem. So why not make a database that just uses the filesystem to the fullest, rather than building a filesystem on top of a filesystem? BlobCity DB does not create a secondary filesystem. It dumps all data directly to the filesystem, thereby giving peak filesystem performance. From a performance standpoint this is really the best it gets, though not necessarily the most efficient from a data-storage/compression standpoint.

This means we gain speed while compromising on data compression. We produce a larger storage footprint, but are insanely fast. Storage is cheap, compute isn't. So that should be okay, I suppose.

GekkePrutser|4 years ago

Wasn't this what Microsoft was working on with WinFS in Longhorn, which later became Vista but without the WinFS part?

And I think ReiserFS was also working towards this but got abandoned for obvious reasons.

haydnv|4 years ago

Yeah, I just learned about tokio-uring and I'm planning to get it into the next major release of freqfs.

jerrysievert|4 years ago

until an underlying change in technology happens, and then you wish they were no longer unified (spinning rust to SSD to NVMe, for example).

I would prefer more pluggable interfaces personally.

(hi Ryan, long time no see!)

amelius|4 years ago

> freqfs automatically caches the most frequently-used files and backs up the others to disk. This allows the developer to create and update large collections of data purely in-memory without explicitly sync’ing to disk, while still retaining the flexibility to run on a host with extremely limited memory.

Why not let the OS take care of this?

haydnv|4 years ago

One advantage is consistency across host platforms, but the main advantage is that the file data can be accessed (and mutated) in memory in a deserialized format. If you let the OS take care of it, you would still have the overhead of serializing & deserializing a file every time it's accessed.
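To illustrate that overhead: even when the OS page cache makes the read itself cheap, every access that goes through the filesystem still pays to parse and re-encode the data. A stdlib-only Rust sketch (the counter file and the `bump_via_fs` helper are made up for this example, not part of freqfs):

```rust
use std::fs;

// Hypothetical example: a counter stored as a decimal string in a file.
// Even with the OS page cache keeping the bytes hot, every update still
// pays the parse + format + syscall cost.
fn bump_via_fs(path: &str) {
    let n: u64 = fs::read_to_string(path)
        .ok()
        .and_then(|s| s.trim().parse().ok())
        .unwrap_or(0);
    fs::write(path, (n + 1).to_string()).unwrap();
}

fn main() {
    let file = std::env::temp_dir().join("bump_demo.txt");
    let path = file.to_str().unwrap();
    let _ = fs::remove_file(path);
    for _ in 0..3 {
        bump_via_fs(path); // deserialize + serialize on every access
    }
    // ...whereas a deserialized value held in memory is just `n += 1`.
    assert_eq!(fs::read_to_string(path).unwrap(), "3");
    let _ = fs::remove_file(path);
}
```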

BiteCode_dev|4 years ago

You can pick and choose.

Maybe the caching strategy of your OS isn't the best for your use case. Also, you may use a network filesystem, or several types of FS, and want your cache warm-up to be tunable and consistent.

shakna|4 years ago

Presumably for a similar usecase as SQLite [0]. Performance. You can beat the OS, and by a noticeable margin, by doing things in memory and avoiding the I/O bottleneck.

[0] https://www.sqlite.org/fasterthanfs.html

jdeaton|4 years ago

My thought exactly.

haydnv|4 years ago

I realized based on several of the comments here that I should have included a comparison with OS filesystem caching in the documentation for freqfs. I will update this in the next release.

The major advantage of freqfs over just letting the OS handle file caching is that with freqfs you can read and mutate the data that your file represents purely in memory. For example, if you implement a BTree node as a struct, you can just borrow the struct mutably and update it, and it will only be synchronized with the filesystem in the event that it's evicted (or you explicitly call `sync`). This avoids a lot of (de)serialization overhead and defensive coding against an out-of-memory error.
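The pattern can be sketched in std-only Rust. To be clear, this is not freqfs's actual API: `Node`, the hit-count eviction policy, and the `HashMap` standing in for the filesystem are all invented for the illustration. The point is just that reads and writes on a cache hit touch the deserialized value directly, and serialization happens only at eviction time:

```rust
use std::collections::HashMap;

// Hypothetical deserialized file contents (e.g. a B-tree node).
#[derive(Clone, Debug, PartialEq)]
struct Node {
    keys: Vec<u64>,
}

fn serialize(node: &Node) -> Vec<u8> {
    node.keys.iter().flat_map(|k| k.to_le_bytes()).collect()
}

fn deserialize(bytes: &[u8]) -> Node {
    let keys = bytes
        .chunks_exact(8)
        .map(|c| u64::from_le_bytes(c.try_into().unwrap()))
        .collect();
    Node { keys }
}

struct Cache {
    capacity: usize,
    hot: HashMap<String, (Node, u64)>, // deserialized value + hit count
    disk: HashMap<String, Vec<u8>>,    // stand-in for the real filesystem
}

impl Cache {
    fn new(capacity: usize) -> Self {
        Cache { capacity, hot: HashMap::new(), disk: HashMap::new() }
    }

    // Mutable access to the deserialized value; a cache hit does no
    // (de)serialization at all.
    fn get_mut(&mut self, path: &str) -> &mut Node {
        if !self.hot.contains_key(path) {
            if self.hot.len() >= self.capacity {
                self.evict_least_frequent();
            }
            let node = self
                .disk
                .get(path)
                .map(|b| deserialize(b))
                .unwrap_or(Node { keys: Vec::new() });
            self.hot.insert(path.to_string(), (node, 0));
        }
        let entry = self.hot.get_mut(path).unwrap();
        entry.1 += 1;
        &mut entry.0
    }

    // Only the coldest entry gets serialized back to the backing store.
    fn evict_least_frequent(&mut self) {
        if let Some(path) = self
            .hot
            .iter()
            .min_by_key(|(_, (_, hits))| *hits)
            .map(|(p, _)| p.clone())
        {
            let (node, _) = self.hot.remove(&path).unwrap();
            self.disk.insert(path, serialize(&node));
        }
    }
}

fn main() {
    let mut cache = Cache::new(1);
    cache.get_mut("a").keys.push(1); // mutate in memory, no sync
    cache.get_mut("a").keys.push(2); // hit: still no serialization
    cache.get_mut("b").keys.push(9); // evicts "a", writing it back
    assert_eq!(
        cache.disk.get("a"),
        Some(&serialize(&Node { keys: vec![1, 2] }))
    );
}
```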

Again, I will update the documentation to clarify.

spullara|4 years ago

This is called mmap?

nonameiguess|4 years ago

I honestly don't think I like this at the application level. You're removing a degree of freedom from operators and users. I have a ton of memory on all of my home devices, and usually just take the working directories for frequently used applications and mount them as tmpfs. I do the same thing for the working directories of applications I deploy at work, where we have complete freedom to deploy memory-optimized servers with lots of RAM. Putting an extra in-memory cache on top of an OS filesystem that is already in memory is an unnecessary extra step that doubles the memory use of each file, and it can't be turned off without patching and recompiling your application. The OS is already smart enough not to add a cache on top of tmpfs.

haydnv|4 years ago

I don't know that it's fair to say it's "doubling" the memory use of each file because the OS cache memory is still "free" from the perspective of an application. Where it comes in handy is an applications like databases or training an ML model where there are hot spots that get accessed/updated extremely frequently--then the application doesn't have to incur serialization overhead in order to read/write the data that the file encodes (although as another poster pointed out it might also be possible to do this with mmap).

hexo|4 years ago

Is this some kind of exercise or what? I mean, my OS already has a filesystem cache; why would I need another one? More buffer bloat?

tedunangst|4 years ago

> This crate assumes that file paths are valid Unicode and may panic if it encounters a file path which is not valid Unicode.

Love it when a program could simply work, but chooses to fail because it doesn't like my life choices.

habibur|4 years ago

This will hardly increase performance, if at all. Note that the OS caches frequently used files in memory. When you use this, you are basically competing with the OS for the in-memory file cache.

This library might have other uses that I am not aware of.

fsckboy|4 years ago

are there embedded systems, including potentially realtime ones, running OSes or CPUs that don't provide caches or page-fault support?

__s|4 years ago

Unfortunately this doesn't meet my use case (which they list as an intended use case): serving static assets over http. I currently use an in-memory cache without eviction. It doesn't meet my requirements because I store the in-memory content precompressed

https://github.com/serprex/openEtG/blob/master/src/rs/server...

edit: seems it can. Nice

jagged-chisel|4 years ago

I think I don't understand the problem. Precompressed files are still files and can be cached.

sagichmal|4 years ago

Don't filesystems already do this? Like, really really well?

minroot|4 years ago

Why did you link to docs.rs instead of to the source repository?

mountainboy|4 years ago

requires tokio / async. ugh. I'm out.

axegon_|4 years ago

Why hasn't anyone told me about this?!?!? I love you so much for posting, I needed something like this for a personal project I'm fiddling around with in my spare time.

Koshkin|4 years ago

They just did.