
DwarFS: A fast high compression read-only file system

280 points | daantje | 5 years ago | github.com

111 comments


dj_mc_merlin|5 years ago

> I started working on DwarFS in 2013 and my main use case and major motivation was that I had several hundred different versions of Perl that were taking up something around 30 gigabytes of disk space, and I was unwilling to spend more than 10% of my hard drive keeping them around for when I happened to need them.

It fills me with joy that someone has been coding a filesystem for 7 years because Perl installs were taking up too much space. Necessity is the mother of invention.

mhx77|5 years ago

Hahaha, I haven't actually been coding on this for that long, it's more that I coded for a few weeks back in 2013 and only found the motivation to resurrect the whole thing a few weeks back.

pjc50|5 years ago

Nowadays you can have the same problem with Python and Javascript too!

rurban|5 years ago

I have much the same problem as mhx: several hundred huge perl versions that are almost identical, taking up enormous amounts of disk space. E.g. I had to move most of them from my SSD to a spinning disk. I really want to move them back.

Thanks to mhx I can now move them back to my fast disk. This is also perfect for testers.

deepstack|5 years ago

nice, wonder how this compares with MongoDB's compression of files and objects. Seems like a great foundation for archiving data.

fefe23|5 years ago

It looks like the benefit is some kind of block or file deduplication.

@OP: Can you please explain why you keep 50 gigs of perl around? :-)

I use compressed read-only file systems all the time to save space on my travel laptop. I have one squashfs for Firefox, one for the TeX base install, one for LLVM, one for qemu, one for my cross compiler collection. I suspect the gains over squashfs will be far less pronounced than for the pathological "400 perl versions" case.

mhx77|5 years ago

> @OP: Can you please explain why you keep 50 gigs of perl around? :-)

Sure. I've been the maintainer of a perl portability module (Devel::PPPort) for a long time and every release was tested against basically every possible version (and several build flag permutations) of perl that was potentially out in the wild.

rkeene2|5 years ago

AppFS provides global file deduplication and also solves the distribution problem; you don't need to have all the resources locally.

slagfart|5 years ago

Perhaps not strictly on-topic, but is there any equivalent FS/program in Windows that will allow users to have read-only access to files that are deduplicated in some way?

My use case is the MAME console archives, which are now full of copies of games from different localisations with 99% identical content. 7Z will compress them together and deduplicate, but breaks once the archive exceeds a few gigs.

These archives are already compressed (CHD format, which is 7Z + FLAC for ISOs), but it's deduplication that needs to happen on top of these already compressed files that I'm struggling with.

Sorry for the off-topic ask!

rakoo|5 years ago

It's probably a hack, but you can try "backing up" your files with bup, restic or borg, and mounting the resulting snapshot with FUSE.

aidenn0|5 years ago

You probably need to de-duplicate before compression, at least for many compression schemes.
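This point is easy to demonstrate with zlib as a stand-in compressor: its match window is only 32 KiB, so a duplicate that occurred megabytes earlier is invisible to it, while deduplicating first (modeled below by simply dropping the second copy) shrinks the output by the full duplicate size. A small sketch:

```python
import os
import zlib

# Two identical 1 MiB "files" separated by 1 MiB of incompressible data.
# zlib's window is only 32 KiB, so when it reaches the second copy it
# cannot reference the first one any more.
chunk = os.urandom(1024 * 1024)
filler = os.urandom(1024 * 1024)
stream = chunk + filler + chunk

compressed = zlib.compress(stream, 9)          # roughly 3 MiB: repeat not exploited
deduped = zlib.compress(chunk + filler, 9)     # roughly 2 MiB: duplicate stored once
```

The same reasoning is why archive-level dedup (or a compressor with a very large dictionary) has to run before, or instead of, per-file compression.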

robbyt|5 years ago

s/ask/request/g

Scaevolus|5 years ago

Neat! I'd like to see benchmarks for more typical squashfs payloads: embedded root filesystems totalling under 100 MB. Small docker images like alpine would be a decent proxy. The given corpus of thousands of perl versions is more appropriate for comparison against git.

mhx77|5 years ago

Author here :)

I'll add more benchmarks, this is still WIP and so far I've mainly tried to satisfy my own needs. My intention with DwarFS wasn't to write "a better SquashFS", but to make it better in certain scenarios (huge, highly redundant data) than SquashFS. SquashFS still has the big advantage of being part of the kernel, which makes it a lot more attractive for things like root file systems.

david_draco|5 years ago

I wish there was a semi-compressed transparent filesystem layer which slowly compresses the least recently used files in the background, and un-compresses files upon use. That way you could store much more mostly unused content than space on the disk, without sacrificing accessibility.
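A toy sketch of that policy (not an actual transparent filesystem layer, which would need something like FUSE): a background pass gzips files whose last access time exceeds a threshold, and reads decompress on demand. The function names and the 30-day threshold are made up for illustration:

```python
import gzip
import os
import shutil
import tempfile
import time

AGE_THRESHOLD = 30 * 24 * 3600  # compress files untouched for 30 days

def compress_cold_files(root, now=None):
    """Gzip files whose last access time is older than AGE_THRESHOLD."""
    now = now or time.time()
    for dirpath, _, names in os.walk(root):
        for name in names:
            if name.endswith(".gz"):
                continue  # already compressed
            path = os.path.join(dirpath, name)
            if now - os.stat(path).st_atime > AGE_THRESHOLD:
                with open(path, "rb") as src, gzip.open(path + ".gz", "wb") as dst:
                    shutil.copyfileobj(src, dst)
                os.remove(path)

def read_file(path):
    """Access a file, transparently decompressing (and 'warming') it."""
    if not os.path.exists(path) and os.path.exists(path + ".gz"):
        with gzip.open(path + ".gz", "rb") as src, open(path, "wb") as dst:
            shutil.copyfileobj(src, dst)
        os.remove(path + ".gz")
    with open(path, "rb") as f:
        return f.read()

# Tiny demonstration in a temporary directory: one file with an
# artificially old access time gets compressed, then read back.
root = tempfile.mkdtemp()
p = os.path.join(root, "old.txt")
with open(p, "wb") as f:
    f.write(b"hello cold data")
os.utime(p, (time.time() - 2 * AGE_THRESHOLD,) * 2)
compress_cold_files(root)
recovered = read_file(p)
```

A production version would also need to handle concurrent access, preserve ownership and timestamps, and track free-space pressure rather than a fixed age.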

bufferoverflow|5 years ago

I don't know about you guys, but most of the stuff that takes up space on my drives is:

1) Videos from my DSLR

2) RAW images from my DSLR

3) Various movies / TV series I downloaded

4) Game files (most of which are textures and 3D models)

None of that stuff is really compressible.

rwmj|5 years ago

You could probably build something easily in nbdkit to do this. (Note this is at the block layer). An advantage of nbdkit is you could write the whole thing in the high-level language of your choice, even a scripting language such as Python, which might make it easier to rapidly explore designs.

Having said that I did try to implement a deduplication layer for nbdkit, but what I found was that it wasn't very effective. It turns out that duplicate data in typical VM filesystems isn't common, and the other parts of the filesystem (block free lists etc) were not sufficiently similar to deduplicate given my somewhat naive approach.
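To give the flavor of the nbdkit approach: a Python module exposing a handful of callbacks is enough to serve a block device. Below is a toy content-addressed (deduplicating) device in the shape of nbdkit's Python plugin API (v1-style callbacks where pread returns bytes; treat the exact signatures, and all constants, as assumptions for illustration):

```python
import hashlib

# Toy content-addressed block device in the shape of an nbdkit Python
# plugin: identical 4 KiB blocks are stored only once.
BLOCK = 4096
DEV_SIZE = 64 * 1024 * 1024

ZERO_HASH = hashlib.sha256(b"\0" * BLOCK).hexdigest()
store = {ZERO_HASH: b"\0" * BLOCK}  # unique block contents, keyed by hash
table = {}                          # block index -> hash (zeros implied)

def open(readonly):
    return 1  # single shared handle

def get_size(h):
    return DEV_SIZE

def pread(h, count, offset):
    assert offset % BLOCK == 0 and count % BLOCK == 0
    out = bytearray()
    for i in range(offset // BLOCK, (offset + count) // BLOCK):
        out += store[table.get(i, ZERO_HASH)]
    return bytes(out)

def pwrite(h, buf, offset):
    assert offset % BLOCK == 0 and len(buf) % BLOCK == 0
    for n, i in enumerate(range(offset // BLOCK, (offset + len(buf)) // BLOCK)):
        block = bytes(buf[n * BLOCK:(n + 1) * BLOCK])
        key = hashlib.sha256(block).hexdigest()
        store[key] = block  # duplicate blocks collapse to one entry
        table[i] = key

# Demonstration: write three blocks, two of which are identical.
pwrite(1, b"A" * BLOCK + b"B" * BLOCK, 0)
pwrite(1, b"A" * BLOCK, 2 * BLOCK)
```

As rwmj notes, this naive block-aligned scheme is exactly the kind that finds little duplication in real VM images; it only wins when identical content happens to land on identical block boundaries.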

yourapostasy|5 years ago

I believe the term of art that applies here is "Hierarchical Storage Management". Along with automatically moving data between high-cost and low-cost storage media, the low-cost storage media for your filesystem of choice for the kind of compressing you described can simply be fast disk on a compressing filesystem.

pjc50|5 years ago

I believe NT file compression works like this, and before that MSDOS "DriveSpace" ...

siscia|5 years ago

Check out CVMFS.

It is not what you describe but it can help.

hachari|5 years ago

Why not use BTRFS with file deduplication and transparent compression (zstd specifically)?

TimTheTinker|5 years ago

This is a read-only file system, so it's able to exploit certain properties of that: locating similar files next to each other, for example.

tutfbhuf|5 years ago

Is Btrfs stable yet?

stabbles|5 years ago

mksquashfs supports gzip, xz, lzo, lz4 and zstd too; you can also compile it to use any of those as the default instead of gzip.

Does the performance benchmark show DwarFS versus single-threaded gzip compressed SquashFS?

Hello71|5 years ago

> $ time mksquashfs install perl-install.squashfs -comp zstd -Xcompression-level 22

> Parallel mksquashfs: Using 12 processors

gnosek|5 years ago

Is this viable as a backup/archive format? Would it make sense to e.g. have an incremental backup as a DwarFS file, referring to the base backup in another DwarFS file?

iforgotpassword|5 years ago

I guess something like borgbackup would be better suited for this.

You could theoretically try to build this with dwarfs, by using overlayfs and then compressing the upper layer again with dwarfs, but that sounds pretty fragile and cumbersome.

giovannibonetti|5 years ago

This could be awesome for compressing Docker image layers. After all, they can be huge (hundreds of MB) and, if the Dockerfile is well organized, each step should contain a fairly homogeneous set of files (like apt-get artifacts, for example).

botto|5 years ago

It would be amazing to see this work on OpenWrt; I think it would fit perfectly, using fewer resources than squashfs. Another good fit would be a Raspberry Pi, for scenarios where power can be cut at any time.

mhx77|5 years ago

Author here :) I'm not sure low-spec hardware is necessarily the best use case for DwarFS. It doesn't necessarily use fewer resources than SquashFS, although it can create smaller file systems using much less CPU. However, it'll still need a reasonable amount of memory at run time to cache active, decompressed blocks.

rektide|5 years ago

I was thinking the same thing! I'm not sure what it would take to make /rom a FUSE based filesystem, to make it bootable. The current boot process involves the bare kernel mounting SquashFS to find its init=/etc/preinit and booting from there[1].

Would love some theorycrafting on possible ways to work with DwarFS being a FUSE filesystem.

[1] https://openwrt.org/docs/techref/process.boot

jedberg|5 years ago

Does anyone remember back in the 90s when we'd install DoubleSpace to get on the fly compression? And then they built it into MSDOS 6 and that was a major game changer?

tssva|5 years ago

It was DoubleDisk until Microsoft licensed it and relabeled it as DoubleSpace. Stacker was the far more popular drive compression solution until MSDOS 6 was released.

evantahler|5 years ago

Oh wow. This would be excellent for language dependencies: ruby gems, node_modules, etc. Integrating this with something like pnpm [1], which already keeps a global store of dependencies, would be a great fit. [1] - https://pnpm.js.org

rurban|5 years ago

So I tried it out on my 17 GB of perl builds (just on my laptop, not on my big machine).

mkdwarfs crashed with recursive links (1-level, just pointing to itself), and when I removed dirs that were part of the input path while mkdwarfs was running. Which is fair, I assume.

mhx77|5 years ago

> mkdwarfs crashed with recursive links (1-level, just pointing to itself)

That's odd, it shouldn't crash with links at all, as it doesn't actively follow links. Can you please file a bug if you can reproduce this?

> and when I removed dirs while running mkdwarfs, which were part of of the input path

I guess this is fair, but I'll try to take a look anyway. :-)

> On success, mkdwarfs needed 1 hr, and reduced 219 dirs to a size of 970 MB. Not just source files, but also the build and install object files.

My 500 MB image with the 1100+ perls is just installations, from which I've actually removed libperl.a as I've never needed it and it really bloats the image. I've got a separate image with debug information (everything built with -g in case I need to debug the binaries), so the binaries in the main image are essentially all stripped. If I need to debug, I'll just mount the debug image as well, which contains the source files and the stripped debug data.

> 1 hr is a lot, but just think how long squashfs would have needed.

It might be worth trying a lower compression level, especially if you find that mkdwarfs is CPU bound and not I/O bound.

rurban|5 years ago

On success, mkdwarfs needed 1 hr, and reduced 219 dirs to a size of 970 MB. Not just source files, but also the build and install object files.

1 hr is a lot, but just think how long squashfs would have needed. Totally impractical. Thanks mhx

ed25519FUUU|5 years ago

I noticed that enabling compression on ZFS made a huge difference to the size of some of my largely-text-file partitions. I never turned on deduplication because I don't want to bother with the memory overhead, but I bet that would help even further.

ggm|5 years ago

Most ZFS howtos now recommend against dedup because of the prolonged memory cost. Yes, you would get some block-level compression benefit. But you enter the cost/benefit hell of balancing CPU and memory at runtime.

Twirrim|5 years ago

I'm curious, why do you have so many perl installations around? I thought I'd got a fair number of python venvs kicking around for each of the repos I'm dealing with, but nowhere near that many.

isoprophlex|5 years ago

My Python shits have pip requirements that easily dump 3-4 gigs in a venv folder. Do that once or twice a month when starting a new project for a couple of years and it gets messy...

st_goliath|5 years ago

Circa 2 years ago, I was working on a side project and got so annoyed with the SquashFS tooling that I decided to fix it instead. After getting stuck in the spaghetti code behind mksquashfs, I decided to start from scratch, having learnt enough about SquashFS to roughly understand the on-disk format.

Because squashfs-tools seemed pretty unmaintained in late 2018 (no activity on the official site & git tree for years, and only one mailing list post, "can you do a release?", which got a very annoyed response), I released my tooling as "squashfs-tools-ng" and it is currently packaged by a handful of distros, including Debian & Ubuntu.[1]

I also thoroughly documented the on-disk format, after reverse engineering it[2] and made a few benchmarks[3].

For my benchmarks I used an image I extracted from the Debian XFCE LiveDVD (~6.5GiB as tar archive, ~2GiB as XZ compressed SquashFS image). By playing around a bit, I also realized that the compressed meta data is "amazingly small", compared to the actual image file data and the resulting images are very close to the tar ball compressed with the same compressor settings.

I can accept a claim of being a little smaller than SquashFS, but the claimed difference makes me very suspicious. From the README, I'm not quite sure: Does the Raspbian image comparison compare XZ compression against SquashFS with Zstd?

I have cloned the git tree and installed dozens of libraries that this folly thingy needs, but I'm currently swamped in CMake errors (haven't touched CMake in 8+ years, so I'm a bit rusty there) and the build fails with some still missing headers. I hope to have more luck later today and produce a comparison on my end using my trusty Debian reference image which I will definitely add to my existing benchmarks.

Also, is there any documentation on how the on-disk format for DwarFS and its packing works, which might explain the incredible size difference?

[1] https://github.com/AgentD/squashfs-tools-ng

[2] https://github.com/AgentD/squashfs-tools-ng/blob/master/doc/...

[3] https://github.com/AgentD/squashfs-tools-ng/tree/master/doc

mhx77|5 years ago

This is really cool, I'll give squashfs-tools-ng a try!

> Does the Raspbian image comparison compare XZ compression against SquashFS with Zstd?

That's correct. It's not an exhaustive matrix of comparisons.

> Also, is there any documentation on how the on-disk format for DwarFS and it's packing works which might explain the incredible size difference?

The format as of 0.2.0 is actually quite simple. It's a list of compressed data blocks, followed by a metadata block (and a schema describing the metadata block). The metadata format is implemented by and documented in [1].

There are probably 3 things that contribute to compression level:

1) Block size. DwarFS can use arbitrary block sizes (artificially limited to powers of two), and uses a much larger block size (16M) by default. SquashFS doesn't seem to be able to go higher than 1M.

2) Ordering files by similarity.

3) Segment deduplication. If segments of files overlap with previously seen data, these segments are referenced instead of written again. The minimum size of these segments can be configured and defaults to 2k. For my primary use case, of the 47.6 GB of input data, 28.2 GB are saved by file-level deduplication, and another 12.4 GB by this segment-level deduplication. So before the "real" compression algorithms actually kick in, there are only 7 GB of data left. As these are ordered by similarity, and stored in rather big blocks, some of the 16M blocks can actually be compressed down to less than 100k.
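The block-size point (1) can be illustrated with off-the-shelf LZMA: redundancy that lies outside the compressor's window is invisible to it, so a bigger block (here modeled by the LZMA2 dictionary size) captures repeats that a smaller one misses. A sketch using Python's lzma module; the sizes are arbitrary:

```python
import lzma
import os

# Two identical 2 MiB chunks, so the second copy starts 2 MiB into the
# stream. A compressor can only exploit the repeat if its window
# (dictionary) reaches back that far.
chunk = os.urandom(2 * 1024 * 1024)
stream = chunk + chunk

def xz(data, dict_size):
    filters = [{"id": lzma.FILTER_LZMA2, "dict_size": dict_size}]
    return lzma.compress(data, format=lzma.FORMAT_XZ, filters=filters)

small_window = xz(stream, 1 << 20)   # 1 MiB window: repeat is out of reach
large_window = xz(stream, 16 << 20)  # 16 MiB window: repeat collapses
```

The same mechanism is why a 16M DwarFS block can fold in redundancy that 1M SquashFS blocks cannot.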

[1] https://github.com/mhx/dwarfs/blob/main/thrift/metadata.thri...
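A heavily simplified model of the segment deduplication described in point 3, with fixed 2 KiB boundaries standing in for DwarFS's matching at arbitrary byte offsets; the function names and the op-list encoding are made up for illustration:

```python
import hashlib
import os

SEGMENT = 2048  # minimum match granularity, mirroring the 2k default above

def segment_dedup(data):
    """Split the input into fixed-size segments and replace repeats with
    back-references to the first occurrence. (DwarFS matches segments at
    arbitrary offsets via a rolling hash; fixed alignment is a simplification.)"""
    seen = {}  # segment hash -> offset of first occurrence
    ops = []   # ("literal", bytes) or ("copy", offset, length)
    for pos in range(0, len(data), SEGMENT):
        seg = data[pos:pos + SEGMENT]
        key = hashlib.sha256(seg).digest()
        if key in seen:
            ops.append(("copy", seen[key], len(seg)))
        else:
            seen[key] = pos
            ops.append(("literal", seg))
    return ops

def reassemble(ops):
    out = bytearray()
    for op in ops:
        if op[0] == "literal":
            out += op[1]
        else:
            _, offset, length = op
            out += out[offset:offset + length]
    return bytes(out)

# Demonstration: five segments, only two distinct, so only two are
# written as literals and the rest become cheap back-references.
a, b = os.urandom(SEGMENT), os.urandom(SEGMENT)
data = a + b + a + b + a
ops = segment_dedup(data)
literal_bytes = sum(len(op[1]) for op in ops if op[0] == "literal")
```

Only the literal bytes then need to go through the block compressor, which is where the 47.6 GB to 7 GB reduction described above comes from.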

hawski|5 years ago

I just want to say thank you for squashfs-tools-ng. For my use case I had to patch mksquashfs, and your tool fits just right. I've yet to switch, however.

Hello71|5 years ago

> You can pick either clang or g++, but at least recent clang versions will produce substantially faster code

have you investigated why this might be the case?

mhx77|5 years ago

> have you investigated why this might be the case?

Very briefly. It looks like clang has a different strategy breaking up the code (which is mostly C++ templates) into actual functions vs. inlining it, and the hot code ultimately performs fewer function calls with clang than it does with gcc. But this is nowhere near a proper analysis of what's going on. :)

aarchi|5 years ago

I have several highly-redundant NTFS backups that I'd like to compress into a read-only fs. Can DwarFS preserve all NTFS metadata?

kristianp|5 years ago

I think it uses FUSE, which is linux specific.

saurabhnanda|5 years ago

Is this useful for long-term log storage? say, from a typical webapp (eg. Nginx logs, Rails logs, Postgres logs, etc)

throwmemoney|5 years ago

Compression - anyone using lrzip on production servers?

GGfpc|5 years ago

What are the use cases for a read only file system?

fishermanbill|5 years ago

Game asset packages: all game assets are read-only and need to be compressed, and nowadays with SSDs you don't want duplication.

Just to clarify that last statement (and something to think about): with HDDs you actually want duplicate assets so that you don't cause seeks, which are VERY slow on the 5400 rpm HDDs still found on some/a lot of systems.

FroshKiller|5 years ago

Have you ever used a CD-ROM or DVD-ROM?

fsiefken|5 years ago

the use case for a read-only compressed filesystem is that one..

* can search archived files potentially faster, because read access is potentially faster

* can fit more data on bootable media

pjc50|5 years ago

Booting. Arguably all containers, too.

rwmj|5 years ago

squashfs is widely used in Linux install media.