Well, first of all: I'm not trying to bash BTRFS at all; it probably just isn't meant for me. However, I'm trying to find out whether it is really considered stable (as in rock solid) or whether what happened might have been a hardware problem on my system.
I used cryptsetup with BTRFS because I encrypt all of my stuff. One day, the system froze and after reboot the partition was unrecoverably gone (the whole story[1]). Not a real problem because I had a recent backup, but somehow I lost trust in BTRFS that day. Anyone experienced something like that?
Since then I switched to ZFS (on the same hardware) and never had problems - though it was a real pain to set up until I finished my script [2], which is still kind of a collection of dirty hacks :-)
Yes, my story with btrfs is quite similar: used it for a couple of years, it suddenly threw some undocumented error and refused to mount. I asked about it on the dev IRC channel and was told it was apparently a known issue with no solution - have fun rebuilding from backups. No suggestion that anyone was interested in documenting this issue, let alone fixing it.
These same people are the only ones in the world suggesting btrfs is "basically" stable. I'll never touch this project again with a ten-foot pole; AFAIC it's run by children. I'll trust adults with my data.
I've used it as my desktop's main filesystem for many years and haven't had any problems. I take regular snapshots with snapper. I run the latest kernel, so ZFS is not an option.
That said, I avoid it like the plague on servers: to get acceptable performance (or avoid fragmentation) with VMs or databases you need to disable COW, which disables many of its features, so it's better to just roll with XFS (and get pseudo-snapshots anyway).
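For reference, disabling CoW for those workloads is usually done with the `C` file attribute. A minimal sketch (the path is an example, and the flag only takes effect for files created after it is set on the directory):

```shell
# Disable copy-on-write for a directory that will hold VM images or
# database files; existing files keep their CoW behaviour:
mkdir -p /var/lib/libvirt/images
chattr +C /var/lib/libvirt/images
lsattr -d /var/lib/libvirt/images   # shows 'C' when the flag is set

# Caveat: +C files lose btrfs checksumming and compression, and a
# snapshot of them still forces one CoW pass on the next write.
```

Alternatively the whole filesystem can be mounted with `-o nodatacow`, with the same trade-offs.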
I worked on a Linux distro some years ago that had to pull btrfs, long after people had started saying that it's truly solid, because customers had so many issues. It's probably improved since, but it's hard to know. I'm surprised Fedora Workstation defaults to it now. I'm hoping bcachefs finds its way, in the next few years, to being the rock-solid FS it aims to be.
My btrfs filesystem has been slowly eating my data for a while; large files will find their first 128k replaced with all nulls. Rewriting the file will sometimes fix it temporarily, but it'll revert back to all nulls after some time. That said, this might be my fault for using raid6 for data and trying to replace a failing disk a while ago.
Have you used 4K sectors with cryptsetup? Many distributions still default to 512-byte sectors if the SSD reports 512 bytes as its logical sector size, and with 512-byte sectors there is a heavier load on the system.
I was reluctant to use BTRFS on my Linux laptop but for the last 3 years I have been using it with 4K cryptsetup with no issues.
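For anyone wanting to check this, a sketch of the relevant commands (device names are examples; `luksFormat` destroys data on the target partition):

```shell
# See what the drive reports; 512/512 is common even on SSDs that are
# 4K internally:
cat /sys/block/nvme0n1/queue/logical_block_size
cat /sys/block/nvme0n1/queue/physical_block_size

# Format the LUKS2 container with 4K sectors explicitly
# (requires cryptsetup 2.x; WIPES the partition):
cryptsetup luksFormat --type luks2 --sector-size 4096 /dev/nvme0n1p2

# Verify the sector size of an opened mapping:
cryptsetup status cryptroot | grep -i sector
```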
I wonder if I can use a smaller SSD for this and make it avoid HDD wakeups due to some process reading metadata. That alone would make me love this feature.
I think you'd rather want a cache device (or some more complicated storage tiering) for that, so that both metadata and frequently accessed files get moved to it dynamically based on access patterns.
AFAIK btrfs doesn't support that. LVM, bcache, device mapper, bcachefs and ZFS support it (though ZFS would require separate caches for reads and for synchronous writes). And I don't know which of these let you control the writeback interval.
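As one concrete example of the LVM route, a hedged lvmcache sketch (volume group `vg0`, LV `data`, and device names are all examples):

```shell
# Add the SSD partition to the existing volume group:
pvcreate /dev/sdb1
vgextend vg0 /dev/sdb1

# Carve out a cache volume on the SSD and attach it to the
# HDD-backed LV in writeback mode:
lvcreate -L 100G -n cache0 vg0 /dev/sdb1
lvconvert --type cache --cachevol cache0 --cachemode writeback vg0/data

# dm-cache behaviour is tunable per-LV, e.g.:
lvchange --cachesettings 'migration_threshold=8192' vg0/data
```

Writeback timing itself is governed by the dm-cache policy rather than a single interval knob, which is part of why the original question is hard to answer generically.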
Most likely yes, but the also-envisioned periodic repacking of multiple small data extents into one big extent that gets written to the HDD would wake the HDD up. And if you made the SSD "metadata only", browser caches and logging would keep the HDD spinning anyway.
This feature is for performance, not the case you described.
Just buy more RAM and you get that for free. Really, I guess that's my sense of patches like this in general: sure, filesystem research has a long and storied history, and it's a very hard problem in general that attracts some of the smartest people in the field to do genius-tier work...
Does it really matter in the modern world where a vanilla two-socket rack unit has a terabyte of DRAM? Everything at scale happens in RAM these days. Everything. Replicating across datacenters gets you all the reliability you need, with none of the fussing about storage latency and block device I/O strategy.
I feel a bit lost here. In the good old days, I ran ext2/ext3/ext4 and forgot about it, or ReiserFS if I felt fancy (which was great until it wasn't).
Now there is a Cambrian explosion going on: ext4, XFS, btrfs, bcachefs, ZFS. They each have their pros and cons, and it takes a while before you find out you've run into an expensive limit. E.g. ext3/4 is good until it runs out of inodes. ZFS is good, but supports only one passphrase for full-disk encryption, and I want to store a second one with IT. According to the jungle drums, btrfs eats your data once in a while. Bcachefs stupidly tries to get itself rejected from Linux - not good for long-term stability. I'm on XFS now, but let's see how that ends.
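As a quick check for the ext3/4 inode limit mentioned above:

```shell
# Check inode usage; IUse% near 100% means the filesystem can run out
# of inodes long before it runs out of space (a classic ext3/4 pitfall):
df -i /

# ext4 fixes the inode count at mkfs time. If you expect millions of
# small files, allocate more inodes up front, e.g. one inode per 8 KiB
# of space (device name is an example; this formats the partition):
#   mkfs.ext4 -i 8192 /dev/sdXn
```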
That doesn't really match my recollection of the timeline. I remember XFS being mentioned in the same sources contemporary with ReiserFS (it predates ext3, even!). ZFS is about a decade newer, but not by much, and was probably the main reason most people paid any real attention to their filesystem at that point, since it meaningfully added features not available in anything else. BTRFS was basically a "let's build the same thing, but in Linux", but seems to have kind of stalled in terms of reliability (or at least reputation). Bcachefs is very much the new kid on the block, with a bit more of a focus on reaching the reliability of ZFS, but it is certainly still not something to trust even as much as BTRFS. So it doesn't really feel like a Cambrian explosion - more like a new filesystem every ~5 years or so at a reasonably steady pace.
(pretty much the 3 filesystems I think about ATM are ext4 as a standard boot drive, zfs for large, long-lived data storage, and FAT/exFAT for interoperability with windows. It'd have to be a pretty niche use-case for me to consider another option. BcacheFS sounds really interesting but only to experiment with right now)
FDE with ZFS is kind of fighting the way things are meant to be done with ZFS. ZFS allows encryption on a per-dataset/zvol basis, which is the officially recommended way to do encryption (see https://arstechnica.com/gadgets/2021/06/a-quick-start-guide-...)
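That per-dataset model also addresses the "second key for IT" wish above, since each dataset can use its own key. A sketch (pool name `tank`, dataset names, and the key path are examples):

```shell
# Passphrase-encrypted dataset; only this dataset's data is encrypted:
zfs create -o encryption=on -o keyformat=passphrase tank/private

# A second dataset with a raw 32-byte key file you can hand to IT
# separately for escrow:
dd if=/dev/urandom of=/root/it-escrow.key bs=32 count=1
zfs create -o encryption=on -o keyformat=raw \
    -o keylocation=file:///root/it-escrow.key tank/shared

# After reboot, load the key and mount:
zfs load-key tank/private
zfs mount tank/private
```

Note this still isn't quite FDE: pool metadata such as dataset names and sizes remains unencrypted.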
And XFS will, on unexpected shutdowns, sometimes leave you with files that previously contained data now being 0 bytes.
I only really trust ZFS on Linux, but it's such a bother that it can't be upstreamed and isn't fully integrated with the native Linux caching the way the native filesystems are. Ext is fine too, but it's missing features like checksumming and compression, and has the limitations you mentioned.
forza_user|7 months ago
- https://github.com/kakra/linux/pull/36
- https://wiki.tnonline.net/w/Btrfs/Allocator_Hints
What do you think?
dontdoxxme|7 months ago
It seems these patches possibly fix that.
sandreas|7 months ago
1: https://forum.cgsecurity.org/phpBB3/viewtopic.php?t=13013
2: https://github.com/sandreas/zarch
riku_iki|7 months ago
It looks like you didn't use RAID, so any FS could fail in case of disk corruption.