I used to be in the "why BTRFS" camp and would religiously install plain ext4, without LVM, on Fedora for my laptops, desktops, and servers. When I saw release after release persist in offering BTRFS by default, I decided to try it for a recent laptop install. Honestly, given the appeal of deduplication, checksumming, snapshotting, and the many other features that modern filesystems (e.g., ZFS) generally come with, I just took the plunge and installed it.
I can safely say it has not presented any problem for me thus far, and I am at the stage of my life where I realize that I don't have the time to fiddle as much with settings. If the distributions are willing to take that maintenance on their shoulders, I'm willing to trust them and deal with the consequences – at least I know I'm not alone.
It's obviously not there as a NAS filesystem, ZFS drop-in replacement, etc. But if what you take away from that is that BTRFS is no good as a filesystem on a single drive system, you're missing out. Just a few weeks ago I used a snapshot to get myself out of some horrible rebase issue that lost half my changes. Could I have gone to the reflog and done other magic? Probably. But browsing my .snapshots directory was infinitely easier!
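A sketch of that kind of recovery, assuming a snapper-style `/.snapshots` layout (the snapshot number and paths here are illustrative):

```shell
# List available snapshots (snapper keeps them under /.snapshots/<N>/snapshot/).
ls /.snapshots/
# Browse the pre-rebase state of the project.
ls /.snapshots/42/snapshot/home/user/project/
# Copy the lost changes back into the live tree.
cp -a /.snapshots/42/snapshot/home/user/project/lost-file.c ~/project/
```

Because snapshots are just directories, ordinary `ls`/`cp`/`diff` work on them — no special recovery tooling needed.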
BTRFS works fine. I use it on my everyday laptop without problems. Compression can help on devices without much disk space, and so can copy-on-write. However, BTRFS has its drawbacks; for example, it's tricky to have a swapfile on it (it's now possible with some special file attributes).
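The "special attributes" amount to creating the file with copy-on-write (and hence compression) disabled before turning it into swap; a sketch, assuming kernel 5.0+ and a single-device filesystem (run as root):

```shell
truncate -s 0 /swapfile    # file must be empty for the attribute to stick
chattr +C /swapfile        # NOCOW: no copy-on-write, no compression
fallocate -l 4G /swapfile  # preallocate; a swapfile may not have holes
chmod 600 /swapfile
mkswap /swapfile
swapon /swapfile
```

Recent btrfs-progs also ship a `btrfs filesystem mkswapfile` subcommand that performs these steps in one go.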
Also, I wouldn't trust BTRFS for data archiving: ext4 is a proven (and simpler) filesystem, so it's less likely to become corrupt, and you're more likely to be able to recover data from it if it does become corrupt (or the disk develops bad sectors and that sort of thing).
I have not used BTRFS for years, but I remember at some point a BTRFS regression prevented me from booting my system. It is hard to regain trust after such a meltdown from such a fundamental component. That said, I believe my Synology NAS uses btrfs and it has never had an issue.
It certainly used to be the case that BTRFS had some nasty behaviour patterns when it started to run low on space. It could well be that it has not presented any problem for you yet.
On the other hand, those days might be behind it. I haven't kept track.
Are you using a UPS on the desktop? A recent HN thread highlighted BTRFS issues, especially with regard to data loss on power loss. There's also a "write hole" issue on some RAID configurations, RAID 5 or 6 I think.
That said, I'm thinking about leaving a BTRFS partition unmounted and mounting it only to perform backups, taking advantage of the snapshotting features.
I was an early adopter, and some bad experiences early on made it a bitter pill. I swore it off for a decade, and about a year and a half ago I came back around. It's MUCH MUCH better now. With automated "sensible" settings for the btrfsmaintenance tools, it's actually just fine now.
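For reference, the "sensible" settings mostly come down to a periodic scrub plus a filtered balance — roughly what the btrfsmaintenance scripts schedule (thresholds here are illustrative; run as root on a btrfs mount):

```shell
# Verify checksums on all data and metadata (typically run monthly).
btrfs scrub start -B /
# Rewrite data chunks that are less than half full and metadata chunks
# under 30% full, returning freed chunks to the allocator. This guards
# against the old "out of space while df still shows free space" mode.
btrfs balance start -dusage=50 -musage=30 /
```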
Anecdata: Once upon a time I installed SuSE with its default choice of ReiserFS (v3) as the root partition. A couple of months later that filesystem was dead beyond repair. I don't know whether I did something wrong, but I've been very wary of "defaults" ever since. That said, that was a different time, and I tend to see a ZFS or a BTRFS in my near future.
One slightly weird thing about btrfs is that some software (e.g. OBS) seems to have a hard time getting the free space on disk, maybe because it assumes the usual way of getting free space works (which it doesn't on btrfs).
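Most software asks the kernel via `statfs(2)` — the same number `df` reports — which btrfs can only estimate, since real free space depends on the RAID profile and how much unallocated space remains. Comparing the generic view with btrfs's own accounting (the second command needs a btrfs mount, so it is left commented):

```shell
# The statfs-based number that ordinary applications see:
df --output=avail -B1 / | tail -n1
# btrfs's own, more detailed breakdown (data vs. metadata vs. system):
# btrfs filesystem usage /
```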
> If the distributions are willing to take that maintenance on their shoulders, I'm willing to trust them and deal with the consequences – at least I know I'm not alone.
But then they make changes that add an insane amount of complexity, and suddenly you're running into random errors and googling all the time to try to find the magical fixes to all the problems you didn't have before.
Although this would be an interesting way to drag some of my old NTFS filesystems kicking & screaming into the 21st century, I'd never do one of these in-place conversions again. I tried to go from ext3 to btrfs several years ago, and it would catastrophically fail after light usage. (We're talking less than a few hours of desktop-class usage. In retrospect I think it was `autodefrag/defragment` that would kill it.) I tried that conversion a few times and it never worked; I think I even tried to go from ext3->ext4->btrfs. This was on an Arch install, so (presumably) it was the latest and greatest kernel & userspace available at the time.
I eventually gave up (/ got sick of doing restores) and just copied the data into a fresh btrfs volume. That worked "great" up until I realized (a) I had to turn off CoW for a bunch of things I wanted to snapshot, (b) you can't actually defrag in practice because it unlinks shared extents, and (c) btrfs on a multi-drive array has a failure mode that will leave your root filesystem read-only, which is just a footgun that shouldn't exist in a production-facing filesystem. I should add that these were not particularly huge filesystems: the ext3 conversion fiasco was ~64G, and my servers were ~200G and ~100G respectively. I was also doing "raid1"/"raid10" style setups, not exercising the supposedly broken raid5/raid6 code in any way.
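For point (a), disabling CoW is typically done per directory with the NOCOW file attribute, so that new files (VM images, database files) inherit it; a sketch with an illustrative path:

```shell
mkdir -p /srv/vm-images
chattr +C /srv/vm-images   # new files created here will be NOCOW
lsattr -d /srv/vm-images   # the 'C' flag should now be listed
```

Note that NOCOW files also lose checksumming, and taking a snapshot still forces one CoW pass on them — which is the tension the commenter ran into.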
I think I probably lost three or four filesystems which were supposed to be "redundant" before I gave up and switched to ZFS. Between (a) & (b) above btrfs just has very few advantages compared to ZFS. Really the only thing going for it was being available in mainline kernel builds. (Which, frankly, I don't consider that to be an advantage the way the GPL zealots on the LKML seem to think it is.)
> ...btrfs just has very few advantages compared to ZFS. Really the only thing going for it was being available in mainline kernel builds.
ZFS doesn't have defrag, and BtrFS does.
There was a paper recently on purposefully introducing fragmentation, and the approach could drastically reduce performance on every filesystem that was tested: https://www.usenix.org/system/files/login/articles/login_sum...
This can be fixed in BtrFS. I don't see how to recover from this on ZFS, apart from a massive resilver.
I'm pretty dependent on the ability to deduplicate files in place without massive overhead. The built in defrag on BTRFS is unfortunate but I think you can defragment and re-deduplicate.
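Out-of-band deduplication on btrfs is usually driven from userspace through the kernel's dedupe ioctl, e.g. with duperemove (a sketch; the path is illustrative):

```shell
# Hash extents under /data and submit identical ones to the kernel,
# which verifies the contents actually match before sharing them.
duperemove -dr /data
```

Because the kernel re-checks the data before merging extents, re-running this after a defrag is safe — which is what makes the "defragment, then re-deduplicate" cycle workable.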
I don't know, I'm just hoping for a filesystem that can get these features right to come along...
In-place conversion of NTFS? You either still believe in a god or need to google the price of hard drives these days. Honest question though: why would anybody do an in-place conversion of partitions?
>You either still believe in a god or need to google the price of hard drives these days.
That was pretty funny, and I agree a thousand times over. When I was younger (read: had shallower pockets) I was willing to spend time on these hacks to avoid the need for intermediate storage. Now that I'm wiser, crankier, and surrounded by cheap storage: I would rather just have Bezos send me a pair of drives in <24h to chuck in a sled. They can populate while I'm occupied and/or sleeping.
My time spent troubleshooting this crap when it inevitably explodes is just not worth the price of a couple of drives; and if I still manage to cock everything up at least the latter approach leaves me with one or more backup copies. If everything goes according to plan well hey the usable storage on my NAS just went up ;-). I feel bad for the people that will inevitably run this command on the only copy of their data. (Though I would hope the userland tool ships w/ plenty of warnings to the contrary.)
Just because something is cheap doesn't mean I'm fine with buying it for a one-shot use.
Buying an extra disk for just the conversion is wasteful, and then you need space to keep it stashed forever when you never use it. Not at all sustainable, I'd rather leave the hardware on the market for people who _actually_ need it.
So you buy an external 1TB drive just for the sake of the conversion, then create a new partition, then copy your 1TB of data over, then... what? Wipe your PC, boot into a live CD, then copy the partition over? Do you find this easier/more worthwhile than an in-place conversion? How/why?
From the same person that made WinBtrfs and Quibble, a Windows NT Btrfs installable filesystem and bootloader. And yes, with all of that one can boot and run Windows natively on Btrfs, at least in theory.
That's in common with the conversion from ext[234] and ReiserFS, too. It makes it easy both to undo the conversion and to inspect the original image in case the btrfs metadata becomes wrong somehow.
In a former life I ran a web site with a co-founder. We needed to upgrade our main system (we only had 2), and had mirrored RAID1 hard drives, and some backups but nothing great. We tested the new system and it appeared to work fine, so the plan was to take it to the colo, rsync the old system to the new one, make sure everything ran okay, then bring the old system home.
We did the rsync, started the new system, it seemed to be working okay, but then we started seeing some weird errors. After some investigation, it looked like the rsync didn't work right. We were tired, it was getting late, so we decided to put one of the original mirrors in the new system since we knew it worked.
Started up the new system with the old mirror; it ran for a while, then started acting weird too. At that point we only had 1 mirror left, were beat, and decided to pack the old and new systems up, bring it all back to the office (my co-founder's house!), and figure out what was going on. We couldn't afford to lose the last mirror.
After making another mirror in the old system, we started testing the new system. It seemed to work fine with 1 disk in either bay (it had 2). But when we put them in together and started doing I/O from A to B, it corrupted drive A. We weren't even writing to drive A!
For the next test, I put both drives on 1 IDE controller instead of each on its own controller. (Motherboards had 2 IDE controllers, each supported 2 drives). That worked fine.
It turns out there was a defect on the motherboard: if both IDE ports were active, it got confused and sent data to the wrong drive. We needed the CPU upgrade, so we ended up running both drives on 1 IDE port, and it worked fine until we replaced it a year later.
But we learned a valuable lesson: never ever use your production data when doing any kind of upgrade. Make copies, trash them, but don't use the originals. I think that lesson applies to the idea of doing an in-place conversion from NTFS to Btrfs, even if it says it keeps a backup. Do yourself a favor and copy the whole drive first, then mess around with the copy.
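In practice that means cloning the device before touching it; the usual `dd` pattern, demonstrated here on throwaway image files (substitute real block devices at your own risk):

```shell
# On real hardware this would be something like:
#   dd if=/dev/sda of=/dev/sdb bs=1M conv=noerror,sync status=progress
# The same idea on file-backed images:
dd if=/dev/urandom of=/tmp/disk.img bs=1M count=4 status=none
dd if=/tmp/disk.img of=/tmp/disk-copy.img bs=1M status=none
cmp /tmp/disk.img /tmp/disk-copy.img && echo "copies match"
```

For failing disks, `ddrescue` is the better choice, since it retries and maps out unreadable sectors instead of stopping.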
I used btrfs on an EC2 instance with two local SSDs that were mirrored for a CI pipeline running Concourse. It would botch up every few months, and I got to automating setup so that it was easy to recreate. I never did find the actual source of the instance botch-up though. It was either the local PostgreSQL instance running on btrfs, btrfs, or the Concourse software. I pretty much ruled out the PostgreSQL being the originating source of the issue, but didn't get further than that. I don't know if anyone would suspect mdadm.
Other than whatever that instability was, I can say that the performance was exceptional and would use that setup again, with more investigation into causes of the instability.
What I really want: ext4 performance with instant snapshots, plus optional transparent compression when it can improve performance. AFAIK only one filesystem promises to deliver this: bcachefs, but it still isn't mature yet.
Personal anecdote: I've been using BTRFS on my laptop running Manjaro for the past year with no issues. Originally I had it running in an encrypted LUKS partition on a single Samsung NVMe, but for the past month I've been running two NVMe drives in RAID 0 with a LUKS volume on top of that and BTRFS inside of that. In both cases I've had no performance issues, no reliability issues or data loss (even when having to force shutdown the laptop due to unrelated freezes), and have been able to save and restore from snapshots with zero issues.
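Layered that way (software RAID 0, then LUKS, then btrfs on top), the setup looks roughly like this — mdadm is assumed for the RAID layer, and the device names are placeholders:

```shell
# Stripe the two NVMe drives into one array.
mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/nvme0n1p2 /dev/nvme1n1p2
# Encrypt the striped array, then open it.
cryptsetup luksFormat /dev/md0
cryptsetup open /dev/md0 cryptroot
# Put btrfs on the decrypted mapping.
mkfs.btrfs -L root /dev/mapper/cryptroot
mount /dev/mapper/cryptroot /mnt
```

Doing the stripe below LUKS means only one encrypted volume to unlock; the trade-off is that btrfs sees a single device and can't self-heal from its own redundancy.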
butter-fs[1] would not be the destination FS I would have chosen, but such an effort deserves kudos.
[1] ...given how broken it seemingly is; see features that are half-baked, like raid-5. But I am a ZFS snob, so don't mind me; my fs of choice has its own issues.
BTRFS has been stable for years now as long as you don't use unsupported features like the aforementioned RAID5. A properly set up btrfs system is fine for production use, though note the "properly set up" bit, as a good number of distros still don't set it up right. I suspect the latter bit is why people continue to have issues with it (which is definitely a big downside compared to something like ZFS's "no admin intervention required" policy).
Regardless, in-place conversion is specifically a feature of btrfs due to how it's designed. Since it doesn't require a lot of fixed metadata, you can convert a filesystem in place by writing the btrfs metadata into unallocated space and pointing it at the same blocks as the original fs. I think it even preserves the original fs's metadata, so you can roll back to the original filesystem for a while.
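That is the workflow `btrfs-convert` exposes; a sketch on an unmounted filesystem (device name illustrative):

```shell
btrfs-convert /dev/sdb1   # build btrfs metadata inside ext4's free space
mount /dev/sdb1 /mnt      # data blocks are referenced in place, not copied
# The original metadata is kept as a subvolume image; once satisfied:
# btrfs subvolume delete /mnt/ext2_saved
# ...or undo the whole conversion instead:
# btrfs-convert -r /dev/sdb1
```

Note that the rollback only works until you delete the saved image or rearrange the data (e.g. with a balance or defrag).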
Does any other FS on Linux provide those?
Try filling your root filesystem with dd, then remove the file and sync, reboot, and enjoy a non-booting OS ;) It's like they don't test it at all.
Main use case is Storage Spaces without a Server or Workstation edition.
You thought that'd be a swipe, but that's how the developers pronounce it.