- Very old kernel used (3.10!), which makes me wonder how old packages like btrfs-progs are as well.
The benchmark configuration appears to be designed to evaluate these storage technologies for a KVM host. Consequently, recommending raw block devices as a tip for improving btrfs performance is contradictory.
KVM (or any overwrite workload, for that matter) is the worst possible workload for btrfs because of COW. We have ideas to address this, but honestly it's not high on the list.
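For what it's worth, a common mitigation for this overwrite penalty is to disable COW just for the VM image directory with chattr +C. A minimal sketch, with a made-up path; note that nodatacow also disables checksumming and compression for those files:

```shell
# Hypothetical image directory on a btrfs filesystem.
mkdir -p /var/lib/libvirt/images

# +C must be set while the directory is empty; files created
# inside it afterwards inherit the nodatacow attribute.
chattr +C /var/lib/libvirt/images

# Verify: a capital C should appear in the flag list.
lsattr -d /var/lib/libvirt/images

# Existing images only pick up nodatacow if copied in fresh:
# cp --reflink=never old.img /var/lib/libvirt/images/
```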
This is more than a year old, and the hardware it was tested on wasn't even that exceptional when it was released 7 years ago. I'm kind of assuming the tester did something strange for BTRFS in particular, because his results disagree with every other benchmark out there. Using BTRFS on top of software RAID 10 is also inappropriate, as this should be done by creating a filesystem spanning all four devices.
I ran into this just recently when I upgraded my VM image storage, and BTRFS was still significantly slower on HDDs under Ubuntu 16.04. Though I only compared it to EXT4 on LVM with QCOW2 + backing files.
Not sure what you mean here. They showed that ZFS performed well on Linux. Are you saying that ZFS performs poorly on BSD and Solaris?
There is a UFS driver for Linux, but it is a reimplementation rather than a port, so its performance numbers would not be comparable. Also, it attempts to support many variants of UFS rather than just one:
Interesting. But in our case, we are running Hyper-V VMs (so, sadly, NTFS on the host), where the VMs use EXT4 and a few use BTRFS.
I'd like to see NTFS somewhere in these comparisons, just to shake some of that "grass is greener" thinking. From the amount I've read on the various file systems, I feel like there's a lot left on the table that we could get out of our hardware just by using a better one. But I've not seen a whole lot about NTFS, because it's all we've got in Windows land, I suppose.
Got a Synology NAS showing up today. Based on the results here and the fuzzing link posted several days ago (and elsewhere ITT), it looks like I'll be going with ext4 for now.
mrmondo | 9 years ago
- BTRFS not mounted with compression (compress=lzo)
- Don't use QCOW2, just don't, it's slow and you're just adding extra layers where you don't need to.
It would be interesting to see you re-run these tests using a modern kernel (say, at least 4.4) and either raw block devices or logical volumes, along with mounting BTRFS properly with the compress=lzo option.
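For reference, enabling compression is just a mount option; a sketch, assuming the filesystem lives on /dev/sdb and is mounted at /srv/vmstore (both names made up):

```shell
# Create the filesystem and mount it with lzo compression.
mkfs.btrfs /dev/sdb
mount -o compress=lzo,noatime /dev/sdb /srv/vmstore

# Or persist it in /etc/fstab:
# /dev/sdb  /srv/vmstore  btrfs  compress=lzo,noatime  0 0
```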
ryao | 9 years ago
Also, there are a large number of people that will not run a newer kernel for several years because they are on RHEL6 or RHEL7, so while newer kernels are interesting, we should not discount the results on the basis that the kernel is old. The latest ZFSOnLinux code is able to run on those kernels, so while btrfs remains stagnant there, ZFS will continue to improve.
As for rerunning the tests, using recordsize=4K and compression=lz4 on ZFS should improve its performance here too. Putting the VM images on zvols (where it would be volblocksize=4K) rather than qcow2 also would help. In ZoL, zvols are block devices.
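Those suggestions translate to something like the following sketch (the pool name "tank" and the dataset names are made up):

```shell
# Dataset tuned for qcow2/raw image files:
zfs create -o recordsize=4K -o compression=lz4 tank/vmimages

# Or a 32G zvol exposed as a block device for one guest;
# volblocksize can only be set at creation time:
zfs create -V 32G -o volblocksize=4K -o compression=lz4 tank/vm1

# In ZoL the zvol then appears as /dev/zvol/tank/vm1 and can be
# handed to KVM directly, skipping qcow2 entirely.
```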
SXX | 9 years ago
http://www.ilsistemista.net/index.php/virtualization/47-zfs-...
koolba | 9 years ago
"I've figured out a way to increase our pageviews by 10x!"
For "major" sites there's usually a printable view that consolidates things into one page. I don't see one on this page though.
MertsA | 9 years ago
That being said, I'd love to see more benchmarks of BTRFS compared to other filesystems, and on hardware that isn't so archaic. I think it's safe to say that this article is not representative of reality, as Phoronix has tons of benchmarks, none of which show anywhere near this big a difference between BTRFS and Ext4. Here's an article that's even more outdated than the parent, and it still shows BTRFS performing acceptably across the board.
https://www.phoronix.com/scan.php?page=article&item=linux_ra...
SXX | 9 years ago
Any idea if there was newer performance comparison for VM storage?
LeoPanthera | 9 years ago
ZFS on Linux in particular is not representative of ZFS on BSD or Solaris.
georgyo | 9 years ago
Love me some ZFS and it works well pretty much everywhere. OpenZFS shows good promise of keeping (bringing) the BSD, Illumos and Linux versions in line with each other.
tete | 9 years ago
BTRFS has had copy-on-write disabled here, while for ZFS that is not possible, because the whole idea of the FS is to be copy-on-write. That actually makes BTRFS look even worse compared to ZFS: BTRFS writes once instead of twice and has most of its features not working, while still performing pretty badly. But then, it's also younger.
Both ZFS and BTRFS (only know specifics of ZFS) can be configured to be better for DB workloads. On ZFS, I know a couple of people using it cause it has nice properties.
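For ZFS, the kind of configuration meant here usually looks something like the following sketch. The pool and dataset names are made up, and the right recordsize depends on the database's page size (8K matches PostgreSQL):

```shell
# Data files: match recordsize to the DB page size, compress, skip atime.
zfs create -o recordsize=8K -o compression=lz4 -o atime=off tank/pgdata

# Put the WAL on its own dataset so its sequential writes
# can be tuned separately from random table I/O.
zfs create -o recordsize=8K -o logbias=throughput tank/pgwal
```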
Anyway. You might not want to use ZFS or BTRFS for a (pure) database system when performance is the important thing (compared to data security).
What I find kind of missing is UFS because of the BSD world (it's kind of what Ext4 is in the sense of "your general purpose FS with good performance for DBs"), but okay.
And yeah, that all may sound a bit biased towards ZFS, but so far my experience with ZFS has been rather pleasant compared to BTRFS. But again, you might wanna use UFS or Ext4.
It's also kind of "wrong" to compare these file systems. It's like Redis and PostgreSQL: while you could use them for the same things, it's probably not what you want, for one reason or the other.
But then of course it's good, because you might really have a case where you want a comparison of those two things to make the right decision for your specific application. Like those cases where you say "It's slow, but it makes a lot of things easier" or "It doesn't guarantee that, but if I take care of it in the design I will only need a fraction of the resources, and the other downsides don't annoy me thaaat much".
Because the question came up: COW means copy on write, and it really means what it says. You copy the data (so you have to find and write new blocks, duplicating your data on write), which in the case of a full-blown database that does the same thing again itself (WAL, autovacuum, keeping lots of metadata, etc.) means you really shouldn't be surprised that it's a lot slower on write-heavy systems. It is more than expected.
On the other hand, because the FS does a lot of things in a similar way to a database, ZFS does metadata-related things extremely quickly (quite apart from snapshots, replication and all the other cool stuff it does), which is why some CDNs that have many small files use it. Besides just being an amazing thing for managing your data in general.
Also, if you wanna learn more about ZFS and get a good understanding of how you can run databases and other things on ZFS and really utilize it (not just gaining more performance), then I highly recommend the book FreeBSD Mastery: Advanced ZFS.
specialist | 9 years ago
https://aphyr.com/tags/Jepsen
I've been curious about ZFS, btrfs, etc. But as a layperson, I don't have the technical chops, gumption, or wherewithal to figure out what's what. Reading posts (comments) about the edge cases where they fail (data, performance, missing features) leaves me more baffled.
ryao | 9 years ago
https://github.com/torvalds/linux/blob/master/Documentation/...
It does not support the latest UFS developments in FreeBSD, NetBSD, etcetera, so its performance is also limited by the older disk format versus the newer formats used by drivers on other platforms.
Zardoz84 | 9 years ago
The only problem we had was with an uncontrolled poweroff a year ago. It looks like BTRFS had some trouble recovering from it, but we managed to restore all data from the partitions. However, the flexibility that BTRFS offers (transparent compression, increasing disk space on demand, etc.) is really nice. I hope it improves further, so we can use it without any issues.
tremon | 9 years ago
Note that ext4 on LVM can do this as well, you just need to plan for it in advance. All of my systems are configured as fs-on-lvm, even the single-disk ones. I've found it's just less hassle: I can do all storage migration or storage expansion without ever taking the machine offline.
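The kind of online expansion described here can be sketched as follows (device and volume names are made up):

```shell
pvcreate /dev/sdc                  # bring a new disk into LVM
vgextend vg0 /dev/sdc              # grow the volume group with it
lvextend -L +50G /dev/vg0/root     # grow the logical volume
resize2fs /dev/vg0/root            # ext4 grows online, no unmount needed
```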
newman314 | 9 years ago
Amazingly, I've had a ReadyNAS device last me over 10 years and here's to hoping this one lasts a similar period of time.
gervase | 9 years ago
Unfortunately, thanks to the way they've partitioned the article, this mirror is not particularly useful.