From the FAQ:

1.3 How do I mount the file system?

You can't… at least not today. While we have ported the majority of the ZFS code to the Linux kernel, that does not yet include the ZFS POSIX Layer (ZPL). The only interface currently available from user space is the ZVOL virtual block device.
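Since the ZPL is missing, the only way to actually use the port today is to carve a ZVOL out of a pool and put a conventional filesystem on top of it. A minimal sketch — the pool name, size, and mount point are all illustrative, and the exact device node path varies by release (`/dev/tank/vol1`, `/dev/zvol/tank/vol1`, or `/dev/zd0`):

```shell
# Create a 10 GiB ZVOL (a virtual block device backed by the pool)
zfs create -V 10G tank/vol1

# Put an ordinary Linux filesystem on the block device and mount it
mkfs.ext4 /dev/tank/vol1
mount /dev/tank/vol1 /mnt/vol1
```

You get ZFS's checksumming, snapshots, and pooled storage underneath, but the POSIX semantics come from ext4, not ZFS.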
The ZPL (the ZFS POSIX Layer) is the really awesome thing. Virtual block device access is neat, though. I wonder when someone will build a high-performance database project on top of ZVOLs in ZFS for Linux. Could be interesting.
As I recall, the FreeBSD ZFS port had ZVOL integrated into GEOM (FreeBSD block device layer) about a week after the porting effort was started. ZPL took significantly longer.
What I'd love is for ZFS's L2ARC to be ported as well. That's a killer feature for web applications. Basically, it extends your cache onto an SSD but manages that all internally: you run your database as usual on top of ZFS and let it handle all the caching, moving less-used data between SSD and disk, plus the usual filesystem caching in RAM.
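For reference, on platforms where ZFS is complete, attaching an SSD as an L2ARC device is a one-liner; a sketch, with the pool name `tank` and the device path purely illustrative:

```shell
# Add an SSD as a level-2 ARC (read cache) device to an existing pool
zpool add tank cache /dev/disk/by-id/ata-EXAMPLE-SSD

# The device then shows up under a "cache" section in the pool status
zpool status tank
```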
Though you could get similar functionality by using membase at the expense of being limited to get/set operations (it's a K/V store).
If it's as small as a single server, you probably don't need an SSD to cache access. If your db server is partitioned out, just put the whole filesystem on SSD. You can't partition the L2ARC, so you really don't want to mix web assets with database volumes: the assets will push your db out of the cache, unless you're so massively overpowered, or traffic is so low, that it didn't matter in the first place.
That's some hard won knowledge there. :-)
(You can choose to cache asset metadata only, but that tuning has major downsides itself: while it may protect your db, it's also likely to leave your caches very under-utilized. Basically it isn't a silver bullet, and it's still important to think of volumes and workloads in terms of what "spindles" they're on.)
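The metadata-only tuning mentioned above is a per-dataset property on ZFS; a sketch, with hypothetical dataset names:

```shell
# Keep asset files from flooding the SSD cache: cache their metadata only
zfs set secondarycache=metadata tank/assets

# Let the database dataset use the L2ARC fully (this is the default)
zfs set secondarycache=all tank/db
```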
I've heard about btrfs being "just around the corner" for a good while now. ZFS on Nexenta and even FreeBSD is pretty robust. It's a great storage platform. I wouldn't let waiting on a stable btrfs make my SAN/NAS decisions for me.
The other concern in that realm is that when you're centralizing storage to lower costs, boost performance, and increase reliability, you don't want software issues or corruption to take down your entire business.
ZFS has a painful enough learning curve in that environment as it is. I wouldn't trust btrfs there until it's been stable for a couple of years. And outside of that environment, there are plenty of good, stable alternatives in the DAS space. ZFS is nice there, and I'm sure btrfs would be as well, but that's not the bread and butter for these systems.
I still see btrfs as years behind zfs as measured strictly by maturity (not features), and therefore consider it to be at least a few years away from being usable in situations where an advanced file system really matters (mission critical databases, 30+TB file systems, etc.)
ZFS has been around 4-5 years, yet in the last few months we hit a severe data-loss bug (ZIL corruption) and a service-affecting cache-performance bug (a math error in the ARC cache maintenance routine).
I'd assume that btrfs will have similar teething pains.
Actually you can't even distribute binaries at all.
Btrfs is quite far from being "around the corner". A solid, dependable filesystem must be in the wild for a couple of years to be called "production ready".
ZFS is not a distributed filesystem. What you can do is export it over NFS, for example.
It also has replication features with send/receive, but I don't think this port supports them yet.
Edit: I notice that a lot of people think ZFS is a cluster/distributed filesystem. I don't understand where they get this idea.
If it isn't backed by Red Hat, it is useless. Red Hat has its amazing triple develop/test/bugfix pipeline: Fedora development -> Fedora -> RHEL. And even Fedora is quite stable.
To the down-voters: consider how much testing a standalone package gets compared to a package that is part of the Fedora Project — in number of installations, community involvement, and Fedora's team and infrastructure.
So this is a port, and therefore CDDL-licensed and not able to be merged into the mainline Linux kernel, right?
Is ZFS still a hot commodity with btrfs just around the corner?