Native ZFS VDEV for Object Storage (OpenZFS Summit)

127 points | suprasam | 1 month ago | zettalane.com

31 comments

kev009 | 1 month ago

I am curious about the inverse: using the dataset layer to implement some higher-level things, like objects for an S3-compatible store or pages directly for an RDBMS. I seem to remember hearing rumblings about that, but it is hard to dredge up.

p_l | 1 month ago

ZFS-Lustre operates this way.

The main issue with opening it up further is the lack of a DMU-level userland API, especially given how syscall-heavy it could get (and io_uring might be locked out due to politics).

magicalhippo | 1 month ago

There was some work on this presented at one of the OpenZFS summits. However, it was never submitted upstream. Not sure if it remains a private feature or if they hit some roadblocks.

In theory it should be a pretty good match considering internally ZFS is an object store.

suprasam | 1 month ago

For RDBMS pages on object storage, you might be thinking of Neon.tech. They built a custom page server for PostgreSQL that stores pages directly on S3.

infogulch | 1 month ago

How suitable would this be as a zfs send target to back up your local zfs datasets to object storage?

suprasam | 1 month ago

Yes, this is a core use case that fits ZFS nicely. See slide 31, "Multi-Cloud Data Orchestration", in the talk.

Not only backup but also DR site recovery.

  The workflow (a command sketch follows below):

  1. Server A (production): zpool on local NVMe/SSD/HDD
  2. Server B (same data center): another zpool backed by objbacker.io → remote object storage (Wasabi, S3, GCS)
  3. zfs send from A to B; the data lands in object storage

  Key advantage: no continuously running cloud VM. You're just paying for object storage (cheap), not compute (expensive). Server B is in your own data center; it can be a VM too.

  For DR, when you need the data in the cloud:

  - Spin up a MayaNAS VM only when needed
  - Import the objbacker-backed pool; the data is already there
  - Use it, then shut down the VM
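A minimal sketch of that send workflow, assuming a pool named tank on Server A and an objbacker-backed pool named objpool on Server B (names here are illustrative, not from the talk):

  # Server A: take a snapshot for a consistent point-in-time image
  zfs snapshot tank/data@backup1

  # Replicate to Server B; the received blocks land in the
  # objbacker-backed pool and flow on to remote object storage
  zfs send tank/data@backup1 | ssh serverB zfs recv objpool/data

  # Later runs only ship the delta between snapshots
  zfs snapshot tank/data@backup2
  zfs send -i @backup1 tank/data@backup2 | ssh serverB zfs recv objpool/data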

p_l | 1 month ago

Quite probably should work just fine.

The secret is that ZFS actually implements an object storage layer on top of block devices and only then implements ZVOL and ZPL (ZFS POSIX filesystem) on top of that.

A "zfs send" is essentially a serialized stream of objects sorted by dependency (objects later in stream will refer to objects earlier in stream, but not the other way around).

PunchyHamster | 1 month ago

FS metrics without a random-IO benchmark are near meaningless. Sequential read is the best case for basically every file system, and in this case it's essentially "how fast can you get things from S3".
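For anyone who wants to measure that, a hedged fio sketch of the low-queue-depth random-read case (path and sizes are placeholders; make the file larger than RAM so the ARC can't serve everything from cache):

  # 4K random reads at queue depth 1 -- the case sequential numbers hide
  fio --name=randread --filename=/tank/fio.dat --size=64G \
      --rw=randread --bs=4k --iodepth=1 --ioengine=psync \
      --runtime=60 --time_based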

suprasam | 1 month ago

It is all part of the ZFS architecture, with two tiers:

- Special vdev (SSD): all metadata plus small blocks (configurable threshold, typically <128KB)
- Object storage: bulk data only

If the workload is random 4K small data blocks, that's SSD latency, not S3 latency.
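For reference, that split is driven by the stock OpenZFS special vdev and the special_small_blocks property (pool and device names are illustrative):

  # Add a mirrored special vdev for metadata and small blocks
  zpool add tank special mirror /dev/nvme0n1 /dev/nvme1n1

  # Route data blocks of 128K or smaller to the special vdev;
  # larger blocks go to the main (object-backed) vdevs
  zfs set special_small_blocks=128K tank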

gigatexal | 1 month ago

Yup. IIRC low-queue-depth random reads are king for desktop usage.

yjftsjthsd-h | 1 month ago

Could someone possibly compare this to https://www.zerofs.net/nbd-devices ("zpool create mypool /dev/nbd0 /dev/nbd1 /dev/nbd2")?

suprasam | 1 month ago

ZeroFS doesn't exploit ZFS's strengths: there's no native ZFS support, just an afterthought of NBD + the SlateDB LSM. It's good for small burst workloads where everything is kept in memory for LSM batch writes, but once compaction hits, all bets are off on performance, and I'm not sure about crash consistency since it's playing with fire there. A ZFS special vdev + ZIL on SSD is much safer, with no need for an LSM: MayaNAS keeps ZFS metadata at SSD speed while large blocks get their throughput from high-latency S3 at network speed.
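(The ZIL half of that setup is likewise a stock knob; a one-line sketch with an illustrative device name:)

  # Keep the ZFS intent log on local SSD so synchronous writes never wait on S3
  zpool add tank log /dev/nvme2n1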

0x457 | 1 month ago

I know I'm missing something, but I can't figure out what: why not just one device?

digiown | 1 month ago

Exciting stuff, but will this be merged? I remember another similar effort that went nowhere because the company decided not to proceed with it.

curt15 | 1 month ago

How does this relate to the work presented a few years ago by the ZFS devs using S3 as object storage? https://youtu.be/opW9KhjOQ3Q?si=CgrYi0P4q9gz-2Mq

magicalhippo | 1 month ago

Just going by the submitted article, it seems very similar in what it achieves, but implemented slightly differently. As I recall, the Delphix solution did not use a character device to communicate with the user-space S3 service, and it relied on a local NVMe-backed write cache to make 16 kB blocks performant by coalescing them into large objects (10 MB, IIRC).

This solution instead seems to rely on 1 MB blocks stored directly as objects, avoiding the intermediate caching and indirection layer: a larger number of objects, but less local overhead.

Delphix's rationale for 16 kB blocks was that their primary use case was PostgreSQL database storage. I presume this is geared toward other workloads.

And, importantly since we're on HN, Delphix's user-space service was written in Rust as I recall it; this one uses Go.
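(For context, 1 MB blocks are within stock OpenZFS dataset tunables, e.g. recordsize; whether this solution uses that knob or sizes objects at the vdev layer isn't stated:)

  # Use 1M records so each ZFS block maps onto one reasonably sized object
  zfs set recordsize=1M tank/data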

tw04 | 1 month ago

AFAIK it was never released, and it used FUSE; it wasn't native.

terinjokes | 1 month ago

It doesn't look like the source has been released, nor any documentation beyond this blog post and presentation. Is there a plan to open this up beyond what is used by MayaNAS and Zettalane's cloud offerings?

doktor2u | 1 month ago

That’s brilliant! Always amazed at how zfs keeps morphing and stays relevant!

glemion43 | 1 month ago

I do not get it.

Why would I use zfs for this? Isn't the power of zfs that it's a filesystem with checksums and stuff like encryption?

Why would I use it for s3?

mustache_kimono | 1 month ago

> Why would I use it for s3?

You have it the wrong way around. Here, ZFS uses many small S3 objects as the storage substrate, rather than physical disks. The value proposition is that this should be decidedly cheaper, and perhaps more durable, than EBS.

See s3backer, a FUSE implementation of something similar: https://github.com/archiecobbs/s3backer
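For contrast, a rough sketch of the s3backer approach (bucket name and sizes are placeholders): it exposes the bucket as a single large virtual file, which ZFS can then use as a file-backed vdev.

  # Mount the bucket as one big virtual file named "file"
  s3backer --blockSize=1m --size=1t my-bucket /mnt/s3b

  # Build a pool on top of that file (file vdevs are supported)
  zpool create s3pool /mnt/s3b/file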

See prior in-kernel ZFS work by Delphix, which AFAIK was closed down by Delphix management: https://www.youtube.com/watch?v=opW9KhjOQ3Q

BTW this appears to be closed too!

bakies | 1 month ago

I've got a massive storage server built that I want to run the S3 protocol on. It's already running ZFS. This is exactly what I want.

zfs share already handles SMB and NFS.
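(For reference, that sharing comes from the stock sharenfs/sharesmb dataset properties; dataset name illustrative:)

  # Export a dataset over NFS and SMB via ZFS's built-in share management
  zfs set sharenfs=on tank/data
  zfs set sharesmb=on tank/data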