top | item 44582773

djray | 7 months ago

What you gain in performance, you somewhat sacrifice in flexibility, at least in comparison with OpenEXR.

OpenEXR was designed for modularity, allowing efficient access to individual layers or channels. This is crucial in VFX workflows where only specific passes (like normals or diffuse) might be needed at any one time. This access is possible because EXR stores channels separately and supports tiled or scanline-based access.

The custom compression method Aras proposes - using meshoptimizer on 16K-pixel chunks, followed by zstd as a second compression step - achieves significantly faster compression and decompression than EXR ZIP, HTJ2K, or lossless JPEG XL. However, it trades away random access and requires decompressing the entire image at once, which increases memory usage. Individual frames for a VFX production can be multiple gigabytes (e.g. dozens of 32-bit layers at 4K resolution).
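The two-stage shape of that pipeline - a data-reorganizing filter followed by a general-purpose compressor - can be sketched in a few lines. This is a minimal illustration, not the actual scheme: a simple byte-shuffle filter stands in for meshoptimizer's filtering, and zlib stands in for zstd.

```python
import struct
import zlib

def shuffle(data: bytes, stride: int = 4) -> bytes:
    # Reorganize an array of 4-byte floats into byte planes: all first
    # bytes, then all second bytes, and so on. Sign/exponent bytes of
    # neighboring pixels are near-identical, so the planes compress far
    # better than the interleaved original.
    return bytes(data[j] for i in range(stride)
                 for j in range(i, len(data), stride))

def unshuffle(data: bytes, stride: int = 4) -> bytes:
    # Invert the byte-plane transform.
    n = len(data) // stride
    out = bytearray(len(data))
    k = 0
    for i in range(stride):
        for j in range(n):
            out[j * stride + i] = data[k]
            k += 1
    return bytes(out)

# A smooth horizontal gradient, packed as 1024 little-endian 32-bit floats.
pixels = struct.pack("<1024f", *(i / 1024.0 for i in range(1024)))

# Stage 1: filter; stage 2: general-purpose compression.
compressed = zlib.compress(shuffle(pixels), 9)
roundtrip = unshuffle(zlib.decompress(compressed))
assert roundtrip == pixels
```

The design point is the same as in the post: the filter itself does not shrink the data, it just rearranges it so the second-stage compressor sees longer runs of similar bytes.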

The author's proposal is still compelling, and I wonder if a variant could find its way into some sort of archival format.

aras_p | 7 months ago

(author here) I think yes and no -- while it is true that the "MOP" quick test I tried does not allow accessing/decompressing individual EXR channels, it does allow accessing "chunks" of the image. Unlike, say, EXR ZIP, which splits the image into 16-scanline chunks where each is independent, this splits it into 16K-pixel chunks that are completely independent of each other. So you can access a chunk without decompressing the whole image.
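The chunk-level random access described above can be sketched as follows. This is a hypothetical simplification: zlib stands in for the meshoptimizer+zstd pair, the chunk list is kept in memory rather than as an on-disk offset table, and a single 4-byte channel is assumed.

```python
import zlib

CHUNK_PIXELS = 16 * 1024   # independent-chunk size from the post
BYTES_PER_PIXEL = 4        # one 32-bit float channel (assumed)

def compress_chunked(image: bytes) -> list[bytes]:
    # Compress each 16K-pixel chunk independently, so any one chunk
    # can later be decompressed without touching the rest.
    step = CHUNK_PIXELS * BYTES_PER_PIXEL
    return [zlib.compress(image[i:i + step])
            for i in range(0, len(image), step)]

def read_chunk(chunks: list[bytes], index: int) -> bytes:
    # Random access: inflate exactly one chunk.
    return zlib.decompress(chunks[index])

# A fake 64K-pixel single-channel image (256 KiB of repeating bytes).
image = bytes(range(256)) * 1024
chunks = compress_chunked(image)

step = CHUNK_PIXELS * BYTES_PER_PIXEL
assert read_chunk(chunks, 2) == image[2 * step:3 * step]
```

The trade-off is visible in the structure: access granularity is the chunk, not the scanline or the channel, but memory use is bounded by one chunk rather than the whole frame.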

That said, if someone were to investigate ideas like this further, then yes, making "layers" within EXR independently decompressible would be a thing to look at. Making individual "channels" independent, perhaps not so much; it is very likely that if someone needs, say, the "indirect specular" layer, then they need all the channels inside it (R, G, B).