dvasdekis | 11 months ago

Would this work have the potential to speed up encoding/decoding of the PAR2 format[0]? The format is widely used to protect datasets against bitrot and small losses, but is held back by the significant compute overhead when dealing with large datasets.

[0] https://en.wikipedia.org/wiki/Parchive

gjm11 | 11 months ago

Unless I am badly misunderstanding the paper, this is about doing matrix multiplication on matrices whose entries are large integers. So far as I can tell, PAR2 doesn't do any large-number arithmetic at all, so I don't think this work will have any relevance for implementations of PAR2.
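(For context, here is a rough sketch of what PAR2's arithmetic actually looks like: Reed-Solomon coding over GF(2^16), where every value fits in 16 bits and a field "multiply" is a carry-less polynomial product, nothing like big-integer multiplication. The reduction polynomial 0x1100B below is the one I believe the PAR2 spec uses; the schoolbook shift-and-XOR loop is my own illustration, real implementations use lookup tables.)

```python
def gf16_mul(a: int, b: int) -> int:
    """Multiply two elements of GF(2^16), shift-and-XOR style.

    Reduction modulo x^16 + x^12 + x^3 + x + 1 (0x1100B), which I
    believe is the polynomial named in the PAR2 specification.
    """
    result = 0
    while b:
        if b & 1:
            result ^= a          # addition in GF(2^n) is XOR, no carries
        a <<= 1
        if a & 0x10000:          # degree reached 16: reduce
            a ^= 0x1100B
        b >>= 1
    return result & 0xFFFF
```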

[EDITED to add:] Reading the paper a bit more carefully, the above isn't exactly right. It is about matrix multiplication on matrices whose entries are "large", but they're interested in using the idea for hardware design, where "large" may mean not-very-large: building, say, 16-bit multipliers out of 8-bit multipliers.
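(To make that hardware idea concrete, here's a sketch, my own illustration rather than the paper's construction: Karatsuba's classic trick builds a 16-bit multiply out of three 8-bit-operand multiplies instead of the obvious four.)

```python
def mul16_karatsuba(x: int, y: int) -> int:
    """Multiply two 16-bit integers using three small multiplications.

    Split x = xh*2^8 + xl and y = yh*2^8 + yl. Then
      x*y = hh*2^16 + ((xh+xl)*(yh+yl) - hh - ll)*2^8 + ll
    where hh = xh*yh and ll = xl*yl.
    """
    xh, xl = x >> 8, x & 0xFF
    yh, yl = y >> 8, y & 0xFF
    hh = xh * yh                        # 8x8 -> 16-bit product
    ll = xl * yl                        # 8x8 -> 16-bit product
    # Note: xh+xl and yh+yl are 9-bit values; a hardware design would
    # handle that extra carry bit separately, but the idea is the same.
    mid = (xh + xl) * (yh + yl) - hh - ll
    return (hh << 16) + (mid << 8) + ll
```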

But for software implementations (and I doubt anyone will be making special-case hardware for PAR2), small multiplications are not generally faster than large ones. Maybe you could do more of them in parallel using SIMD, but I bet the extra additions, subtractions, and general overhead would outweigh any gains in that setting.