jmillikin | 1 month ago

  > It seems that JPEG can be decoded on the GPU [1] [2]
Sure, but you wouldn't want to. Many algorithms can be executed on a GPU via CUDA/ROCm, but the use cases for on-GPU JPEG/PNG decoding (mostly AI model training? maybe some sort of giant megapixel texture?) are unrelated to anything you'd use CBZ for.

For a comic book the performance-sensitive part is loading the current and adjoining pages, which can be done fast enough to appear instant on the CPU. If the program does bulk loading then it's for thumbnail generation which would also be on the CPU.

Loading compressed comic pages directly to the GPU would be if you needed to ... I dunno, have some sort of VR library browser? It's difficult to think of a use case.

  > According to smhasher tests [3] CRC32 is not limited by memory bandwidth.
  > Even if we multiply CRC32 scores x4 (to estimate 512 bit wide SIMD from 128
  > bit wide results), we still don't get close to memory bandwidth.
Your link shows CRC32 at 7963.20 MiB/s (~7.77 GiB/s), which indicates it's either very old or isn't measuring pure CRC32 throughput (I see references to the C++ STL in the logs).

Look at https://github.com/corsix/fast-crc32 for example, which measures 85 GB/s (GB, GiB, eh close enough) on the Apple M1. That's fast enough that I'm comfortable calling it limited by memory bandwidth on real-world systems. Obviously if you solder a Raspberry Pi to some GDDR then the ratio differs.
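For reference, a throughput measurement along these lines can be sketched in a few lines of Python. This uses zlib's crc32 as a stand-in (fast-crc32 is a C library, and zlib's implementation is much slower than the vectorized kernels it benchmarks); the buffer size is arbitrary:

```python
# Rough throughput measurement for a CRC32 implementation. zlib's crc32
# is NOT the vectorized fast-crc32 kernel -- this only sketches how such
# a benchmark is taken.
import time
import zlib

buf = b"\x00" * (64 * 1024 * 1024)  # 64 MiB test buffer

start = time.perf_counter()
crc = zlib.crc32(buf)
elapsed = time.perf_counter() - start

gib_per_s = len(buf) / elapsed / 2**30
print(f"crc32 = {crc:#010x}, ~{gib_per_s:.1f} GiB/s")
```

A single pass over a buffer this large mostly removes cache effects, which is what you want when arguing about memory-bandwidth limits.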

  > The 32 bit hash of CRC32 is too low for file checksums. xxhash is definitely
  > an improvement over CRC32.
You don't want to use xxhash (or crc32, or cityhash, ...) for checksums of archived files; that's not what they're designed for. Use them as the key function for hash tables. That's why their output is 32 or 64 bits: they're designed to fit into a machine integer.

File checksums don't have the same size limit so it's fine to use 256- or 512-bit checksum algorithms, which means you're not limited to xxhash.
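To make the distinction concrete, here's a small Python sketch (the file contents are invented for illustration) contrasting a machine-word hash used as a dict key with a 256-bit digest used as an archival checksum:

```python
# Contrast: short hash for a hash-table key vs. long checksum for archival.
# The data below is made up for illustration.
import hashlib

data = b"page-001.jpg contents (hypothetical)"

# Hash-table use: CPython's built-in hash() is a 64-bit SipHash variant,
# sized to fit a machine integer -- the job xxhash-style functions target.
table = {hash(data): data}

# Archival use: output size isn't constrained, so a 256-bit digest is fine.
checksum = hashlib.sha256(data).hexdigest()
print(len(checksum))  # 64 hex chars == 256 bits
```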

  > Why would you need to use a cryptographic hash function to check integrity
  > of archived files? A quality non-cryptographic hash function will detect
  > corruption due to things like bit-rot, bad RAM, etc. just the same.
I have personally seen bitrot and network transmission errors that were not caught by xxhash-type hash functions, but were caught by higher-level checksums. The performance properties of hash functions used for hash table keys make those same functions less appropriate for archival.

  > And why is 256 bits needed here? Kopia developers, for example, think 128
  > bit hashes are big enough for backup archives [4].
The checksum algorithm doesn't need to be cryptographically strong, but if you're using software written in the past decade then SHA256 is supported everywhere by everything, so you might as well use it by default unless there's a compelling reason not to.

For archival you only need to compute the checksums on file transfer and/or periodic archive scrubbing, so the overhead of SHA256 vs SHA1/MD5 doesn't really matter.
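A scrub pass of that sort is trivial to sketch; the filenames and contents below are hypothetical:

```python
# Minimal periodic-scrub sketch: recompute SHA-256 per archived file and
# compare against a stored manifest. Names and bytes are made up.
import hashlib

files = {"issue-001.cbz": b"fake archive bytes"}
manifest = {name: hashlib.sha256(data).hexdigest() for name, data in files.items()}

for name, data in files.items():
    ok = hashlib.sha256(data).hexdigest() == manifest[name]
    print(name, "ok" if ok else "CORRUPT")
```

Since this only runs at transfer time or on a periodic schedule, the per-byte cost of SHA256 is amortized to near zero.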

I don't know what kopia is, but according to your link it looks like their wire protocol involves each client downloading a complete index of the repository content, including a CAS identifier for every file. The semantics would be something like Git? Their list of supported algorithms looks reasonable (blake, sha2, sha3) so I wouldn't have the same concerns as I would if they were using xxhash or cityhash.

zigzag312 | 1 month ago

> which can be done fast enough to appear instant on the CPU

Big scanned PDFs can be problematic and could benefit from more efficient processing (if the hardware had support for such a technique).

> Your link shows CRC32 at 7963.20 MiB/s (~7.77 GiB/s) which indicates it's either very old or isn't measuring pure CRC32 throughput

It may not be the fastest implementation of CRC32, but it was also run on an old Ryzen 5 3350G @ 3.6 GHz. Below the table are results from different hardware: on an Intel i7-6820HQ, CRC32 achieves 27.6 GB/s.

> measures 85 GB/s (GB, GiB, eh close enough) on the Apple M1. That's fast enough that I'm comfortable calling it limited by memory bandwidth on real-world systems.

That looks incredibly suspicious, since the Apple M1 has a maximum memory bandwidth of 68.25 GB/s [1].

> I have personally seen bitrot and network transmission errors that were not caught by xxhash-type hash functions, but were caught by higher-level checksums. The performance properties of hash functions used for hash table keys make those same functions less appropriate for archival.

Your argument is meaningless without more details. xxhash supports 128-bit output, which I doubt would fail to catch an error in your case.

SHA256 is an order of magnitude or more slower than non-cryptographic hashes. In my experience, the archival process usually has a big enough effect on performance to care about it.
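The gap is easy to see with a crude comparison. xxhash isn't in Python's standard library, so zlib's crc32 stands in for the fast side here (real xxh3 would typically be faster still); absolute numbers depend on hardware:

```python
# Crude speed comparison: non-cryptographic checksum vs SHA-256.
# zlib.crc32 substitutes for xxhash; results vary with hardware.
import hashlib
import time
import zlib

buf = b"\xab" * (64 * 1024 * 1024)  # 64 MiB

def gib_per_s(fn):
    t0 = time.perf_counter()
    fn(buf)
    return len(buf) / (time.perf_counter() - t0) / 2**30

crc_rate = gib_per_s(zlib.crc32)
sha_rate = gib_per_s(lambda b: hashlib.sha256(b).digest())
print(f"crc32: {crc_rate:.2f} GiB/s, sha256: {sha_rate:.2f} GiB/s")
```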

I'm beginning to suspect your primary reason for disliking xxhash is that it's not a de facto standard like CRC or SHA. I agree that this is a big one, but you keep implying there's more to why xxhash is bad. Maybe my knowledge is lacking; care to explain? Why wouldn't 128-bit xxhash be more than enough for checksums of files? AFAIK the only thing it doesn't do is protect you against tampering.

> I don't know what kopia is, but according to your link it looks like their wire protocol involves each client downloading a complete index of the repository content, including a CAS identifier for every file. The semantics would be something like Git? Their list of supported algorithms looks reasonable (blake, sha2, sha3) so I wouldn't have the same concerns as I would if they were using xxhash or cityhash.

Kopia uses hashes for block-level deduplication. What would be the issue if they used 128-bit xxhash instead of a 128-bit cryptographic hash like they do now (if we assume we don't need protection from tampering)?

[1] https://en.wikipedia.org/wiki/Apple_M1
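For what it's worth, the accidental-collision risk of a 128-bit hash can be bounded with the birthday approximation; the block count below is a made-up example:

```python
# Birthday bound: P(any collision among n uniform b-bit hashes) ~ n^2 / 2^(b+1).
def collision_prob(n, bits=128):
    return n * (n - 1) / 2 ** (bits + 1)

# A petabyte of 1 MiB blocks is about 2**30 blocks.
p = collision_prob(2**30)
print(p)  # on the order of 1e-21 -- accidental collisions are a non-issue
```

This bound only covers random corruption, not adversarially constructed blocks.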

minitech | 1 month ago

> What would be the issue if they used 128-bit xxhash instead of a 128-bit cryptographic hash like they do now (if we assume we don't need protection from tampering)?

Malicious block hash collisions, where the colliding block was introduced in some way other than tampering (e.g. storing a file created by someone else).