romeovs | 4 years ago

A lot has been said about using this as an attack vector, e.g. by poisoning a victim's iPhone with an image that matches a CSAM hash.

But couldn't this also be used to circumvent the CSAM scanning, by converting images that are in the CSAM database into visually similar images that no longer match the hash? That would completely defeat the CSAM scanning Apple and others are trying to put in place and render the system moot.
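To make the evasion concrete, here's a toy sketch in Python. NeuralHash itself is a proprietary neural network, so I'm substituting the classic 8x8 "average hash" (downscale, then compare each block to the global mean) as a stand-in; the mechanics differ, but the weak point is the same: hash bits that sit near a decision threshold can be flipped by pixel changes far too small to see. The image and the `average_hash` helper below are synthetic stand-ins, not Apple's actual algorithm:

    import numpy as np

    def average_hash(img: np.ndarray) -> np.ndarray:
        """64-bit average hash of a 64x64 image: 8x8 block means vs. the global mean."""
        blocks = img.reshape(8, 8, 8, 8).mean(axis=(1, 3))
        return (blocks > blocks.mean()).flatten()

    rng = np.random.default_rng(0)
    img = rng.uniform(0, 255, size=(64, 64))  # synthetic stand-in image
    original = average_hash(img)

    # Find the block whose mean sits closest to the hash threshold (the
    # global mean) and nudge the whole block just across it. Spread over
    # 64 pixels, the change is typically well under one grey level.
    blocks = img.reshape(8, 8, 8, 8).mean(axis=(1, 3))
    t = blocks.mean()
    bi, bj = np.unravel_index(np.argmin(np.abs(blocks - t)), blocks.shape)
    push = (t - blocks[bi, bj]) + np.copysign(0.5, t - blocks[bi, bj])

    evaded = img.copy()
    evaded[bi * 8:(bi + 1) * 8, bj * 8:(bj + 1) * 8] += push
    print(f"per-pixel change: {abs(push):.3f}")
    print("hash bits flipped:", int(np.sum(average_hash(evaded) != original)))

Against the real NeuralHash an attacker would need gradient descent rather than this direct construction, but the reverse-engineered copies of the model already circulating make that practical.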

One could argue that these spoofed images could also be added to the CSAM database, but what if you spoof them to have the hashes of extremely common images (like popular memes)? Adding memes to the database would render the whole scheme unmanageable, no?
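The same trick works in the targeted direction, which also covers the "poisoning" attack from the first paragraph: craft an image whose hash equals some chosen target. A minimal sketch, again against the toy average hash rather than NeuralHash (published NeuralHash collisions used gradient descent on the extracted model instead); `force_hash` is a hypothetical helper, the images are synthetic stand-ins, and the perturbation isn't constrained to stay invisible or inside [0, 255]:

    import numpy as np

    def average_hash(img: np.ndarray) -> np.ndarray:
        """64-bit average hash of a 64x64 image: 8x8 block means vs. the global mean."""
        blocks = img.reshape(8, 8, 8, 8).mean(axis=(1, 3))
        return (blocks > blocks.mean()).flatten()

    def force_hash(img: np.ndarray, target: np.ndarray, margin: float = 1.0) -> np.ndarray:
        """Shift 8x8 blocks of img until its average hash equals `target`."""
        out = img.astype(float).copy()
        for _ in range(50):  # each push moves the global mean a bit, so iterate
            blocks = out.reshape(8, 8, 8, 8).mean(axis=(1, 3))
            t = blocks.mean()
            wrong = np.flatnonzero((blocks.flatten() > t) != target)
            if wrong.size == 0:
                break
            for k in wrong:  # push each wrong block just across the threshold
                i, j = divmod(int(k), 8)
                push = (t - blocks[i, j]) + (margin if target[k] else -margin)
                out[i * 8:(i + 1) * 8, j * 8:(j + 1) * 8] += push
        return out

    rng = np.random.default_rng(1)
    flagged = rng.uniform(0, 255, size=(64, 64))  # stand-in for a database image
    meme = rng.uniform(0, 255, size=(64, 64))     # stand-in for a common image

    spoofed = force_hash(flagged, average_hash(meme))
    print("spoofed hash == meme hash:",
          np.array_equal(average_hash(spoofed), average_hash(meme)))

Forcing a flagged image toward a harmless hash hides it; forcing harmless images toward flagged hashes is the harassment vector.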

Or am I missing something here?

So we'd end up with a system that:

1. Can't be reliably used to track actual criminal offenders (they'd just be able to hide) without rendering the whole database useless.

2. Can be used to attack anyone by making it look like they have criminal content on their iPhones.

chongli | 4 years ago

> Or am I missing something here?

Wouldn't it be easier for offenders to avoid Apple products? That requires no special computer expertise and involves no risk on their part.

romeovs | 4 years ago

That would be more effective! But that's not something Apple is trying to solve here (or could ever solve).

They are trying to prevent CSAM images from being stored and distributed using Apple products. If that goal is easily circumvented, the whole motivation for this (anti-)feature becomes invalid.

In a way, this architecture could even make Apple products more attractive to CSAM distributors, since they now have a known way to fly under the radar (something that is arguably harder/riskier on other image-sharing platforms, where the matching happens server-side).

One reasonable counter-strategy for Apple would be to constantly fine-tune the NeuralHash algorithm in the hope of catching more and more offenders. If that works reasonably well, it might deter criminals from the platform, because an image that flies under the radar now might not fly under the radar in the future.

NB. I'm not trying to say Apple is doing the right thing here, especially since the above arguments put the efficacy of this architecture under scrutiny.

the_other | 4 years ago

Allegedly, other image-sharing services already do CSAM detection after upload. Switching handset OS removes whatever image-level defences the abusers may employ, so they might face a higher risk using those other services.

argvargc | 4 years ago

What we're describing at this point is effectively a system that automatically flags users as potential criminals based on something as manipulable as a filename.