None of this makes the system useless or harmful. Besides, what is being attacked here is not Apple's production algorithm, and the actual hash list Apple will use is not accessible to the device.
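On that last point: the device only ships with a blinded version of the database, so it cannot even tell locally which entries a photo matches. A toy sketch of the blinding idea in Python; this is a deliberate simplification, not Apple's actual private-set-intersection protocol, and the modulus and key handling are assumptions for illustration:

    import hashlib
    import secrets

    P = 2**255 - 19  # illustrative prime modulus, not the production group

    def hash_to_group(neural_hash: bytes) -> int:
        # Map a raw perceptual hash into the group we exponentiate in.
        return int.from_bytes(hashlib.sha256(neural_hash).digest(), "big") % P

    server_key = secrets.randbelow(P - 2) + 2  # secret exponent, never leaves the server

    def blind(neural_hash: bytes) -> int:
        # The device receives pow(H(nh), key, P), never the raw hash itself.
        return pow(hash_to_group(neural_hash), server_key, P)

    # Without server_key, the device cannot compute blind(x) for a candidate
    # image x, so it cannot run local membership tests against the database.
    blinded_db = {blind(bytes([i]) * 12) for i in range(3)}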
yayr|4 years ago
If anybody with enough motivation can modify an arbitrary harmless image to have the same NeuralHash as a "tracked database" image, this will create too many false positives. Too many false positives make the algorithm useless.
If someone has even more motivation and the means to put those images onto your device via social engineering, exploits, or maybe even features, and you then become the target of a criminal investigation in whatever jurisdiction you happen to be in at the time, this makes the algorithm harmful.
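For what it's worth, the collisions people produced against the extracted model were found with gradient-based optimization of roughly this shape: treat the hash network as differentiable and nudge a small perturbation until the signs of the output logits match a target hash. A minimal PyTorch sketch, assuming a stand-in hash_model that returns pre-sign logits; the name, loss, and hyperparameters are illustrative, not the actual tooling:

    import torch

    def forge_collision(hash_model, benign_img, target_bits,
                        steps=1000, lr=0.01, eps=0.05):
        # Perturb benign_img until sign(hash_model(img)) equals target_bits,
        # a float tensor of 0s and 1s representing the target hash.
        delta = torch.zeros_like(benign_img, requires_grad=True)
        opt = torch.optim.Adam([delta], lr=lr)
        target_signs = target_bits * 2.0 - 1.0  # {0,1} bits -> {-1,+1} signs
        for _ in range(steps):
            adv = (benign_img + delta).clamp(0, 1)
            logits = hash_model(adv)
            if ((logits > 0).float() == target_bits).all():
                break  # every output bit already matches the target hash
            # Hinge loss pushes each logit past zero in the target direction.
            loss = torch.relu(0.1 - target_signs * logits).sum()
            opt.zero_grad()
            loss.backward()
            opt.step()
            with torch.no_grad():
                delta.clamp_(-eps, eps)  # keep the change visually negligible
        return (benign_img + delta).detach().clamp(0, 1)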
FabHK|4 years ago
> this will create too many false positives. Too many false positives make the algorithm useless
Apple can (and will) run a second, independent algorithm server-side to filter out further false positives.
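That second check is exactly what breaks forged collisions: an image crafted against one hash function is vanishingly unlikely to also collide under a second, independently designed one. A sketch of the idea, where both hash functions and databases are hypothetical placeholders:

    def confirm_match(image, neural_hash, second_hash,
                      neural_hash_db, second_hash_db):
        # Flag for human review only if two unrelated perceptual hashes
        # both hit the database; a collision forged against the on-device
        # hash alone fails the independent server-side check.
        if neural_hash(image) not in neural_hash_db:
            return False
        return second_hash(image) in second_hash_db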
> If someone has even more motivation and the means to put those images onto your device via social engineering, exploits, or maybe even features
Such an attacker could just plant actual CSAM directly; the hash collision has no bearing on that scenario. If, however, it is hash collisions you're worried about, forged images would be caught during the manual review.