whisps|4 years ago
False positives of the kind you're thinking of aren't possible--it's checking for hashes that match known bad images, not running machine learning/image recognition to decide whether the photo you just took contains bad content. The issue is that there's nothing stopping Apple/the government from marking anything it finds objectionable beyond CSAM--like anti-government free speech--as a Bad Image.
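Roughly, the check has the shape of a list lookup (illustrative sketch only, not Apple's actual pipeline--real systems use perceptual hashes like PhotoDNA/NeuralHash rather than plain SHA-256):

    import hashlib

    # Hashes of already-known images, distributed by the matching service.
    known_image_hashes = {"<hex digests of known images>"}

    def file_hash(path):
        with open(path, "rb") as f:
            return hashlib.sha256(f.read()).hexdigest()

    def is_on_list(path):
        # Flags only if this exact image is already in the database; a photo
        # you just took can't match unless someone put its hash on the list.
        return file_hash(path) in known_image_hashes

Which is also why the list contents matter: whatever gets added to known_image_hashes gets flagged, CSAM or not.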
throwprvcyaway|4 years ago
If this were the FINAL solution to catch every last child pornographer in one glorious roundup, MAYBE it would be worth the massive risk of authoritarian abuse, but this algorithm sounds stupidly easy for the deviants to get around while still throwing our collective privacy under the bus.
ratww|4 years ago
> In the same way that PhotoDNA can match an image that has been altered to avoid detection, PhotoDNA for Video can find child sexual exploitation content that’s been edited or spliced into a video that might otherwise appear harmless
https://en.wikipedia.org/wiki/PhotoDNA
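PhotoDNA itself is proprietary, but the general perceptual-hashing idea it relies on can be sketched with a simple dHash plus a Hamming-distance threshold (illustrative only; assumes Pillow is installed):

    from PIL import Image

    def dhash(path, size=8):
        # Shrink to a (size+1) x size grayscale grid and record, per pixel,
        # whether it is brighter than its right-hand neighbour: 64 bits.
        img = Image.open(path).convert("L").resize((size + 1, size))
        px = list(img.getdata())
        bits = 0
        for row in range(size):
            for col in range(size):
                left = px[row * (size + 1) + col]
                right = px[row * (size + 1) + col + 1]
                bits = (bits << 1) | (1 if left > right else 0)
        return bits

    def hamming(a, b):
        return bin(a ^ b).count("1")

    def matches(candidate_hash, known_hashes, threshold=5):
        # Re-encoding, resizing, or light cropping flips only a few bits,
        # so altered copies still land within the distance threshold.
        return any(hamming(candidate_hash, h) <= threshold for h in known_hashes)

Matching by distance instead of exact equality is what lets altered or re-encoded copies still be found.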
unknown|4 years ago
[deleted]
acuozzo|4 years ago
Many of the hashes provided by the NCMEC are MD5. There are going to be false positives left and right.
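For comparison, matching against an MD5 list is just an exact digest lookup (the entry below is made up). Any benign edit changes the digest completely, while MD5 collisions--two different files with the same digest--are practically constructible, which is where collision-based false matches would come from:

    import hashlib

    md5_list = {"5f4dcc3b5aa765d61d8327deb882cf99"}  # hypothetical entry

    def md5_of(path):
        # Stream the file in chunks so large images don't need to fit in memory.
        h = hashlib.md5()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 16), b""):
                h.update(chunk)
        return h.hexdigest()

    def is_listed(path):
        return md5_of(path) in md5_list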