top | item 47074184

Ukv | 12 days ago

> It can and should be removed in minutes because AI can evaluate the “bad” image quickly and a human moderator isn’t required anymore.

CSAM can be detected through hashes or a machine-learning image classifier (with some false positives), whereas whether an image was shared nonconsensually seems like it'd often require context that is not in the image itself, possibly contacting the parties involved.
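The hash-matching half of that point can be sketched in a few lines. This is a minimal illustration, assuming a pre-populated set of known hashes; production systems use perceptual hashes (e.g. PhotoDNA) that tolerate re-encoding and resizing, whereas a plain SHA-256 only catches byte-identical copies:

```python
import hashlib

# Hypothetical database of known-image hashes; in practice this would be
# supplied by an organization such as NCMEC, not built locally.
KNOWN_HASHES = {
    hashlib.sha256(b"known-bad-image-bytes").hexdigest(),
}

def matches_known_hash(image_bytes: bytes) -> bool:
    """Return True if the image's exact SHA-256 is in the known set."""
    return hashlib.sha256(image_bytes).hexdigest() in KNOWN_HASHES

print(matches_known_hash(b"known-bad-image-bytes"))   # True
print(matches_known_hash(b"some-other-image-bytes"))  # False
```

Note this is exactly the kind of check that can run automatically with no context: the decision depends only on the bytes, which is why it doesn't transfer to the consent question.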

voidUpdate | 12 days ago

I would not want to be the supervisor who has to review CSAM positives to check for false ones

thaumasiotes | 12 days ago

> CSAM can be detected through hashes or a machine-learning image classifier (with some false positives), whereas

Everything can be detected "with some false positives". If you're happy with "with some false positives", why would you need any context?

pjc50 | 12 days ago

Indeed. It seems that the process being described is some kind of one-stop portal, operated by or for OFCOM or the police, where someone can attest "this is a nonconsensual intimate image of me" (hopefully in some legally binding way!), triggering a cross-system takedown. Not all that dissimilar to the DMCA.