What I'm concerned about is a system that flags me for a crime using a database I can't audit and matching mechanisms with an entirely too high false positive rate.
Because the database can't be audited by anyone but a select group, we have to trust that it only contains actual bad images. I do not trust that such databases don't also contain images that are merely embarrassing to powerful/connected people, and I do not trust that they don't contain false positives.
The sort of people who are super zealous about a topic aren't simultaneously super rational and objective about that topic. There's a non-zero probability that those databases contain lewd yet entirely legal images that the submitters just didn't like.
Because of the false positive rate, a photo of my dog might trigger an alarm, and then my phone sends an automated message to the police. I'm told by proponents that there will be some manual review. I then have to hope the local DA doesn't have an election coming up and want to push a "tough on crime" message, charging me with a crime despite that review.
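To make the false-positive concern concrete, here's a quick base-rate sketch. The numbers are entirely hypothetical (per-photo false match rate, daily scan volume), but they show how even a seemingly tiny error rate turns into a steady stream of innocent people flagged at scale:

```python
# Base-rate sketch with HYPOTHETICAL numbers -- not any vendor's real figures.
photos_scanned_per_day = 1_000_000_000  # assumed: photos scanned daily at platform scale
false_positive_rate = 1e-6              # assumed: one false match per million photos

# Expected number of innocent photos flagged every day
false_flags_per_day = photos_scanned_per_day * false_positive_rate
print(false_flags_per_day)  # 1000.0
```

A "one in a million" error rate sounds negligible per photo, but at a billion photos a day it means roughly a thousand innocent flags daily, each one a person depending on that manual review working perfectly.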
In short, these scanning systems require far too much unearned trust. They also present a slippery slope thanks to the incendiary nature of the topic: today it's CSAM, but what undesirable content will the systems be used for tomorrow? Such systems require trust in the stewards of today and tomorrow. Do you want people of the opposite ideology to be in charge of such systems? Do you trust they'll never be abused? Do you trust that well-meaning people never make mistakes?
I do not trust any of those things. I'm not worried about myself doing actual bad things; I'm worried that demonstrable false positive rates will ruin my life with the mere accusation of doing something bad.