notepad0x90|3 months ago
Thanks for the reply, but you are exactly the audience my post is for. Because you say that, we will lose what little fragments of privacy and freedom we have left.
Apple tried and made good progress. They had bugs that could have been resolved, but your insistence that it couldn't be done caused too much of an uproar.
You can have a system that flags illicit content with some confidence level and have a human review that content. You can require that any model or heuristic used be publicly logged and audited. You can anonymously flag that content to reviewers, and when a human deems it actually illicit, the hash or some other signature of the content can be published globally to reveal the devices and owners of those devices. You can presume innocence (such as a parent taking a pic of their kids bathing) and question suspects discreetly without an arrest. You can require cops to build multiple sufficient points of independently corroborated evidence before arresting people.
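Concretely, here is a minimal sketch of that flag-review-publish flow. Every name and threshold below is hypothetical, and a real deployment would use a perceptual hash (Apple's system used NeuralHash) rather than a cryptographic hash like SHA-256, which breaks on any re-encoding:

```python
import hashlib
from dataclasses import dataclass
from typing import Optional

CONFIDENCE_THRESHOLD = 0.95   # hypothetical cutoff; below it, nothing leaves the device
published_signatures = set()  # stand-in for a globally published, auditable registry

@dataclass
class Flag:
    content: bytes
    score: float  # classifier confidence in [0, 1], from a publicly audited model

def scan(content: bytes, score: float) -> Optional[Flag]:
    """Step 1: flag only when the (audited) model clears a confidence threshold."""
    return Flag(content, score) if score >= CONFIDENCE_THRESHOLD else None

def human_review(flag: Flag) -> bool:
    """Step 2: anonymous referral to a trained human reviewer.
    Stubbed out here; a real queue would carry no account identifiers."""
    return False  # presume innocence by default

def confirm_and_publish(flag: Flag) -> None:
    """Step 3: only after a human confirms does a signature get published for matching."""
    if human_review(flag):
        published_signatures.add(hashlib.sha256(flag.content).hexdigest())
```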
These are just a few possibilities I came up with in the minute it took to type this post. Better and more thoroughly thought-out solutions can be developed if the problem is taken seriously and funded well.
However, your response of "Yes." is materially false, and lawmakers will catch on to that and discredit everything the privacy community has been advocating. Even simple heuristics that don't use ML models can have a higher "true positive" rate for identifying criminal activity than eyewitness testimony, which is used to convict people of serious crimes. And I suspect you meant security, not privacy, because as I mentioned, for privacy, humans can review before a decision is made to search for the confirmed content across devices.
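To make "true positive" rate concrete: TPR = TP / (TP + FN), while the number that matters for the innocent-parent scenario is precision, TP / (TP + FP), which is dominated by the base rate. A quick sketch with openly invented numbers shows why the human-review step is load-bearing:

```python
# All numbers here are invented purely to show the arithmetic; not real data.
population = 1_000_000   # items scanned
prevalence = 0.0001      # fraction actually illicit
tpr = 0.99               # true positive rate of the heuristic
fpr = 0.01               # false positive rate of the heuristic

actual = population * prevalence                 # 100 truly illicit items
true_positives = actual * tpr                    # ~99 correctly flagged
false_positives = (population - actual) * fpr    # ~9,999 innocent items flagged

precision = true_positives / (true_positives + false_positives)
print(f"precision: {precision:.4f}")  # ~0.0098: ~99% of flags point at innocent people
```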
least|3 months ago
> Because you say that, we will lose what little fragments of privacy and freedom we have left.
You seem to think that adding systems like this will placate governments around the world, but that is not the case. We have already conceded far more than we ever should have to government surveillance for a false sense of security.
> You can have a system that flags illicit content with some confidence level and have a human review that content. You can require that any model or heuristic used be publicly logged and audited. You can anonymously flag that content to reviewers, and when a human deems it actually illicit, the hash or some other signature of the content can be published globally to reveal the devices and owners of those devices. You can presume innocence (such as a parent taking a pic of their kids bathing) and question suspects discreetly without an arrest. You can require cops to build multiple sufficient points of independently corroborated evidence before arresting people.
What about this is privacy preserving?
> However, your response of "Yes." is materially false, and lawmakers will catch on to that and discredit everything the privacy community has been advocating. Even simple heuristics that don't use ML models can have a higher "true positive" rate for identifying criminal activity than eyewitness testimony, which is used to convict people of serious crimes. And I suspect you meant security, not privacy, because as I mentioned, for privacy, humans can review before a decision is made to search for the confirmed content across devices.
It's not "materially false." Bringing a human into the picture doesn't do anything to preserve privacy. If, like in your example, a parent's family photos with their children flag the system, you have already violated the person's privacy without just cause, regardless of whether the people reviewing it can identify the person or not.
You cannot have a system that scans everyone's stuff indiscriminately and have it not be a violation of privacy. There is a reason law enforcement must get permission from the courts to search and/or surveil suspects: it is supposed to be a protection against abuse.
admash|3 months ago
Except that it is not materially false. Only in a perfect society will your “system that flags illicit content” not become a system that flags whatever some authoritarian regime considers threatening, and subverting public logging and auditing is similarly trivial for a motivated authoritarian. All your hypothetical solutions rely on humans, who are notoriously susceptible to being influenced by either money or being beaten with pipes, and on corporations, which are notoriously susceptible to being influenced by things that affect their stock price.
Pleyel’s corollary to Murphy’s law is that all compromises to individuals’ rights made for the sake of security will eventually be used to further deprive them of those rights.
(I especially liked the line “You can require cops to build multiple sufficient points of independently corroborated evidence before arresting people.”)