MatthewWilkes|4 years ago
Is this surprising? Any third party can read part of an e2e encrypted communication if one of the participants forwards it.
nerdponx|4 years ago
> Most can agree that violent imagery and CSAM should be monitored and reported; Facebook and Pornhub regularly generate media scandals for not moderating enough. But WhatsApp moderators told ProPublica that the app’s artificial intelligence program sends moderators an inordinate number of harmless posts, like children in bathtubs. Once the flagged content reaches them, ProPublica reports that moderators can see the last five messages in a thread.
TekMol|4 years ago
The article says that by reporting a user, the software on the side of the reporting user silently sends data to WhatsApp. The reporting user does not know what data is sent.
bilal4hmed|4 years ago
take a look https://twitter.com/WABetaInfo/status/1435221936888483847
contravariant|4 years ago
I'm not too sure at what point the artificial intelligence program gets involved though.
ectopod|4 years ago
When a user reports a post, it is (unsurprisingly) forwarded to the moderators.
Additionally, there is some kind of AI CSAM detector, which automatically forwards posts.
In both cases, the reported message is forwarded to the moderators along with the four previous messages in the thread (five in total).
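The reporting flow described above can be sketched in a few lines. This is a hypothetical illustration, not WhatsApp's actual code: the function name, data shapes, and the idea of a local plaintext message store are all assumptions. The point is that no encryption is broken; the reporter's own device already holds the decrypted thread and simply packages recent messages.

```python
# Hypothetical sketch of client-side reporting: the reporter's device
# already has the decrypted thread, so it can bundle the offending
# message plus the four messages before it (five total, per ProPublica).

def build_report(thread, reported_index, context=4):
    """Collect the reported message and up to `context` prior messages,
    all already in plaintext on the reporting user's device."""
    start = max(0, reported_index - context)
    return {
        "reported": thread[reported_index],
        "context": thread[start:reported_index],
    }

thread = ["msg1", "msg2", "msg3", "msg4", "msg5", "spam"]
report = build_report(thread, 5)
# report holds "spam" plus the 4 preceding messages.
```

The bundle would then be uploaded to the moderation service over an ordinary authenticated channel, entirely outside the E2EE session.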
Jtsummers|4 years ago
> Instead, WhatsApp reviewers gain access to private content when users hit the “report” button on the app, identifying a message as allegedly violating the platform’s terms of service. This forwards five messages — the allegedly offending one along with the four previous ones in the exchange, including any images or videos — to WhatsApp in unscrambled form, according to former WhatsApp engineers and moderators. Automated systems then feed these tickets into “reactive” queues for contract workers to assess.
From the actual ProPublica report. If their published understanding is correct, E2EE is not broken; rather, users, who are themselves one of the ends of the E2EE conversation, are sending the decrypted content off to be moderated. The AI bit is a filter to reduce the amount of content passed on to human moderators.
From near that second quote:
> Artificial intelligence initiates a second set of queues — so-called proactive ones — by scanning unencrypted data that WhatsApp collects about its users and comparing it against suspicious account information and messaging patterns (a new account rapidly sending out a high volume of chats is evidence of spam), as well as terms and images that have previously been deemed abusive.
That part is AI driven, but my reading is that the moderators do not get access to the encrypted data (the actual messages), only the behavior patterns, and from those they make a determination of what to do.
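The "proactive" queue described in that quote runs on unencrypted account metadata, not message contents. A minimal sketch of that kind of heuristic, with invented field names and thresholds (the quote only gives the example of a new account rapidly sending a high volume of chats):

```python
# Hypothetical metadata-only spam heuristic. The thresholds (24 hours,
# 100 messages/hour) are invented for illustration; none of this
# requires reading any message plaintext.

def looks_like_spam(account_age_hours: float, msgs_last_hour: int,
                    max_rate_for_new: int = 100) -> bool:
    # A brand-new account blasting out messages is the classic signal
    # the ProPublica quote describes.
    return account_age_hours < 24 and msgs_last_hour > max_rate_for_new

print(looks_like_spam(account_age_hours=2, msgs_last_hour=500))    # True
print(looks_like_spam(account_age_hours=720, msgs_last_hour=500))  # False
```

A check like this is consistent with E2EE precisely because it only consumes data WhatsApp already holds server-side (account age, send rates), never the encrypted payloads.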
barbazoo|4 years ago
It seems it's not just when a recipient reports them, but also when they have been flagged by their algorithm. If that's the case, the claim that the conversation is E2E encrypted simply cannot be true, unless the algorithm runs on the client.
KaiserPro|4 years ago
Given that Facebook has fewer than 1k moderators, do you honestly think that they'd just let the moderators sift through everything manually?
Obviously you'd classify content first; checking against known images is easy. Classifying new images is a lot harder, and the ethics of training and labelling a dataset for accurate detection are fraught; it's also almost impossible to do legally.
I suspect the next best thing is detecting nudity and estimating the age of the subject, and accepting that you're going to prioritise a lot of malicious reports over genuine ones.
whoisjuan|4 years ago
What you’re describing doesn’t work with E2E encryption. I really doubt it works that way.
unknown|4 years ago
[deleted]
slim|4 years ago