top | item 28448746


MatthewWilkes | 4 years ago

From the article "WhatsApp can read some of your messages if the recipient reports them."

Is this surprising? Any third party can read part of an e2e encrypted communication if one of the participants forwards it.



nerdponx|4 years ago

"AI" running on every client can automatically flag messages and send them to moderators.

> Most can agree that violent imagery and CSAM should be monitored and reported; Facebook and Pornhub regularly generate media scandals for not moderating enough. But WhatsApp moderators told ProPublica that the app’s artificial intelligence program sends moderators an inordinate number of harmless posts, like children in bathtubs. Once the flagged content reaches them, ProPublica reports that moderators can see the last five messages in a thread.

TekMol|4 years ago

The problem here is that the third party controls the software on both ends of the communication, and that software can send the messages to that third party without the participants knowingly triggering it.

The article says that by reporting a user, the software on the side of the reporting user silently sends data to WhatsApp. The reporting user does not know what data is sent.

contravariant|4 years ago

Yeah it's not really too surprising, except that maybe the scale of what gets shared with Facebook is a bit unclear.

I'm not too sure at what point the artificial intelligence program gets involved though.

artiszt|4 years ago

'A bit unclear' as in only half understood, or as in simply unknown?!

ectopod|4 years ago

There seem to be two things happening.

When a user reports a post it is (unsurprisingly) forwarded to the moderators.

Additionally, there is some kind of AI CSAM detector, which automatically forwards posts.

In both cases, it also forwards the last five messages in the thread (the flagged one plus the four before it) to the moderators.

Jtsummers|4 years ago

> Seated at computers in pods organized by work assignments, these hourly workers use special Facebook software to sift through streams of private messages, images and videos that have been reported by WhatsApp users as improper and then screened by the company’s artificial intelligence systems.

> Instead, WhatsApp reviewers gain access to private content when users hit the “report” button on the app, identifying a message as allegedly violating the platform’s terms of service. This forwards five messages — the allegedly offending one along with the four previous ones in the exchange, including any images or videos — to WhatsApp in unscrambled form, according to former WhatsApp engineers and moderators. Automated systems then feed these tickets into “reactive” queues for contract workers to assess.

From the actual ProPublica report. If their published understanding is correct, E2EE is not broken; rather, end users, who are themselves one of the ends of the E2EE conversation, send the decrypted content to be moderated. The AI bit is a filter that reduces the amount of content passed on to human moderators.
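The flow those quotes describe can be sketched in a few lines. This is a hypothetical illustration, not WhatsApp's actual code: the names `Message` and `build_report` are invented, and the point is only that the reporting client already holds plaintext, so forwarding five messages needs no break in the transport's E2EE.

```python
from dataclasses import dataclass

@dataclass
class Message:
    sender: str
    plaintext: str  # already decrypted locally: the client is an E2EE endpoint

def build_report(thread: list[Message], reported_index: int) -> list[Message]:
    """Collect the reported message plus up to four preceding ones.

    Because the reporting client already holds the plaintext, forwarding
    these five messages does not require breaking the transport's E2EE.
    """
    start = max(0, reported_index - 4)
    return thread[start:reported_index + 1]

thread = [Message("alice", f"msg {i}") for i in range(10)]
report = build_report(thread, reported_index=7)
assert len(report) == 5 and report[-1].plaintext == "msg 7"
```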

From near that second quote:

> Artificial intelligence initiates a second set of queues — so-called proactive ones — by scanning unencrypted data that WhatsApp collects about its users and comparing it against suspicious account information and messaging patterns (a new account rapidly sending out a high volume of chats is evidence of spam), as well as terms and images that have previously been deemed abusive.

That part is AI driven, but my reading is that the moderators do not get access to the encrypted data (the actual messages), only the behavior patterns, and from those make a determination of what to do.
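A "proactive" queue like the one quoted above can be driven purely by unencrypted metadata, never message contents. A toy sketch, with the account fields and thresholds entirely invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class AccountMetadata:
    account_age_hours: float
    messages_last_hour: int
    distinct_recipients_last_hour: int

def looks_like_spam(meta: AccountMetadata) -> bool:
    """Flag new accounts rapidly sending a high volume of chats, matching
    the article's example of a proactive-queue trigger. Thresholds are
    made up; real systems would tune them against labelled data."""
    return (meta.account_age_hours < 24
            and meta.messages_last_hour > 100
            and meta.distinct_recipients_last_hour > 50)

assert looks_like_spam(AccountMetadata(2, 500, 200))
assert not looks_like_spam(AccountMetadata(720, 500, 200))  # old account
```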

barbazoo|4 years ago

Correct me if I'm wrong but unless the "AI CSAM detector" is running on the client, it simply cannot be e2e encrypted.

spullara|4 years ago

It looks like the AI stuff applies to the group content, which is not E2E.

barbazoo|4 years ago

> But WhatsApp moderators told ProPublica that the app’s artificial intelligence program sends moderators an inordinate number of harmless posts, like children in bathtubs. Once the flagged content reaches them, ProPublica reports that moderators can see the last five messages in a thread.

It's not just when a recipient reports them, it seems, but also when they have been flagged by their algorithm. If that is true, the claim that the conversation is e2e encrypted simply cannot hold, unless the algorithm runs on the client.

KaiserPro|4 years ago

Just think about the sheer number of people that report stuff.

Given that Facebook has fewer than 1k moderators, do you honestly think they'd just let the moderators sift through everything manually?

Obviously you'd classify stuff first: checking against known images is easy. Classifying new images is a lot harder, plus the ethics of training and labelling a dataset for accurate detection is pretty hard, and almost impossible to do legally.

I suspect the next best thing is detecting nudity and the age of the subject, and taking the hit that you're going to prioritise a lot of malicious reports rather than genuine ones.
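The "checking against known images" step is essentially a set lookup on image hashes. A toy sketch: production systems use perceptual hashes (e.g. Microsoft's PhotoDNA) that survive re-encoding and resizing, whereas the plain SHA-256 used here only matches byte-identical files and is shown purely for illustration.

```python
import hashlib

def image_hash(data: bytes) -> str:
    # Placeholder for a perceptual hash; SHA-256 only matches exact bytes.
    return hashlib.sha256(data).hexdigest()

# Pretend blocklist built from one known image's bytes.
known_bad = {image_hash(b"known-bad-image-bytes")}

def is_known_image(data: bytes) -> bool:
    return image_hash(data) in known_bad

assert is_known_image(b"known-bad-image-bytes")
assert not is_known_image(b"some-other-image")
```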

whoisjuan|4 years ago

It sounds to me like there's actually an algorithm between the report and the moderator, to control the volume of manual moderation.

What you’re describing doesn’t work with E2E encryption. I really doubt it works that way.

spywaregorilla|4 years ago

Is there any particular reason to believe it's not running on the client?

slim|4 years ago

E2E is useless if the software is not open source, especially if you don't trust the vendor.