fwn | 7 months ago

> CSAM is also a list of hashes for some of the worst CP video/images out there. It doesn't read anything, just hash matching.

The list presumably contains CSAM hashes. However, it could also include hashes for other types of content.

AFAIK the actual scope of the list at any given time cannot be fully evaluated by independent third parties, and there is no obvious reason why the list could not be extended to cover other types of content in the future.

Once it is in place, why not search for documents that are known to facilitate terrorism? What about human trafficking? Drug trafficking? Antisemitic memes spring to mind. Or maybe memes critical of some government, a war, etc.

All of this is possible because, despite the CSAM framing, it is essentially a censorship/surveillance infrastructure. One that is neutral with regard to content.
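
To make the content-neutrality point concrete, here is a minimal sketch of hash-list matching (the digest and file are hypothetical, and real deployments use perceptual hashes such as PhotoDNA rather than plain SHA-256 so that re-encoded copies still match):

    import hashlib

    # A hypothetical hash list. The scanner only sees opaque digests;
    # nothing in this code knows or checks what content they denote.
    HASH_LIST = {
        "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
    }

    def flagged(path: str) -> bool:
        """Hash a local file and report whether it is on the list."""
        with open(path, "rb") as f:
            return hashlib.sha256(f.read()).hexdigest() in HASH_LIST

Swap in a different set of digests and the exact same code flags terrorist manuals, leaked documents, or memes. The machinery is indifferent.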

EagnaIonat | 7 months ago

CSAM scanning has been around for at least 15 years. All service providers are required to do it by law.

You are absolutely correct with your "what-ifs", and this underlines the need for more oversight and transparency.

The process (my knowledge is a few years old) is that service providers or law enforcement agencies from various countries can submit files to the CSAM database.

The database is owned by the National Center for Missing & Exploited Children (NCMEC).

Once they receive a file, they review it and confirm that it meets the standard for the database, document its entry, create a hash, and add that to the database. After that, the file is destroyed.

This whole process requires multiple approvals, and numerous humans review the files before the hash goes into the database.

Also, every hash has a chain of custody, so in the event of an investigation they know exactly who was involved in putting that hash into the database.
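
If it helps, here is a minimal sketch of what such a record might look like (the field names are mine, not NCMEC's actual schema):

    from dataclasses import dataclass, field
    from datetime import datetime, timezone

    @dataclass
    class HashEntry:
        digest: str           # hash of the reviewed (and then destroyed) file
        submitted_by: str     # the provider or agency that sent the file
        reviewers: list[str]  # everyone who approved it for inclusion
        added_at: datetime = field(
            default_factory=lambda: datetime.now(timezone.utc))

An investigation can then walk back from any digest to the people who vouched for it.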

So it's possible to submit an image that is not what the CSAM database is intended for, but the chances of it getting into the database are next to nothing. To add to this, service providers can be sued for submitting invalid files.

fwn | 7 months ago

> CSAM scanning has been around for at least 15 years. All service providers are required to do it by law.

That is true for scanning in the cloud, but it's important not to conflate it with client-side scanning. The distinction between cloud and local processing is foundational: a provider scanning uploads on its own servers is very different from a user's own device scanning their files before anything is sent anywhere. Collapsing that boundary would mark a serious shift in how surveillance infrastructure operates.

> Once they receive a file, they review it and confirm that it meets the standard for the database, document its entry, create a hash, and add that to the database. After that, the file is destroyed.

That is already a structural problem: If the original is destroyed, how can independent parties verify that database entries still correspond to the intended legal and ethical scope? This makes meaningful oversight functionally impossible.
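
To spell out why: cryptographic hashes are one-way, so an auditor holding only the list cannot reconstruct or inspect what any entry matches. A toy illustration (hypothetical digest):

    import hashlib

    # An entry from the list. The source file was destroyed, and a
    # SHA-256 digest cannot be inverted, so there is no way to examine
    # what this entry actually matches.
    ENTRY = "2cf24dba5fb0a30e26e83b2ac5b9e29e1b161e5c1fa7425e73043362938b9824"

    def matches(path: str) -> bool:
        """The only possible check: re-hash a file you already possess."""
        with open(path, "rb") as f:
            return hashlib.sha256(f.read()).hexdigest() == ENTRY

"Every entry is in scope" is therefore a claim one can only take on trust, never verify.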

Even if centralizing control in a state-funded NGO were considered acceptable (which is already questionable), locating that NGO in the US (subject to US law and politics!) is a serious issue. Why should, say, the local devices of German citizens be scanned against a hash list maintained under US jurisdiction?

> So it's possible to submit an image that is not what the CSAM database is intended for, but the chances of it getting into the database are next to nothing. To add to this, service providers can be sued for submitting invalid files.

Procedural safeguards are good, but they don't solve the underlying problem: the entire system hinges on policy decisions that can change. A single legislative change is all it takes to expand the list’s scope. The current process may seem narrow today, but it offers no guarantees about tomorrow.

We’ve seen this pattern countless times: surveillance powers are introduced under the pretext of targeting only the most heinous crimes, but once established, they’re gradually repurposed for a wide range of far less serious offenses. It is the default playbook.