top | item 28104024

Apple's New CSAM Protections May Make iCloud Photos Bruteforceable

233 points | NTroy | 4 years ago | crypto.stackexchange.com | reply

81 comments

[+] jonathanmayer|4 years ago|reply
(Context: I teach computer security at Princeton and have a paper at this week's Usenix Security Symposium describing and analyzing a protocol that is similar to Apple's: https://www.usenix.org/conference/usenixsecurity21/presentat....)

The proposed attack on Apple's protocol doesn't work. The user's device adds randomness when generating an outer encryption key for the voucher. Even if an adversary obtains both the hash set and the blinding key, they're just in the same position as Apple—only able to decrypt if there's a hash match. The paper could do a better job explaining how the ECC blinding scheme works.
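A grossly simplified sketch of that decrypt-only-on-match property. Everything here is a stand-in: HMAC models the ECC blinding, SHA-256 models NeuralHash, XOR models the real cipher, and unlike the real protocol the hash is sent in the clear. The point it illustrates is just that the outer key mixes fresh device randomness with the blinded hash, so even someone holding the blinded database decrypts nothing without a match:

```python
import hashlib, hmac, os

SERVER_SECRET = os.urandom(32)  # server-side blinding secret (kept in an HSM in reality)

def blind(h: bytes) -> bytes:
    # toy stand-in for ECC blinding: keyed hash under the server secret
    return hmac.new(SERVER_SECRET, h, hashlib.sha256).digest()

banned = {hashlib.sha256(b"known-bad-image").digest()}
blinded_db = {blind(h) for h in banned}  # what devices actually receive

def make_voucher(image: bytes):
    h = hashlib.sha256(image).digest()  # stand-in for NeuralHash
    r = os.urandom(32)                  # fresh per-voucher device randomness
    # outer key depends on BOTH the blinded hash and the randomness
    key = hmac.new(blind(h), r, hashlib.sha256).digest()
    payload = bytes(a ^ b for a, b in zip(b"image metadata".ljust(32), key))
    return h, r, payload

def server_open(h, r, payload):
    # server re-derives the key only when the hash matches a banned entry
    if blind(h) not in blinded_db:
        return None
    key = hmac.new(blind(h), r, hashlib.sha256).digest()
    return bytes(a ^ b for a, b in zip(payload, key)).rstrip()
```

Running `server_open` on a voucher for a non-banned image yields `None`; only a banned-image voucher decrypts.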

[+] jobigoud|4 years ago|reply
> only able to decrypt if there's a hash match

This is one of the concerns in the OP: have an AI generate millions of variations of a certain kind of image and check the hashes. In that case it boils down to how common neural-hash false positives are.
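A toy illustration of that brute-force concern, using a hypothetical truncated hash in place of NeuralHash: the rate at which generated variants collide with a target is governed entirely by the hash's effective bit-width (real NeuralHash is far wider, but perceptual rather than cryptographic):

```python
import hashlib, os

def tiny_phash(data: bytes, bits: int = 12) -> int:
    # hypothetical stand-in for NeuralHash, truncated to `bits` bits;
    # the real hash is much wider, so real collision rates are far lower
    digest = hashlib.sha256(data).digest()
    return int.from_bytes(digest, "big") >> (256 - bits)

target = tiny_phash(b"some banned image")

def hunt(trials: int) -> int:
    # generate random "variants" and count how many collide with the target;
    # expected hits are roughly trials / 2**bits
    hits = 0
    for _ in range(trials):
        if tiny_phash(os.urandom(64)) == target:
            hits += 1
    return hits
```

With a 12-bit hash, 100,000 random variants collide roughly 24 times on average, which is the scaling intuition behind the "generate millions of images" attack.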

[+] amelius|4 years ago|reply
There may be another attack.

Given some CP image, an attacker could perhaps morph it into an innocent-looking image while maintaining the hash, then spread this image on the web and incriminate everybody.

[+] kfprt|4 years ago|reply
It won't be long until these types of systems are mandated. Combined with a hardware root of trust, it's not inconceivable that modifying your hardware not to report home will also be made a crime. It never stops with CSAM either; pretty soon it's terrorism and whatever vague new definition they use.

The focus on CSAM seems extremely hypocritical when authorities make so little effort to stop ongoing CSA. I would encourage everyone to research the Sophie Long case. Unless there is image or video evidence, the police make little effort to investigate CSA because it's resource intensive.

[+] stjohnswarts|4 years ago|reply
Total surveillance is definitely the end goal of policing forces. It's in the very nature of getting their job done (what better way to catch criminals than a computer constantly scanning everyone's every move?), and it's why people need to keep pushing back against these "think of the children" scapegoats they use to get a foot in the door and gain more control.
[+] zimpenfish|4 years ago|reply
> It never stops with CSAM either, pretty soon it's terrorism and whatever vague new definition they use.

But PhotoDNA has been scanning cloud photos (Google, Dropbox, Microsoft, etc.) to detect CSAM content for a decade now, and this "pretty soon it's terrorism" slippery slope hasn't yet manifested, has it?

If the slope was going to be slippery, wouldn't we have seen some evidence of that by now?

[+] joe_the_user|4 years ago|reply
Regardless of whether this attack works or not, you'd assume this scheme creates a wider attack surface against pictures in iCloud and against iCloud users. One attack I could imagine is a hacker uploading child porn to a compromised device to trigger immediate enforcement against its user (and sure, maybe there are more controls involved, but would you carry around a very well-protected, well-designed hand grenade in your wallet just because you're told it will only explode if you're bad?).
[+] selsta|4 years ago|reply
How is this iCloud specific? You could do the same with Google Photos or OneDrive.
[+] mnd999|4 years ago|reply
Or even a hash collision with a banned image. Actually, if such collisions could be generated and widely distributed, this whole thing could fall apart pretty quickly.
[+] jl6|4 years ago|reply
For some reason, after reading the initial reporting on this system, I thought it ran against any photos on your iPhone, but now that I've read the actual paper, it seems it only applies to photos destined to be uploaded to iCloud? So users can opt out by not using iCloud?
[+] foerbert|4 years ago|reply
Much of the discussion is about how trivial it would be for Apple to start scanning any photos on the phone at a later date.

Right now they can bill this as doing client-side what they currently do server-side. Later, they can say they are simply applying the same "protections" to all photos instead of merely the ones being uploaded to iCloud.

[+] tandav|4 years ago|reply
Friendly reminder: as long as iOS source code is closed, all privacy claims are backed only by trust. They can easily do whatever they want if you're not compiling from source.
[+] RegnisGnaw|4 years ago|reply
Also don’t upload to MS, Google, Dropbox as they also scan for CSAM.
[+] NTroy|4 years ago|reply
If Apple is to keep their word about guaranteeing the privacy of non-CSAM photos (which this whole discussion is about them not doing a very good job of), then this technical specification means the identification process can only run on photos stored in iCloud. That said, other photos across your device are still monitored in a different way: for example, Apple will scan photos that you send or receive via iMessage to automatically detect whether they're nudes, and if you're underage, block them and send a notification to your parents.
[+] sharikone|4 years ago|reply
Did you ever experience that you turned some setting off but it was "accidentally" turned on again after some update/reboot?
[+] dathinab|4 years ago|reply
As far as I know, Apple plans to put up two systems: one focused on phones of people under 13, which filters more or less any photo and uses AI to detect explicit photos, and one which looks for known child-pornographic photos and for now seems not to apply to all photos.

But I haven't looked too closely into it.

[+] avianlyric|4 years ago|reply
Yeah pretty much. Another way of thinking about it, is that to upload an image to iCloud, your phone must provide a cryptographic safety voucher to prove the image isn’t CSAM.
[+] shuckles|4 years ago|reply
The question presumes the database leak also comes with the server side secret for blinding the CSAM database, which is unlikely (that’s not how HSMs work) and would be a general catastrophe (it would leak the Neural Hashes of photos in the NCMEC database, which are supposed to remain secret).
[+] gorgonzolachz|4 years ago|reply
Yeah, I've worked with HSMs in the past and to say that it's a challenge to get key material out of them is an understatement. That said, a lot of this depends on the architecture surrounding the HSM - if the key material leaves the HSM at any point, you've basically increased your attack surface from an incredibly secure box to whatever your surrounding interfaces are. At Apple's scale, I have to imagine it's more economical to have some kind of envelope encryption - maybe this is the right attack vector for a malicious actor to hit?
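A minimal sketch of the envelope-encryption pattern described above, under stated assumptions: the keystream construction is a toy (real systems use AES-GCM or similar), and `hsm_master_key` is an ordinary variable here, whereas in production the master key would never leave the HSM. The sketch shows why it widens the attack surface: the per-object data key exists in plaintext outside the HSM while in use.

```python
import hashlib, hmac, os

def keystream_xor(key: bytes, data: bytes) -> bytes:
    # toy stream cipher: HMAC-SHA256 in counter mode (illustration only,
    # not a vetted construction)
    out = bytearray()
    for i in range(0, len(data), 32):
        block = hmac.new(key, i.to_bytes(8, "big"), hashlib.sha256).digest()
        out.extend(a ^ b for a, b in zip(data[i:i + 32], block))
    return bytes(out)

hsm_master_key = os.urandom(32)  # would live only inside the HSM

def encrypt_envelope(plaintext: bytes):
    data_key = os.urandom(32)  # per-object data key, generated OUTSIDE the HSM
    wrapped = keystream_xor(hsm_master_key, data_key)  # wrap happens inside the HSM
    return wrapped, keystream_xor(data_key, plaintext)

def decrypt_envelope(wrapped: bytes, ciphertext: bytes):
    data_key = keystream_xor(hsm_master_key, wrapped)  # unwrap inside the HSM
    return keystream_xor(data_key, ciphertext)
```

Only the wrap/unwrap steps touch the HSM; everything handling `data_key` is the "surrounding interfaces" a malicious actor could hit.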
[+] NTroy|4 years ago|reply
The question doesn't presume that, as the secret for blinding the CSAM database would only be helpful if a third party were also looking to see which accounts contained CSAM.

In this case, the question assumes that an attacker would more or less be creating their own database of hashes and derived keys (to search for and decrypt known photos and associate them with user accounts, or to bruteforce unknown photos), and would therefore have no need to worry about acquiring the key used for blinding the CSAM hash database.
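A sketch of that attacker-side dictionary, with all names illustrative: given a precomputed hash-to-image table, no blinding key is needed to link leaked image hashes back to accounts.

```python
import hashlib

# hypothetical attacker-built dictionary: hash of a known photo -> that photo
known_images = [b"leaked-photo-A", b"leaked-photo-B", b"protest-flyer"]
lookup = {hashlib.sha256(img).digest(): img for img in known_images}

def identify(leaked_vouchers):
    # leaked_vouchers: iterable of (account, image_hash) pairs obtained
    # from some hypothetical breach; match them against the dictionary
    hits = []
    for account, h in leaked_vouchers:
        if h in lookup:
            hits.append((account, lookup[h]))
    return hits
```

Any account whose hash appears in the attacker's table is associated with a known photo, which is exactly the search-and-associate scenario the question raises.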

[+] ashneo76|4 years ago|reply
Pretty soon housing your own infra and not using the mandated govt phone could be made a crime.

But think of the children and the security of society. Couple that with constant monitoring of your car, and you can be monitored anywhere.

[+] kaba0|4 years ago|reply
It already is. You are only allowed to use specific wavelengths, and basically every modem is proprietary.
[+] whatever1|4 years ago|reply
Why does Apple even bother with encryption? They should just skip all of the warrant requirements etc and use their iCloud keys to unlock our content and store it unencrypted at rest.

Maybe they can also build an api so that governments can search easily for dissidents without the delays that the due process of law causes.

[+] laurent92|4 years ago|reply
Funny. The way I imagine NSA’s and FBI’s secret cooperation with Google is exactly this: Provide a search API that gives access to anything.
[+] gorgonzolachz|4 years ago|reply
Facetious as this is, I can't imagine this is anything other than Apple's endgame here.

The best of both worlds: keep advertising their privacy chops to the masses, while also allowing any and every government agency a programmatic way to hash-verify the data passing through their systems in real-time.