
A Few Thoughts on Ray Ozzie’s “Clear” Proposal

104 points | Hagelin | 8 years ago | blog.cryptographyengineering.com

72 comments

[+] motohagiography|8 years ago|reply
The work I did on mobile encryption was framed thusly:

- Deriving a key for all devices from a single key creates a single, catastrophic failure mode for the solution where all devices become vulnerable together. As soon as customers figure this out, nobody serious will adopt it because they can't afford to accept that known risk exposure.

- We're assuming that the HSM we're using doesn't have a bias in its key generation RNG to limit the real key space, because if I were an intel agency, that's probably the first lever I would pull.

- The entropy of the additional derivation components we can source from the individual device to locally diversify keys is really limited, and some really smart people are going to be reversing our code. Apple (unrelated to my own work; I never worked for anyone affiliated with them) effectively relied on limiting the number of attempts in hardware to mitigate this risk.
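The limited-entropy point can be sketched concretely. In this hypothetical Python sketch (the salt, passcode, and iteration count are all illustrative), an attacker who has extracted the derived key material simply enumerates the tiny passcode space offline, which is exactly why hardware attempt limiting becomes the fallback:

```python
import hashlib

# Hypothetical device-unique salt: public to anyone holding the device.
SALT = b"device-serial-0001"

def derive_key(passcode: str) -> bytes:
    # Iteration count kept low for illustration; real designs use far
    # more, but iterations only linearly slow a 10,000-entry search.
    return hashlib.pbkdf2_hmac("sha256", passcode.encode(), SALT, 100)

target = derive_key("4821")  # key material extracted from the device

# Offline brute force: with no hardware attempt counter, every candidate
# 4-digit passcode can be tried against the extracted key.
recovered = next(p for p in (f"{i:04d}" for i in range(10_000))
                 if derive_key(p) == target)
print(recovered)  # -> 4821
```

Ten thousand PBKDF2 evaluations finish in well under a second on commodity hardware, which is why the entropy of the extra derivation components, not the KDF, is the binding constraint.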

Personally, I think the Ozzie proposal is a red herring to give the feds rhetorical leverage by providing their side with something few people understand, but can get behind politically because it's sufficiently complex as to be "our" magic vs. "their" magic. This is to drown out technical objections and make the problem a political one where they can use their leverage.

As the author (Green) notes, we can design some pretty crazy things, and if the feds came out and said, "build us a ubiquitous surveillance apparatus, or at least give us complete sovereign and executive control of all electronic information," that is a technically solvable problem, but in the US a legally intractable one. So instead, they want those effective powers without the overt mandate.

[+] shakna|8 years ago|reply
> It literally refers to a giant, ultra-secure vault that will have to be maintained individually by different phone manufacturers

We can't even trust manufacturers to provide updates in most cases. Placing that much trust in them is nothing short of lunacy.

[+] DoctorOetker|8 years ago|reply
I don't see anything new in the alleged proposal, this is the same old crypto war. This is "just" key escrow.

One might as well propose having the manufacturers build in the government's public key (and auto-brick the phone on use) so that the phone can detect whether it is really the government reading it.

Another note:

"Ozzie’s proposal relies fundamentally on the ability of manufacturers to secure massive amounts of extremely valuable key material against the strongest and most resourceful attackers on the planet. "

This is not true: the phone encrypts the user's passcode against the manufacturer's public key. If the government tries to read the phone, it will get the encrypted passcode (useless on its own) and send it to the manufacturer, who decrypts the passcode. A single private key is not a massive amount of information. Not that it changes anything about the protection needs: whether it's a piece of paper containing, say, 4096 bits (512 bytes), or, in Matthew Green's misinterpretation, billions of 512-byte entries (half a terabyte) on a single HDD, they both have the same value. The whole code base needs similar protection anyway: the bootloaders are already signed by the manufacturer.
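A minimal sketch of that flow, using toy RSA parameters (the primes and passcode here are illustrative and wildly insecure; a real design would use proper key sizes and padding). The point is only that the phone stores an encrypted passcode while the private key stays with the manufacturer:

```python
# Toy RSA with tiny primes -- purely illustrative, not secure.
p, q = 61, 53
n = p * q            # public modulus, 3233
e = 17               # public exponent, baked into every phone
phi = (p - 1) * (q - 1)
d = pow(e, -1, phi)  # private exponent, held only by the manufacturer

def escrow(passcode: int) -> int:
    # The phone stores only this ciphertext; the private key never
    # leaves the manufacturer's vault.
    return pow(passcode, e, n)

def manufacturer_decrypt(ciphertext: int) -> int:
    return pow(ciphertext, d, n)

c = escrow(1234)
assert c != 1234                # the phone holds only the encrypted form
print(manufacturer_decrypt(c))  # -> 1234
```

Note this supports the comment's operational point: law enforcement ships ciphertext to the manufacturer and gets a passcode back, so the private key itself need never travel.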

All this centralization is bad, leave the crypto genie out of the bottle please...

[+] kurthr|8 years ago|reply
I do prefer the idea of storing it on paper... at least it's a little easier to lock up. Even a big camera will only take a few thousand pictures before it fills up, and physical access is a lot easier to enforce.

If we make 2 billion phones a year (Apple itself is just over 200M) and you have a line printer running full blast (66 lines = 1 page per sec), you could do Apple with one printer... and the world in 10. It would be a lot of boxes of paper, though... about a box an hour.

edit: to be clear I was assuming that almost every dot in the matrix was a valid bit and there were 66 keys per page... 80 or even 132 columns at 7x5 wouldn't be enough for 4096 bits otherwise.
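A quick back-of-envelope check of those figures, under the comment's stated assumptions (66 lines = one page per second, one 4096-bit key per line; the ~3,000-sheet box size is my own assumption):

```python
SECONDS_PER_YEAR = 86_400 * 365

keys_per_second = 66                      # one key per printed line
keys_per_printer_year = keys_per_second * SECONDS_PER_YEAR

apple_phones = 200_000_000                # rough annual Apple volume
world_phones = 2_000_000_000              # rough annual world volume

print(keys_per_printer_year)              # 2,081,376,000 keys/printer-year
print(keys_per_printer_year >= apple_phones)   # -> True
print(world_phones / keys_per_printer_year)    # ~0.96 printers, flat out

# Paper: one page per second, assuming ~3,000 sheets per box.
boxes_per_hour = 3_600 / 3_000
print(boxes_per_hour)                     # -> 1.2
```

Run continuously, a single printer nearly covers world volume; the comment's factor of ten leaves margin for duty cycle and overhead, and the box-an-hour estimate checks out.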

[+] cesarb|8 years ago|reply
The main difference between this proposal and the previous ones is the bricking step, which is supposed to make it transparent when the key has been revealed. But once the key has been revealed, what prevents an attacker from replacing the main board of the phone (keeping the phone's exterior and its SIM card), and copying all the data to the new board? A non-technical user (and even most technical users) wouldn't know the difference.
[+] irq-1|8 years ago|reply
Bricking the phone works against law enforcement by only allowing raw access to the data. Even if Clear worked correctly, law enforcement couldn't open apps and see the data in the correct context. They'd have raw data files full of indexes, hashes, and cached data. Worse, apps would start to encrypt data on the client specifically to avoid Clear.

The only significant change between plain key escrow and Clear (bricking the phone) would defeat the usefulness of Clear.

[+] allenz|8 years ago|reply
Apps could potentially work in read-only mode. Plus it's pretty easy to design a tool to pretty print iMessages given raw data, and that alone would be very useful for law enforcement.
[+] weinzierl|8 years ago|reply
> [..] and keep the secret key in a “vault” where hopefully it will never be needed.

That's only in bullet point one and where it already falls apart.

[+] AluminiumPoint|8 years ago|reply
I can't think of anything worse than my plastic, metal, and glass friend being forced to snitch against me. It's like my best friend betraying me. Beyond creepy, key escrow proposals are the very definition of totalitarianism.
[+] colemannugent|8 years ago|reply
Can anyone explain why the government wouldn't just mandate that they be given all the keys from the start? Why would they put up with Apple as a middleman who could potentially refuse their requests?

Also, this key escrow scheme is nearly impossible to scale to more than one government. Now we'd need a way to authenticate government agents; good luck with that.

[+] allenz|8 years ago|reply
The government would be a single point of failure so it's cheaper and more secure to privatize. Also, private control of keys acts as a check upon government abuse.
[+] throwaway84742|8 years ago|reply
But why? Why give the government such a ripe target for abuse? Why tilt the balance of power even further in its favor?
[+] SolarNet|8 years ago|reply
Also a good question, but this post deliberately focused only on why it's technically a stupid idea.
[+] allenz|8 years ago|reply
Many people, especially those outside the tech community, do not view law enforcement as an adversary. In the US, the balance that we have struck is that the government cannot search our property, except upon probable cause (fourth amendment). While I personally don't like it, I think that warrant-based key escrow is reasonable from a policy perspective.
[+] nine_k|8 years ago|reply
Because every organization strives for more power, whether its members admit it or not.

The very idea of "checks and balances" is that different organizations would strive for power on opposite directions, thus preventing each other from gaining much.

[+] amelius|8 years ago|reply
Because now that there is a patent, the government can start forcing companies to implement it. And the patent owner will profit. Quite smart, actually.
[+] prepend|8 years ago|reply
I’m not sure what the benefit is to Apple or other phone manufacturers. This looks like a substantial cost with zero benefit to anyone other than law enforcement. And substantial new risk of misuse or abuse.

What’s Ozzie’s true motivation? Is he looking to start a company running Clear and raking in patent revenue? I get why the governments want this, but not why a citizen would propose this.

If it weren’t Ray Ozzie, I would think this was just part of some propaganda push.

[+] jakelazaroff|8 years ago|reply
> I’m not sure what the benefit is to Apple or other phone manufacturers. This looks like a substantial cost with zero benefit to anyone other than law enforcement.

The benefit is that law enforcement has access to relevant information. Society has a vested interest in this provided it doesn't infringe any other rights. It's why warrants exist. If you have the ability to respect a warrant without hurting your customers, it should be illegal not to do so.

Obviously there are significant technical issues, which is why this is contentious; those are outside the scope of this comment.

[+] JustSomeNobody|8 years ago|reply
Money. His true motivation is money. Secondary to that is prestige; he "solved" this problem.
[+] valiant-comma|8 years ago|reply
Just a nitpick. Matthew Green uses the frequency with which signing keys are leaked as evidence that Ozzie's proposed system would be similarly insecure. This is a weak analogy: signing private keys are often leaked because their use case requires them to be "online" in some fashion (code must be signed with the private key so it can be verified with the public key). Similarly, CAs must use private keys operationally (to sign customer CSRs), increasing the risk of key compromise.

In Ozzie's proposal, the private key never actually has to exist outside the environment it was created in; only the public key does. As pointed out in other comments, LE would not need access to the private key either: they could simply submit the encrypted passcode to the manufacturer, who would then decrypt it on their behalf using the private key.

[+] allenz|8 years ago|reply
Code signing and decryption both require access to the private key, possibly through a hardware security module. I don't see why decryption has less exposure.
[+] johnvega|8 years ago|reply
Extremely exceptional access only, in cases where thousands or millions of people's lives could be at stake. Since we can't create fully unbreakable software/hardware security systems anyway, if ever, companies can use technology + psychology: "unintentionally" create an extremely-difficult-to-find bug that requires extremely talented engineers and large hardware resources to exploit, then "unintentionally" share it in the most discreet way possible, probably just verbally, with very few trusted third parties. And it is not officially approved by top management, which may not even know about it. We don't live in a perfect world and we don't have a perfect solution. JUST COMMENTS, NOT A SUGGESTION!
[+] akira2501|8 years ago|reply
> Extremely exceptional access only, in cases where thousands of people's lives could be at stake or millions.

And how do we determine when that's actually the case and when it's overhyped or flawed intelligence?

> We don't live in a perfect world and we don't have a perfect solution.

Exactly, so focusing on phone encryption is probably a waste of time.

[+] ggm|8 years ago|reply
I don't want this scheme. I don't want key escrow. But one critique in the document is the 'if lost, lost forever' moment. If the escrow DB is compromised, the article says, all phones are now pwned. For that point in time, true.

But phones are online devices. Why does the escrow key have to be a constant, such that if the central store is compromised, all phones prior to that date are compromised forever?

E.g., re-spin the per-phone keygen on some cycle, and you define a window of risk, but it passes. Re-spinning clearly has to pass through some protocol, but we've been doing ephemeral re-keying forever with websites.
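The re-spin idea can be sketched as per-epoch key derivation (hypothetical names; a real protocol would also have to re-escrow each fresh key and handle data encrypted under old epochs):

```python
import hashlib
import hmac

# Per-phone root secret that never leaves the device (illustrative).
MASTER = b"per-phone root secret"

def epoch_key(epoch: int) -> bytes:
    # Each cycle derives an independent key; only the current epoch's
    # key is deposited in the central escrow store.
    return hmac.new(MASTER, f"epoch-{epoch}".encode(),
                    hashlib.sha256).digest()

# A vault breach during epoch 41 stops mattering once the phone rotates
# to epoch 42 and re-escrows: the stolen copy no longer matches.
print(epoch_key(41) == epoch_key(42))  # -> False
```

The window-of-risk framing follows directly: a stolen escrow database only covers devices until their next rotation, rather than forever.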

[+] tosser00005|8 years ago|reply
He talks about “massive amounts of extremely valuable key material” needing to be stored for billions of devices.

It’s not like this would be Fort Knox. All that data could be stored on a couple of USB sticks, which, really, makes it even scarier. Someone could hold the entire contents in the palm of their hand and walk away with everything.

[+] kardos|8 years ago|reply
The article makes exactly that point:

> If ever a single attacker gains access to that vault and is able to extract, at most, a few gigabytes of data (around the size of an iTunes movie), then the attackers will gain unencrypted access to every device in the world. Even better: if the attackers can do this surreptitiously, you’ll never know they did it.
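The claim is easy to sanity-check. Assuming 32 bytes of wrapped key material per device (my assumption; the article's "few gigabytes" figure implies even less), a year of global phone volume fits on a single consumer USB stick:

```python
devices = 2_000_000_000       # rough global annual phone volume
bytes_per_entry = 32          # assumed: one wrapped 256-bit key each

total_bytes = devices * bytes_per_entry
print(total_bytes / 10**9)    # -> 64.0 gigabytes
```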

[+] Zigurd|8 years ago|reply
What if someday we get political leadership so awful that, hypothetically, a former CIA chief feels compelled to warn that it is fundamentally dangerous to the nation?

One answer might be that we deserve such an outcome, and there is no reason to insulate encryption from the negative consequences. But is that a good answer?

[+] FascinatedBox|8 years ago|reply
> Also, when a phone is unlocked with Clear, a special chip inside the phone blows itself up.

no thanks

[+] pdpi|8 years ago|reply
Assuming “blows itself up” means it's bricked rather than “does a Samsung”, I'm OK with that. As the article explains, it's the only form of intrusion detection in the whole thing.
[+] DoctorOetker|8 years ago|reply
Personally I believe real world actions should be the focus of surveillance. The empires are simply trying to cheap out by focusing on surveillance of computer activity.

This is the most profound part of Matthew Green's piece in my opinion:

"While this mainly concludes my notes about on Ozzie’s proposal, I want to conclude this post with a side note, a response to something I routinely hear from folks in the law enforcement community. This is the criticism that cryptographers are a bunch of naysayers who aren’t trying to solve “one of the most fundamental problems of our time”, and are instead just rejecting the problem with lazy claims that it “can’t work”. "

I believe the most fundamental problem is: how can we decentralize real-world security? I am FOR mass surveillance but AGAINST centralized mass surveillance.

Assume every nook and cranny of the world were covered by community cameras, and the cameras encrypted their streams with threshold cryptography, such that the populace holds different parts of the secret; then one needs "enough" citizens agreeing to reveal the contents seen by a specific camera at a specific time. This way it's public for all or public for none. Every accident, every murder, ...

Suppose a body is found; then the group decides to reveal the imagery: oh yes, in this case the person was murdered! Look, the perpetrator is walking out of view to the next camera, then the next... we can trace him to where he is now. Properly trained citizens (in a now-authorized police role) go and arrest the guy. He is now in prison awaiting his trial (also with community cameras, so no broomsticks in prisoners' ani). At trial time, if the person denies it, or claims to be a different person from the one arrested, we can trace through all the imagery from his committing the crime to his sitting in court right there and then.

So yes, there is a real conflict between cryptographers and centralized law enforcement. We don't need no spooks!

And the spooks cannot decode the camera imagery: a large enough number of citizens (chosen at random by cryptographic sortition), running instances of good-citizen client software, need to release their parts of the shared secret.
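The threshold idea this relies on is standard Shamir secret sharing. A minimal, illustrative Python sketch (toy field size and parameters; not a production scheme) in which any 3 of 5 "citizens" can reconstruct a camera key but 2 cannot:

```python
import random

# Minimal Shamir secret sharing over a prime field.
P = 2**127 - 1  # a Mersenne prime, large enough for a demo secret

def split(secret: int, k: int, n: int):
    # Random polynomial of degree k-1 with the secret as constant term;
    # each share is a point on that polynomial.
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares):
    # Lagrange interpolation at x = 0 recovers the constant term.
    total = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, -1, P)) % P
    return total

secret = 123456789
shares = split(secret, k=3, n=5)          # any 3 of 5 citizens suffice
print(reconstruct(shares[:3]) == secret)  # -> True
print(reconstruct(shares[:2]) == secret)  # -> False (below threshold)
```

Fewer than k shares reveal nothing about the secret, which is the "public for all or public for none" property the comment wants.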

EDIT:

So there are, broadly speaking, 2 kinds of crimes:

* meatspace crimes (murder, negligence, rape, making child porn (automatically rape), ...)

* cyber crimes (copyright, child porn, ...)

I argue that not implementing such a community camera system is a form of negligence in itself.

It does not address things like copyright infringement, but... that's not exactly the most popularly supported concept.

Then there is the problem of child porn: fake and real.

I argue that with deepfakes, any faked child porn will eventually become indiscernible from real child porn.

Which leaves the problem of official child porn recorded by the community cameras used to apprehend perpetrators (since these also sign the imagery to attest authenticity!).

Due to taboo, many victims of child abuse didn't realize, or only had doubts, that they were suffering abuse, enabling the abuse to continue. Without concrete visual examples for them to explore, to assess whether they are or are not suffering child abuse, how can they alert others to their situation? We send these children extremely mixed messages: absolutely tell us if you are being abused, but absolutely never falsely report a person. Merely asking someone else for advice is automatically interpreted as a child reporting child abuse. How can a child assess his or her situation? With abstract questions using words and connotations it does not know?

I believe the number of reported child abuses would go up if we used these community cameras for decentralized mass surveillance.

Also for crime in general (theft, murder, ...): the knowledge that you will, with extremely high probability, be caught will deter a lot of crime. I would not be surprised if the rate of "impulsive" crimes (where the criminal was supposedly unable to control his urges) dropped substantially, revealing that in the current system such criminals often get off the hook.

There will still be rude people, getting fines for groping women while drunk. But for any actual crime, both victim and perpetrator would know that the victim can simply report it to the group, and that the perpetrator cannot escape for lack of evidence. The current lack of evidence constantly discourages people from reporting crimes (as there is risk involved: financial: lawyers; emotional: potential incredulity at the police station, ...).

One might think this would cause criminals to escalate to murder: "if you rob a victim, you should kill her, or else she will report you." But hiding a body will be very hard, and if a person goes missing, friends and relatives will report it, and instead of following the criminal we can follow the missing person from the time and place she was last reported seen!

As long as cryptographers only draw the privacy card, the law enforcement community has a point. As long as the law enforcement community only draws the centralized power card, the cryptographers have a point.

Only when we have decentralized mass surveillance can we have both privacy (as long as you don't commit crimes or go missing) and real law enforcement.

Common FAQ:

What if, say, a stalker repeatedly reports his ex as "missing"? Cry wolf too many times and you'll be blocked from reporting a person missing: the good-citizen client software that the citizens individually run will refuse to comply.

What if a stalker, or a group of them, repeatedly reports a "murderer" in a celebrity's bedroom? We can send a local but randomly selected, properly trained (group of) citizen(s) (in a police role) to go check the room; if the supposed dead body is not there, there's no reason to unlock the imagery.

(I will add more as people ask)

[+] allenz|8 years ago|reply
Regarding your distinction between real and cyber crimes, digital evidence can certainly be relevant in a murder case, e.g. iMessages, location history, search history. Also, the read-only bricking chip tries to allow search but exclude ongoing surveillance, though I don't think it's technically feasible.
[+] zAy0LfpBZLC8mAC|8 years ago|reply
> * cyber crimes ([...], child porn, ...)

I find that a disturbing classification.

[+] carapace|8 years ago|reply
I've been thinking along somewhat similar lines. Here's an old thing I wrote about it. I'd be curious to know what you think.

"Total Surveillance is the Perfection of Democracy"

For once I disagree with RMS, re: https://www.gnu.org/philosophy/surveillance-vs-democracy.htm...

I believe that it is fundamentally not possible to "roll back" the degree of surveillance in our [global] society in an effective way. Our technology is already converging to a near-total degree of surveillance all on its own. The article itself gives many examples. The end limit will be Vinge's "locator dust" or perhaps something even more ubiquitous and ephemeral. RMS advocates several "band-aid" fixes but seems to miss the logical structure of the paradox of inescapable total surveillance.

Let me attempt to illustrate this paradox. Take this quote from the article:

    "If whistleblowers don't dare reveal crimes and lies, we lose the last shred of effective control over our government and institutions."
(First of all we should reject the underlying premise that "our government and institutions" are only held in check by the fear of the discovery of their "crimes and lies". We can, and should, and must, hold ourselves and our government to a standard of not committing crimes, not telling lies. It is this Procrustean bed of good character that our technology is binding us to, not some dystopian nightmare.)

Certainly the criminally-minded who have inveigled their way into the halls of power should not be permitted to sleep peacefully at night, without concern for discovery. But why assume that ubiquitous surveillance would not touch them? Why would the sensor/processor nets and deep analysis not be useful, and used, for detecting and combating treachery? What "crimes and lies" would be revealed by a whistleblower that would not show up on the intel-feeds?

Or this quote:

    "Everyone must be free to post photos and video recordings occasionally, but the systematic accumulation of such data on the Internet must be limited."
How will this limiting be done? What authority will decide who gets to collect (archive!) what and when? And won't this authority need to see the actions of the accumulators to be able to decide whether they are following the rules?

In effect, doesn't this idea imply some sort of ubiquitous surveillance system to ensure that people are obeying the rules for preventing a ubiquitous surveillance system?

Let's say we set up some rules like the ones RMS is advocating, how do we determine that everyone is following those rules? After all, there is a very good incentive for trying to get a privileged position vis-a-vis these rules. Whoever has the inside edge, whether official spooks, enemy agents, or just criminals, gains an enormous competitive advantage over everyone else.

Someone is going to have that edge, because it's a technological thing, you can't make it go away simply because you don't like it. If the "good guys" tie their own hands (by handicapping their surveillance networks) then we are just handing control to the people who are willing to do what it takes to take it.

You can't unilaterally declare that we (all humanity) will use the kid-friendly "lite" version of the surveillance network because we cannot be sure that everyone is playing by those rules unless we have a "full" version of the surveillance network to check up on everybody!

We can't (I believe) prevent total surveillance but we can certainly control how the data are used, and we can certainly set up systems that allow the data to be used without being abused. The system must be recursive. Whatever form the system takes, it shall necessarily have to be able to detect and correct its own self-abuses.

Total surveillance is the perfection of democracy, not its antithesis.

The true horror of technological omniscience is that it shall force us for once to live according to our own rules. For the first time in history we shall have to do without hypocrisy and privilege. The new equilibrium will not involve tilting at the windmills of ubiquitous sensors and processing power but rather learning what explicit rules we can actually live by, finding, in effect, the real shape of human society.