marcan_42 | 3 years ago
As a security researcher and given past showings from Intel, I wouldn't put much faith in SGX, even if they try to fix past flaws. SGX as a concept for tenant-provider isolation requires strong local attacker security, which is something off the shelf x86 has never had (not up to contemporary standards, ever) and certainly not in anything Intel has put out. They've demonstrated they have neither the culture nor the security chops to actually engineer a system that could be trusted, IMO. Plus then there's all the microarchitectural leak vectors with a shared-CPU approach like that, and we know Intel have utterly failed there (not just Spectre; there was absolutely no excuse for L1TF and some of the others, and those really showed us just how security-oblivious Intel's design teams are).
Right now, the x86 world would probably do well to listen to Microsoft, since their Xbox division managed to coax AMD into actually putting out secure silicon (they're one of the two big companies doing proper silicon security at the consumer level, the other being Apple, with Google trying to catch up as a distant third). But given the muted response to Pluton from the industry, and the poor way in which this is all being marketed and explained, I'm not sure I have much hope right now...
lawl | 3 years ago
I generally agree with you. But I recently realized there might be one use case, and it's pretty much what Signal is doing. They're processing address books in SGX so that they can't see them. I don't have much faith in the system because I don't trust SGX, of course.
But there is one interesting aspect to this. If anyone comes knocking and tells them to start logging all address books and hand them over, they can say that it's not possible for them to do so.
Anyone wanting to do that covertly would at least need to bring their own SGX exploits, meaning it probably offers SOME level of protection. Certainly not if the NSA wants the data or some LEA is chasing something high-profile enough that they're willing to buy exploits and get a court order allowing them to use them. But it does allow them to respond with "we don't have this kind of data".
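The deniability argument above rests on remote attestation: the client refuses to upload anything unless the enclave proves it is running the exact audited code. A minimal sketch of that flow, with hypothetical names and a stand-in measurement check (real SGX attestation involves a signed quote verified against Intel's attestation service, not a bare hash comparison):

```python
import hashlib
import hmac

# Hypothetical expected enclave measurement (a stand-in for MRENCLAVE);
# in real SGX this value would come from a signed attestation quote.
EXPECTED_MEASUREMENT = hashlib.sha256(b"contact-discovery-enclave-v1").digest()

def verify_attestation(reported_measurement: bytes) -> bool:
    # Client only talks to an enclave whose code matches the audited build;
    # constant-time compare avoids a trivial timing side channel.
    return hmac.compare_digest(reported_measurement, EXPECTED_MEASUREMENT)

def upload_contacts(reported_measurement: bytes, contacts: list) -> list:
    if not verify_attestation(reported_measurement):
        raise RuntimeError("enclave measurement mismatch; refusing upload")
    # Contacts are hashed client-side, so the host outside the enclave
    # only ever handles opaque digests.
    return [hashlib.sha256(c.encode()).hexdigest() for c in contacts]
```

If the operator were compelled to start logging, they would have to ship a modified enclave, which changes the measurement and makes every client refuse to upload, which is exactly the "we can't covertly comply" property described above.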
rektide | 3 years ago
It's become a moral cause to make a lot of big-data computing deniable, to be data-oblivious. It's a responsible way to build an application, it's well-engineered security, and I like it a lot.
hedora | 3 years ago
https://www.usenix.org/system/files/conference/usenixsecurit...
The basic idea is that you can play with the clock speed and voltage of one ARM core using code running on the other. They used this to make an AES block glitch at the right time. The cool part is that, even though the key is baked into the processor, and there are no data lines to read the key (other than the AES logic), this lets them infer the key.
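The shape of the attack can be shown with a deliberately toy model (nothing like the paper's actual AES differential fault analysis): a device whose key is "baked in" with no read path, where a glitch that skips the final key-addition step leaks enough for the attacker to solve for the key.

```python
# Toy fault-injection model: the key is only ever used internally, but a
# well-timed glitch that skips one step makes the output key-revealing.
SECRET_KEY = 0xA7  # baked into the "silicon"; no data lines read it

def rotl8(x):
    # rotate an 8-bit value left by one
    return ((x << 1) | (x >> 7)) & 0xFF

def rotr8(x):
    # inverse rotation
    return ((x >> 1) | ((x & 1) << 7)) & 0xFF

def device_encrypt(p, glitch=False):
    state = rotl8(p ^ SECRET_KEY)  # "round 1"
    if not glitch:
        state ^= SECRET_KEY        # final key addition the glitch skips
    return state

# Attacker side: one faulty ciphertext plus the known plaintext
# inverts the remaining (public) operation and yields the key.
p = 0x3C
faulty = device_encrypt(p, glitch=True)
recovered = rotr8(faulty) ^ p
assert recovered == SECRET_KEY
```

The real attack is far harder — the glitch must land on one specific AES round and the key falls out of differential analysis over many faulty ciphertexts — but the principle is the same: the fault turns a key-dependent black box into an equation the attacker can solve.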
Hmm. The paper is 5 years old. I still think we are a decade away.
marcan_42 | 3 years ago
It's also how I pulled the keys out of the Wii U main CPU (reset glitch performed from the ARM core). Heh, that was almost a decade ago now.
That's why Apple uses a dedicated SEP instead of trying to play games with trust boundaries in the main CPU. That way, they can engineer it with healthy operating margins and include environmental monitors so that if you try to mess with the power rails or clock, it locks itself out. I believe Microsoft is doing similar stuff with Xbox silicon.
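The monitor-and-lockout behavior described above can be sketched as follows; the thresholds, class names, and permanent-lockout policy are all illustrative assumptions, not actual SEP or Xbox behavior:

```python
# Toy model of an environmental monitor in a secure element: sample the
# supply voltage and clock, and lock out permanently on any excursion
# outside the healthy operating window. Thresholds are hypothetical.
SAFE_VOLTAGE = (0.85, 1.15)      # volts (illustrative margin)
SAFE_CLOCK = (0.9e9, 1.1e9)      # Hz (illustrative margin)

class SecureElement:
    def __init__(self):
        self.locked = False

    def _check_environment(self, voltage, clock_hz):
        ok_v = SAFE_VOLTAGE[0] <= voltage <= SAFE_VOLTAGE[1]
        ok_c = SAFE_CLOCK[0] <= clock_hz <= SAFE_CLOCK[1]
        if not (ok_v and ok_c):
            # One excursion is treated as an attack: latch the lockout.
            self.locked = True

    def use_key(self, voltage, clock_hz):
        self._check_environment(voltage, clock_hz)
        if self.locked:
            raise RuntimeError("tamper detected; key access disabled")
        return "key-material"
```

The point of the healthy margins is that any glitch aggressive enough to cause a fault is also far enough out of envelope to trip the monitor first, so the attacker gets a locked-out chip instead of a faulty-but-talkative one.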
Of course, all that breaks down once you're trying to secure the main CPU a la SGX. At that point the best you can do is move all this power stuff into the trust domain of the CPU manufacturer. Apple have largely done this with the M1s too; I've yet to find a way to put the main cores out of their operating envelope, though I don't think it's quite up to security standards there yet (but Apple aren't really selling something like SGX either).
mike_hearn | 3 years ago
"SGX as a concept for tenant-provider isolation requires strong local attacker security, which is something off the shelf x86 has never had"
Off the shelf CPUs have never had anything like SGX, period. All other attempts like games consoles rely heavily on establishing a single vendor ecosystem in which all code is signed and the hardware cannot be modified at all. Even then it often took many rounds of break/fix to keep it secure and the vendors often failed (e.g. PS3).
So you're incorrect that Intel is worse than other vendors here. When considering the problem SGX is designed to solve:
- AMD's equivalents have repeatedly suffered class breaks that required replacing the physical CPU almost immediately, due to simple memory management bugs in firmware. SGX has never had anything even close to this.
- ARM never even tried.
SGX was designed to be re-sealable, as all security systems must be, and that has more or less worked. It's been repeatedly patched in the field, despite coming out before microarchitectural side channel/Spectre attacks were even known about at all. That makes it the best effort yet, by far. I haven't worked with it for a year or so, but by the time I stopped, the state-of-the-art attacks from the research community came with severe caveats (often not really admitted to in the papers, sigh), were often unreliable, and were getting patched with microcode updates quite quickly. The other vendors weren't even in the race at all.
"there was absolutely no excuse for L1TF and some of the others, and those really showed us just how security-oblivious Intel's design teams are"
No excuse? And yet all CPU vendors were subject to speculation attacks of various kinds. I lost track of how many speculative-execution papers I read that said "demonstrating this attack on AMD is left for future work", i.e. they couldn't be bothered trying to attack second-tier vendors, and often ARM wasn't even mentioned.
I've seen some security researchers who unfortunately seemed to believe that absence of evidence = evidence of absence and argued what you're arguing above: that Intel was uniquely bad at this stuff. When studied carefully, these claims don't hold water.
Frankly I think the self-proclaimed security community is shooting us all in the foot here. What Intel is learning from this stuff is that the security world:
a. Lacks imagination. The tech is general purpose but instead of coming up with interesting use cases (of which there are many), too many people just say "but it could be used for DRM so it must die".
b. Demands perfection from day one, including against attack classes that don't exist yet. This is unreasonable and no real world security technology meets this standard, but if even trying generates a constant stream of aggressive PR hits by researchers who are often over-egging what their attacks can do, then why even bother? Especially if your competitors aren't trying, this industry attitude can create a perverse incentive to not even attempt to improve security.
"the x86 world would probably do well to listen to Microsoft, since their Xbox division managed to coax AMD into actually putting out secure silicon"
SGX is hard because it's trying to preserve the open nature of the platform. Given how badly AMD fared with SEV, it's clear that they are not actually better at this. Securing a games console i.e. a totally closed world is a different problem with different strategies.
betterunix2 | 3 years ago
Except that was an afterthought. Originally only whitelisted developers were allowed to use SGX at all, back when DRM was the only use-case they had in mind.
kibwen | 3 years ago