spdy | 4 years ago
---
It is an interesting attack, but is the above goal of protecting against adversaries on the inside ever actually achievable?
michaelt|4 years ago
People have gotten very close to achieving similar goals.
For example, modern games consoles' anti-piracy measures guard against the device owner who has physical control and unlimited time. [1]
iPhone activation locks likewise prevent stolen phones from being used, even by thieves with physical control and unlimited time.
And neither of these systems relies on the clunky 'brick the device if the case is opened' methods of yesteryear.
(Of course there have also been a great many failed attempts - almost every console since the dawn of time has eventually been hacked, as have things like TPMs and TrustZone, many versions of the iPhone were rooted, etc etc)
[1] https://www.youtube.com/watch?v=quLa6kzzra0
BeefWellington|4 years ago
Yes. To expand: to a function running on the CPU, an administrator is just another user. The operating system is responsible for managing those designations.
These trusted computing pieces across all kinds of CPUs are specifically aimed at protecting against people with host-root, so it would seem like it's a goal they've set for themselves and should be reasonably achievable.
JustFinishedBSG|4 years ago
It's not important but come on, if your field is cyber security at least make sure rogue is spelled correctly.
onlinejk|4 years ago
ducks
lima|4 years ago
Achievable in any circumstances? No. Within a well-defined threat model, definitely.
baybal2|4 years ago
No, safe execution of untrusted code is impossible by definition, at least not without undoing 40 years of IC design practice.
It's an almost physical limitation: it is very hard to compute something without some electromagnetic leakage from or to the die.
Take a look at secure CPUs for credit cards. They have layers upon layers of anti-tampering and anti-extraction measures, and yet TEM shops in China do firmware/secret extraction from them for $10k-$20k.
MayeulC|4 years ago
> No, safe execution of untrusted code is impossible by the very definition
I think this is more about data processing while hiding the data from whoever operates the hardware. Homomorphic encryption could be a partial answer to that.
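To give some intuition for the homomorphic-encryption point: in a partially homomorphic scheme such as Paillier, multiplying two ciphertexts yields an encryption of the sum of the plaintexts, so an operator can combine encrypted values without ever seeing them. A minimal sketch with toy, demo-sized primes (purely illustrative, nowhere near secure parameters, and not something SEV itself does):

```python
import math
import random

def paillier_keygen(p, q):
    """Toy Paillier keypair from two small primes (demo only, not secure)."""
    n = p * q
    lam = math.lcm(p - 1, q - 1)
    mu = pow(lam, -1, n)  # valid because we use g = n + 1
    return (n,), (lam, mu, n)

def encrypt(pub, m):
    (n,) = pub
    n2 = n * n
    r = random.randrange(2, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(2, n)
    # With g = n + 1: c = g^m * r^n mod n^2
    return (pow(n + 1, m, n2) * pow(r, n, n2)) % n2

def decrypt(priv, c):
    lam, mu, n = priv
    n2 = n * n
    L = (pow(c, lam, n2) - 1) // n  # L(x) = (x - 1) / n
    return (L * mu) % n

pub, priv = paillier_keygen(101, 103)
c1, c2 = encrypt(pub, 123), encrypt(pub, 456)
# Multiplying ciphertexts adds the plaintexts -- without decrypting.
c_sum = (c1 * c2) % (pub[0] ** 2)
print(decrypt(priv, c_sum))  # 579
```

It is only "partial" in that addition (and scalar multiplication) come for free; fully homomorphic schemes that support arbitrary computation exist but are far more expensive, which is why it is at best a partial answer.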
evancox100|4 years ago
Also, just because something is physically possible, doesn't mean that the barriers to doing so are irrelevant. If it costs you $10k to unbrick a locked & stolen iPhone, then those countermeasures have likely succeeded at their intended purpose. This is why threat models try to quantify the time and/or monetary value of what they're protecting.
phire|4 years ago
That the CPU should be able to cryptographically prove that a VM has been set up without any interference from an inside attacker who controls the hardware.
At the very least, SEV massively raises the barrier to such attacks. They're now beyond the ability of a rogue administrator or technician, requiring complex custom motherboards. But a well-funded inside attacker could still mount one against a target of high enough value.
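A heavily simplified sketch of that attestation idea (names and flow are hypothetical; a symmetric HMAC stands in for the asymmetric signing key that real SEV roots in the AMD Platform Security Processor): the platform measures the launched VM image and signs the measurement, and the guest owner checks both the signature and the expected hash before releasing any secrets.

```python
import hashlib
import hmac

# Hypothetical device secret; in real SEV this is an asymmetric key
# fused into the PSP, never shared with the host software.
DEVICE_KEY = b"secret-fused-into-silicon"

def measure_and_attest(vm_image: bytes, nonce: bytes):
    """Platform side: hash the launched VM image and sign the measurement."""
    measurement = hashlib.sha256(vm_image).digest()
    signature = hmac.new(DEVICE_KEY, measurement + nonce, hashlib.sha256).digest()
    return measurement, signature

def verify(expected_image: bytes, nonce: bytes, measurement: bytes, signature: bytes) -> bool:
    """Guest owner side: check the signature and that the measurement matches."""
    expected_meas = hashlib.sha256(expected_image).digest()
    expected_sig = hmac.new(DEVICE_KEY, measurement + nonce, hashlib.sha256).digest()
    return hmac.compare_digest(signature, expected_sig) and measurement == expected_meas

image = b"trusted-kernel-and-initrd"
nonce = b"fresh-random-nonce"
meas, sig = measure_and_attest(image, nonce)
print(verify(image, nonce, meas, sig))              # True
print(verify(b"tampered-image", nonce, meas, sig))  # False
```

The nonce prevents replay of an old attestation; the paper's attack matters precisely because extracting the signing key lets software forge this proof without the hardware's involvement.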
londons_explore|4 years ago
The end of the abstract explicitly refutes this. It is claiming that a software-only solution, using keys derived with this technique, can pretend to be a suitable target to migrate a secure VM to, which then allows the rogue admin to inspect or modify anything in the VM.
londons_explore|4 years ago
This is about protecting a VM from people who have admin rights and hardware access outside the VM.
benlivengood|4 years ago
Fundamentally, though, system security hasn't caught up with the promise of SEV. It's far more likely that a VM will be compromised by 0-day attacks than insiders at the cloud companies. But if you really need to run a secure kernel on someone else's machine then SEV is the way of the future. This includes using SEV on-premises against hardware attacks. I've wanted hardware RAM encryption for a decade or two to avoid coldboot attacks and similar hardware vulnerabilities.