I wonder why their Mojo system doesn't use Mach ports on Darwin platforms? You can pass port rights along to another process, but they aren't forgeable. Unless the renderer process itself has a right to send messages to the network process, it simply can't, no matter what it knows.
A process can also disinherit its own bootstrap port, preventing it from using most services except for port rights it already holds.
Using kernel-provided capabilities (like Mach ports or Unix file descriptors) directly might have too much overhead if the design uses a lot of them. But even if that’s true, it should be possible to implement unforgeable capabilities in userland using a trusted broker process.
It’s a classic design mistake to represent IPC access rights using secrets instead of capabilities. Secrets seem more convenient to work with: they’re just data, which can be encoded and transformed like any other data, whereas capabilities need to be kept separate throughout multiple levels of abstraction, from your IPC wrapper functions down to the low-level message sending primitives.
And secrets are theoretically secure, if you do everything right. After all, in the related domain of network services, there is no trusted broker to represent capabilities, so you have to use secrets in various forms, like MAC-ed and encrypted cookies, or TLS certificates. And it mostly works out in practice.
But secrets are risky. Even with network services, you have the fundamental downside: the security model is compromised if an attacker can merely read data they shouldn’t, like the MAC key or the TLS private key, rather than having to modify data. That greatly increases the impact of vulnerabilities that only allow reads – especially if, like in this exploit, you can only read a small amount of random data, rather than whatever data you want.
When it comes to IPC, using secrets is even riskier, for multiple reasons. First, an attacker is very ‘close’ to the target, running in a different process on the same machine, which means they’re in a much better position to try to leak memory using side-channel attacks. Although the full power of hardware side-channel attacks has only recently been exposed, weaker side-channel attacks have been a known threat for a long time. There are also pure-software timing side channels, i.e. where the software does a different amount of work depending on a condition, which would let an attacker guess the value of the condition even if the CPU itself executed instructions in constant time. Second, IPC protocols tend to be lower-level, e.g. using shared memory, and the trusted side is often written in an unsafe language like (as here) C++. In contrast, network protocols and servers can typically afford to be somewhat higher-level, because the network itself introduces a bunch of overhead anyway (something that also makes them more resistant to timing side channels). But low-level means greater risk, especially for low-level vulnerability categories like memory disclosure (or, for that matter, memory corruption). Not that network services can’t be vulnerable to memory disclosure – consider Heartbleed or Cloudbleed – but it’s more likely with IPC.
Third, without getting too far into the weeds, the objectives are often somewhat different. With a network service, leaking random data is often a pretty good attack by itself, especially if the service directly handles data for many users. With IPC, the attacker usually needs to use a separate exploit to even get in position to interact with the IPC interface, so ‘only’ being able to read data is sort of a waste; you really want to escalate privileges. This is only a rule of thumb and isn’t always true – sometimes network services are only accessible after compromising a different network service; sometimes IPC is intentionally exposed to untrusted code. But it’s a factor.
The most important difference, though, is what I already said: with IPC you don’t have to use secrets. They’re risky in both network services and IPC, but with IPC you can use a trusted capability broker instead, and so you should. Capabilities are a bit less convenient, not being pure data, but precisely for that reason they’re harder to screw up, harder to leak by accident.
Another issue caused by hyperthreading. It's starting to look more and more like the OpenBSD guys made the right choice by turning it off. At this point, I think the only other option is for OS guys to implement some kind of security-aware scheduler, where threads can request secure execution on a dedicated processor. Though I suppose it might make more sense to execute on isolated processors by default and have threads explicitly permit unsecured execution for non-critical stuff.
> At the time of writing, both Apple and Microsoft are actively working on a fix to prevent this attack in collaboration with the Chrome security team.
Why is Apple a contributor to Chromium? I thought they still used WebKit? Or maybe the bug exists in WebKit too?
Swift is an interesting language choice for something like this. As far as I am aware, support is non-existent outside of Ubuntu and OS X, so it seems limiting compared to, say, C.
Any reason P0 doesn't seem to direct some time and energy towards finding zero days in Firefox? Are they mandated to only investigate codebases of interest to Google?
I suspect it's because Firefox exploits have looked the same for the last several years -- there has not been a lot of novelty required to implement an exploit, given an arbitrary read/write primitive.
P0 does report vulnerabilities to Firefox though, and they obviously get fixed, they're just not particularly interesting to exploit.
Hah. That's so the opposite of the usual complaint! Normally people complain that P0 never does write-ups on Google stuff. Maybe there are concerns about the PR aspects of releasing info about FF vulns?
I love the work Project Zero does, but I wish Google would do some basic security things for Android, like streamlining the patch system so updates actually reach phones, providing security updates for their older phones for longer, and imposing more stringent security requirements on Android. I feel like Project Zero is really good PR for them, but they could really do more for the security of their apps and phones :/