macOS has been designed to keep users and their data safe while respecting their privacy.
Gatekeeper performs online checks to verify if an app contains known malware and whether the developer’s signing certificate is revoked. We have never combined data from these checks with information about Apple users or their devices. We do not use data from these checks to learn what individual users are launching or running on their devices.
Notarization checks if the app contains known malware using an encrypted connection that is resilient to server failures.
These security checks have never included the user’s Apple ID or the identity of their device. To further protect privacy, we have stopped logging IP addresses associated with Developer ID certificate checks, and we will ensure that any collected IP addresses are removed from logs.
In addition, over the next year we will introduce several changes to our security checks:
* A new encrypted protocol for Developer ID certificate revocation checks
* Strong protections against server failure
* A new preference for users to opt out of these security protections
> A new encrypted protocol for Developer ID certificate revocation checks
Apple's online verification scheme still seems to be the wrong approach both for privacy (since it leaks information) and for security (since apps still need to keep working offline and during service outages). Encrypted queries can still leak information to observers, and we apparently still have to trust Apple to "remove" information from their logs (rather than simply not logging to begin with).
Dev certificate revocations are rare enough that they can be handled by periodic updates to an on-device revocation list. This is similar to what Chrome does with its CRLSet.
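The CRLSet-style approach described above is easy to sketch. The following is a minimal illustration (all names are hypothetical, not Apple's or Chrome's actual code): a background job periodically refreshes a local set of revoked serials, app launches only ever consult the local copy, and a failed refresh simply keeps the last known list.

```python
import time

class LocalRevocationList:
    """Periodically refreshed on-device list of revoked certificate serials."""

    def __init__(self, refresh_interval=6 * 3600):
        self.revoked_serials = set()
        self.last_refresh = 0.0
        self.refresh_interval = refresh_interval

    def refresh(self, fetch_serials, now=None):
        """fetch_serials() downloads the current list; on network failure we
        keep the stale copy instead of blocking app launches (soft-fail)."""
        now = time.time() if now is None else now
        if now - self.last_refresh < self.refresh_interval:
            return
        try:
            self.revoked_serials = set(fetch_serials())
            self.last_refresh = now
        except OSError:
            pass  # offline or outage: keep last known data

    def is_revoked(self, serial):
        # Purely local lookup: no network on the app-launch path.
        return serial in self.revoked_serials
```

The key property is that the app-launch path never touches the network, so an outage can at worst delay new revocations, not block launches.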
I am still waiting for someone to provide a plausible explanation as to why it has to be an online check, rather than working like antivirus software, where signatures are pushed to the client.
Instead, everything was derailed into a discussion of Apple's data collection.
Let it be known: one asshole yelling on his personal blog can bully the largest company in the world into encrypting their shit and deleting their logs, and, most importantly, providing a way of turning it off.
Remember that, kids. I’m as surprised as you are.
Now I have an even bigger and more difficult writing task ahead of me: rms cold emailed me today to ask me, point blank (and presumably non-rhetorically), why I am still running macOS.
That’s going to be a doozy, because he’s damn well right.
A good first start. What I would still like to see on top of this:
* An option to notify the user when a certificate has been revoked: a short, tweet-length description of why, a link to the details, and a choice to quarantine the app or keep using it. This is similar in style to how antivirus software tells you: 'this kind of malware was potentially found, but if we are wrong, feel free to remove it from quarantine'.
* As a default, the revocation list is checked locally on the machine twice a day, and checked the current way (remotely) twice a week, always on the same days, to strike a good balance. Allow the user to adjust the frequency.
While it's good that they're going to improve Gatekeeper's certificate checks, don't forget the other major issue raised recently: all of Big Sur's system network traffic is deliberately leaked around VPNs.
Will they change course on that issue? Or will you have to carry a Raspberry Pi around as a hardware firewall if you want an actually private network connection? (Or, alternately, not use a Mac.)
That is unbelievable. Breaking VPNs is literally a life-or-death situation for many journalists and activists in certain countries; not to mention something that would (justifiably) give network admins a heart attack.
It also increases the attack surface, as malicious programs may find ways to hijack Apple's traffic, or attach their own traffic to OS traffic. All you need is an Apple service that relays the information somehow -- for example, a hypothetical Apple service that requests information/metadata from an application-specified URL.
The VPN leak only concerns per-application VPNs and filters, which is the mechanism used by firewalls like Little Snitch and LuLu. These cannot be applied to Apple's apps.
System-wide VPNs thankfully don't have the same problem.
Can anyone weigh in on why Apple would prevent this? It's not as though redirecting the traffic from Apple's software will let you impersonate Apple (unless you've also been able to load your own fake certificates, in which case the client is already hosed). At worst you can selectively block them, no? And you can do that anyway with a hosts file.
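For reference, the hosts-file blocking mentioned above is a one-line change. This null-routes the responder hostname involved in the incident, though it blocks legitimate revocation checks along with the phone-home behavior:

```
# /etc/hosts
0.0.0.0 ocsp.apple.com
```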
This is in response to the issue last week where the slowness of Apple's OCSP responders caused app launches to hang. It was a bad look for privacy-conscious Apple, especially considering that basic OCSP queries are unencrypted HTTP requests.
In my post (https://blog.cryptohack.org/macos-ocsp-disaster) I argued that OCSP was inherently a poor way to perform certificate revocation in this scenario, and that an approach based on Certificate Revocation Lists (CRLs) could be preferable. Regardless, it looks like Apple might be doubling down on OCSP but encrypting the requests, or possibly adding a new protocol altogether.
> But it’s also fundamentally different since Apple has total control over its own chain of trust. Other certificate authorities are not allowed to issue valid certificates for code signing as all certificates must chain back up to Apple.
To me this was the most curious part of the entire situation. The post briefly mentions CRLite and Bloom filters; they rely on the list of all, if not most, valid certificates (which was impossible before CT), and it's understandable that they are not yet widely deployed. But Apple surely does know the list of all developer certificates and can simply publish a (probably compressed) list of serial IDs of revoked developer certificates that would otherwise be valid. I don't see a good reason to use big moving parts like OCSP here, especially given the soft-fail behavior.
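A flat list of revoked serials, as suggested above, compresses well and can be checked with a binary search. A rough sketch (the format and names are invented here, not any real Apple format):

```python
import zlib
from bisect import bisect_left

SERIAL_WIDTH = 8  # bytes per serial in this toy format

def pack_revocations(serials):
    """Serialize sorted serials as fixed-width big-endian integers, compressed."""
    blob = b"".join(s.to_bytes(SERIAL_WIDTH, "big") for s in sorted(serials))
    return zlib.compress(blob)

def unpack_revocations(data):
    blob = zlib.decompress(data)
    return [int.from_bytes(blob[i:i + SERIAL_WIDTH], "big")
            for i in range(0, len(blob), SERIAL_WIDTH)]

def is_revoked(sorted_serials, serial):
    """Binary search in the decoded, sorted list."""
    i = bisect_left(sorted_serials, serial)
    return i < len(sorted_serials) and sorted_serials[i] == serial
```

Fixed-width sorted serials also diff well, so clients could fetch small incremental updates rather than the whole list.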
Since Bloom filters allow for false positives, wouldn’t that make them inappropriate here? You wouldn’t want a valid certificate to be perceived as revoked. (I recognize that I’m probably wrong, given that Mozilla is doing this - where is my mistake in logic?)
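As it happens, CRLite's design answers exactly this objection with a *cascade* of filters: because the issuer knows the full universe of issued certificates, it can enumerate the first filter's false positives and encode them in a second filter, and so on until none remain, making lookups exact for any certificate in that universe. A toy sketch (single-hash filter, illustrative only):

```python
import hashlib

class Bloom:
    """Toy single-hash Bloom filter backed by one big integer."""
    def __init__(self, size, salt):
        self.size, self.salt, self.bits = size, salt, 0
    def _pos(self, item):
        h = hashlib.sha256(f"{self.salt}:{item}".encode()).digest()
        return int.from_bytes(h[:4], "big") % self.size
    def add(self, item):
        self.bits |= 1 << self._pos(item)
    def __contains__(self, item):
        return bool((self.bits >> self._pos(item)) & 1)

def build_cascade(revoked, valid):
    """Alternate levels: level 0 encodes the revoked set, level 1 the valid
    serials level 0 falsely matches, level 2 the revoked serials level 1
    falsely matches, ... until no false positives remain."""
    levels, include, exclude = [], set(revoked), set(valid)
    for salt in range(64):                 # depth cap; tiny in practice
        if not include:
            break
        f = Bloom(32 * len(include), salt)
        for s in include:
            f.add(s)
        levels.append(f)
        include, exclude = {s for s in exclude if s in f}, include
    return levels

def cascade_is_revoked(levels, serial):
    """Exact for any serial in the known universe (revoked | valid)."""
    for depth, f in enumerate(levels):
        if serial not in f:
            return depth % 2 == 1          # fell out at an odd level: revoked
    return len(levels) % 2 == 1
```

The catch is that answers are only guaranteed for serials the builder knew about, which is why CRLite depends on having the full set of valid certificates.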
I really like this because of the increased privacy (Apple doesn't know which apps you run) as well as the better failure mode. If their servers go down, the worst that can happen is that an update of the revocation list fails, which means they can't get new revocations out to people. But it's not that apps won't start anymore.
Seriously, why do they have to collect ANYTHING without opt-in? People pay them for hardware; their revenue stream has a different source than Facebook's or Google's.
I think it's fear (useful telemetry) or greed (valuable telemetry) or vanity (look how many people launch x).
Really, just do it. So some customers shoot themselves in the foot. That is how they learn. Others who are perfectly capable of managing things will also be happy with actual privacy. That is how trust is built up, and with trust comes unfettered trade.
Right now power users are alienated, regular customers feel deceived, and Apple could just do better.
While everyone on HN (including me) is mad about the privacy implications of what's happening here, including how little all this does for programmers dealing with all kinds of binaries updated daily, I quickly want to point out that I think this type of functionality is a great idea for most non-technical users. Apple has always been at the forefront of extreme usability (kids using iPads, seniors sending iMessages), and the internet has a lot of toxic stuff that needs to be kept away from many non-technical users. For my parents' computer, I'd rather have this ping home to Apple with hashes of the apps they open than have them exposed to tons of malware. That said, they really need to work on the privacy aspect.
"Apple has always been at the forefront of extreme usability (kids using iPads, seniors sending iMessages)"
I've always understood this to be a marketing victory on Apple's side. From what I've seen, using Mac/iOS isn't any easier or more difficult than Windows/Android.
FWIW, I do think Apple is targeting non-techies a lot, and really not caring much about tech users.
The developer experience on Apple tech is pretty bad and getting worse -- but for the other 99% of the people, Apple seems to be the best tool for them.
For me, the fact that they are working on improvements (including the ability to disable these checks) is a reason to take Apple devices into consideration again.
The principle here is good (check if the developer’s certificate is revoked) but the implementation is completely batshit. Why not cache the result and send a push notification through the centralized channels if there is a Sev1 revocation incident?
This is a really good point; there is no good reason to have millions of devices phone home for permission on every single app open. If Apple's claim is to be believed, there are a million patterns that would make more sense for achieving this goal: blacklists, whitelists, caching, etc.
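One such pattern, sketched under assumed names (not Apple's actual mechanism): cache the last per-certificate answer with a TTL, refresh it only when it goes stale, and fail open when the server is unreachable, keeping the network off the hot path.

```python
import time

class CachedRevocationChecker:
    """Cache-first revocation check with soft-fail on network errors."""

    def __init__(self, query_server, ttl=24 * 3600):
        self.query_server = query_server   # returns True if revoked; may raise
        self.ttl = ttl
        self.cache = {}                    # serial -> (revoked?, fetched_at)

    def is_revoked(self, serial, now=None):
        now = time.time() if now is None else now
        hit = self.cache.get(serial)
        if hit and now - hit[1] < self.ttl:
            return hit[0]                  # fresh answer: no network at all
        try:
            revoked = self.query_server(serial)
            self.cache[serial] = (revoked, now)
            return revoked
        except OSError:
            # Stale answer if we have one, otherwise fail open.
            return hit[0] if hit else False
```

Whether failing open is acceptable is a policy choice, but it matches the soft-fail behavior Gatekeeper already exhibits.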
I get "never attribute to malice what can be explained by incompetence", but this is Apple. Are we to believe that this public, unencrypted endpoint was set up and is being called tens of millions of times a day because Apple engineers were too incompetent to come up with a better solution for something so fundamental (to Apple) as the security of the software running on their devices? And flying so blatantly in the face of their claim to protect user privacy?
This whole incident is completely bonkers. People should be getting fired over this and there should be an apology and a massive step back from this horrible, horrible approach.
The protocol in use is OCSP. The OCSP endpoint is part of the certificate. If I had to guess, I would say that they just relied on certificate validation features of the library they are using, not necessarily aware of all of the consequences.
Checking OCSP is a standard feature of many SSL libraries.
Doing what you suggest is an implementation outside of what standard libraries would provide and was probably completely out of scope until now when stuff started burning.
Don't call relying on standard features of an underlying library a "batshit implementation". For the vast majority of cases, going with what's already there is a far better solution than NIHing your own custom thing.
> The principle here is good (check if the developer’s certificate is revoked)
How is that remotely good? It makes it easy to bully or blackmail a developer into abiding by Apple's every wish by revoking their certificate in pure retaliation; cf. the Epic debacle. A central authority can't be trusted.
justapassenger | 5 years ago
Apple has a really strong brand. If this was from Google or Facebook, there'd be an angry mob asking how they dare log IPs in the first place.
tbodt | 5 years ago
This is known as CRLite, and Firefox is implementing it for HTTPS certificates. https://blog.mozilla.org/security/2020/01/09/crlite-part-1-a...
alacombe | 5 years ago
No, they haven't. Proof: the lack of tactile feedback on the iPhone 7+ is a usability impairment for seniors.
jzer0cool | 5 years ago
What about kernel extensions? Is it hard to tell which are certified or not, and how would one go about safely removing any?