Anyone who wants one should be able to buy such a device: it isn't as if any of the machine code you are getting elevated access to is even secret (you can download, from Apple, unencrypted copies of the entire operating system). (You can try to argue that this is about keeping you from getting at third-party encrypted assets to prevent some aspect of App Store piracy, but it doesn't accomplish that either: you only need a single supported jailbroken device for that to be easy, the world already has millions of those, and you can't really prevent them, as the act of fixing bugs discloses the bug for the older firmware.)
The real problem here is that Apple is so ridiculously controlling about who is allowed to develop software (in Apple's perfect world, all software development would require an Apple license and all software would require Apple review)--in a legal area that isn't really conducive to that (see Sega v. Accolade, which was important enough to later ensure permanent exemptions for reverse engineering, and even jailbreaking, for software interoperability purposes in the original DMCA anti-tampering laws)--that they are even now suing Corellium, a company which makes an iPhone emulator (which, again, has strong legal precedent), in order to prevent anyone but a handful of highly controlled people from being able to debug their platform.
Apple just has such a history of being anti-security researcher--banning people like Charlie Miller from the App Store for showing faults in their review process, pulling the vulnerability detection app from Stefan Esser, slandering Google Project Zero, denying the iPhone 11 location tracking until proven wrong, requiring people in their bug bounty program to be willing to irresponsibly hold bugs indefinitely so Apple can fix things only at their leisure, and using the DMCA to try to squelch research via takedowns--that this ends up feeling like yet another flat gesture: they should have done much more than this device at least a decade ago. I'd say Apple is in store for a pretty big fall if anyone ever manages to get a bankroll large enough to actually fight them in court for any protracted length of time :/.
>that they are even now suing Corellium, a company which makes an iPhone emulator (which, again, has strong legal precedent), in order to prevent anyone but a handful of highly controlled people from being able to debug their platform.
Anyone can debug their platform, as researchers already have been doing. You just need to be approved for this specific program.
Apple's case against Corellium is about intellectual property, and it's frankly going to be a slam dunk in court. There's already established precedent from Apple v. Psystar, which involved an almost identical set of facts.
Not only is it a flat gesture, I think that with this they are actively gunning for companies like Corellium, and they will have a huge amount of control over security researchers who join the program. Disclose your bugs to us on our terms or have your access yanked? Pretty yikes. (And this is completely ignoring the rest of your comment, because it's pretty clear that they don't want consumers with debuggable iPhones.)
If anyone could buy these devices, then tons of scammers would buy them, install malware, and sell them to people as normal phones. They could then control banking apps and whatever else they wanted.
The move is in line with their reputation. Handing out a bunch of research devices which come with a catch is a great way to exert more control and influence over vulnerability reporting, and to skew the bargaining power when it comes to disclosure. I expect the motivation is largely a genuine desire to encourage security research that bolsters their platform, but it also stems in part from an increasing fear of PR ramifications outside their control.
"I'm handing out a bunch of water bottles; sign up here. The contents remain my property so when you're done please urinate back into them and return to me on demand."
If only open source software licenses could have predicted the level of vertically integrated control their software would end up being used under. Apple continually violates the good will of developers and puts forth their own bad will. I'm tempted to make up an 'MIT minus non-free platforms' agreement: if the OS can't be completely emulated and freely installed without restriction, then you can't use the library.
I'd like to see Apple survive having to recreate half their software from scratch.
Still, it seems better for overall security to have this program exist than to not have a program at all.
In fairness, no one is really developing for the .NET ecosystem without VS licenses either. I'm sure it's theoretically possible, but MS de facto runs the same scam.
From experience, I'd suggest that serious security researchers never, ever sign any agreement with the company whose products they are researching.
This particular case is also outrageous for other reasons:
1) They are only doing this now because Corellium has been selling virtually the same thing for a while already.
2) They are doing this to try and hurt Corellium financially, while they're already suing them in parallel.
3) Agreeing to their terms here effectively makes you a glorified Apple QA engineer. Only you don't get a salary but rather a bounty whenever you find a bug. For most people that would be way, way less money than just being employed wherever.
To whatever extent these devices are distributed, my guess is that they land predominantly in the hands of consultancies and security product firms, where the bulk of bread-and-butter security research is done. Those firms will all have their legal vet the actual contract (which this page is not).
And, of course, that's the case with Corellium as well; it's not like Hopper or Binja, a tool that random people just buy to kick the tires on. The front page of Corellium's site is a "contact sales" mailto; the term of art we use for that pricing plan is "if you have to ask...".
It's kind of humorous to imagine a researcher suing Apple under the California anti-gig law. It would be a factual question whether the researcher has sufficient control over their work under the agreement.
Apple would almost certainly win the suit, but I think there are reasonable odds the suit would survive an early motion to dismiss before factual discovery.
I read the terms of the SRD [1] to suggest that if you get one and use it, you aren't eligible for bounties on any bugs you find while using it. So, you are an entirely unpaid Apple QA engineer. Knowledge is its own reward, I guess.
[1] "If you use the SRD to find, test, validate, verify, or confirm a vulnerability, you must promptly report it to Apple and, if the bug is in third-party code, to the appropriate third party. If you didn’t use the SRD for any aspect of your work with a vulnerability, Apple strongly encourages (and rewards, through the Apple Security Bounty) that you report the vulnerability, but you are not required to do so."
Apple announced these devices last year at Black Hat.
>If you report a vulnerability affecting Apple products, Apple will provide you with a publication date... Until the publication date, you cannot discuss the vulnerability with others.
In addition to the mandatory bug reporting, Apple reserves the right to dictate a mandatory publication date to researchers. No more 90/180-day responsible disclosure deadline policy. I highly doubt any serious researcher would agree to work under such conditions.
> If you use the SRD to find, test, validate, verify, or confirm a vulnerability, you must promptly report it to Apple and, if the bug is in third-party code, to the appropriate third party. If you didn’t use the SRD for any aspect of your work with a vulnerability, Apple strongly encourages (and rewards, through the Apple Security Bounty) that you report the vulnerability, but you are not required to do so.
So vulnerabilities found through this program are not eligible for any reward. Then what would be the incentive to enroll (and accept liabilities like losing the device, Apple suspecting you of breach of contract, etc.)? Just bragging rights?
I think that is supposed to be read as "you must report any vulnerabilities, and they will be treated the same as any vulnerability you chose to voluntarily submit".
This is huge. Not as a security device, but if this were the normal permission model on all iPhones (e.g. owners of devices get root on the devices they own... like a normal general-purpose computing device) I could ditch my Android and my Mac and use an iPhone for everything.
I'm not saying this will ever happen, but in my mind this paints a bright picture of what the iPhone could be.
It's also a bit sobering, as I'm quite concerned Apple is actually pushing in the other direction with their shift from Intel to ARM.
I don't get the allure of this. As someone working in security: the phone is an extremely leaky thing and very bad for privacy to begin with. On top of that you want to remove all restrictions and make it a security nightmare too? I get that you want to install what you like. Sure, but I don't think the convenience is worth the security trade-off.
Honestly the Mac or desktop is where I enjoy the openness and do stuff I want to do. I would want to leave the phone untouched and as secure as possible.
I would like to hear your and others' take on it though.
I see it as the opposite: these iPhones are rented to you, and are clearly not what they want to "sell" to people. It's certainly a huge surprise that this exists at all, and I would certainly like more moves in the direction that you mentioned, but I am not sure that this is it.
Curious what you mean by “pushing the other direction.” I would say the opposite — it seems like everything running on ARM is exactly what it would take for your phone to run desktop programs.
I think there are other downsides to switching off of x86, but I think it strengthens the case for having one small portable computer that does everything. The question is whether that device will allow real work like macOS, or whether it'll be stuck as a fancy consumer-only device.
Maybe just go for a PinePhone instead? [1] Linux GUIs aren't fully mobile- and touchscreen-friendly yet, but they're getting there quickly, considering the project only started in November 2019.
In my opinion the PinePhone is the most promising device, as all upstream projects use it as an official developer device and upstream Linux has integrated support for it.
[1] https://wiki.pine64.org/index.php/PinePhone_Software_Release...
I wonder how much people are able to publish about the device. I'd expect not much, but it'd be nice to be able to compare an iPhone that was completely unlocked (or at least whatever that means to Apple) with whatever security they put on the ARM Macs, which are supposed to be "open for hobbyists". I'd expect that the ARM Macs have much of the same security stack (by default) that iOS devices have, given what they said in the WWDC talks, but maybe that's not the case.
Also, if you found an exploit on a research iPhone because you made use of entitlements that were Apple-only, I wonder if that'd be worth anything bounty-wise. Nobody can/should be able to write an app that'll get through App Store checks if they asked for PLZ_NO_SANDBOX_ILL_BE_GOOD or something (at least, that's what I thought before the whole Snapchat system call thing happened). But hypothetically the App Store review process is vulnerable to a bad actor inside Apple pushing an update to a big app that included malware, so I'd think that private entitlements shouldn't be available at all to binaries that didn't ship with the device or in a system update (unless some kind of hobbyist flag was flipped by the consumer). So I'd say that would be worth something, even if less than a more interesting exploit.
We’ll see how the shipping ARM Macs are “fused” when they come out, but my guess is that they will be more locked down than these devices: their OS will be more permissive but you will not have meaningful kernel debugging.
> Nobody can/should be able to write an app that'll get through App Store checks if they asked for PLZ_NO_SANDBOX_ILL_BE_GOOD or something (at least, that's what I thought before the whole Snapchat system call thing happened).
Snapchat (on iOS at least) is still subject to the app sandbox; no app on iOS has been granted an exception there to my knowledge. On macOS there are apps that are “grandfathered in” to not require the sandbox on the App Store, but new apps are supposed to have it. Due to the way the dynamic linker works, until recently it was possible to upload an app that could bypass the sandbox, but Apple has said they have fixed this. Some apps do have an exception to this as well, as the broad way they fixed one of the issues broke legitimate functionality in library loading. You can find those hardcoded in AMFI.kext; theoretically they could turn off the sandbox for themselves if they wanted.
I don't think this is technically possible.
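To make the sandbox-entitlement point above concrete, here is a minimal macOS Swift sketch (not from the thread) that reads an app's entitlements out of its code signature via the Security framework. The app path is just an illustrative placeholder, and the exact Swift spelling of the kSecCSSigningInformation flag is an assumption that may need adjusting; on macOS, being subject to the App Sandbox shows up in this dictionary as the com.apple.security.app-sandbox entitlement.

    import Foundation
    import Security

    // Sketch: read the entitlements baked into an app's code signature.
    func entitlements(forAppAt path: String) -> [String: Any]? {
        var staticCode: SecStaticCode?
        let url = URL(fileURLWithPath: path) as CFURL
        guard SecStaticCodeCreateWithPath(url, [], &staticCode) == errSecSuccess,
              let code = staticCode else { return nil }

        var info: CFDictionary?
        let flags = SecCSFlags(rawValue: kSecCSSigningInformation)
        guard SecCodeCopySigningInformation(code, flags, &info) == errSecSuccess,
              let dict = info as? [String: Any] else { return nil }

        return dict[kSecCodeInfoEntitlementsDict as String] as? [String: Any]
    }

    // Placeholder path for illustration; Safari is an Apple app that is sandboxed.
    if let ents = entitlements(forAppAt: "/Applications/Safari.app") {
        print("App Sandbox:", ents["com.apple.security.app-sandbox"] ?? "not present")
    }

As far as I know there is no public equivalent of this on iOS itself, which is part of why this kind of research access matters.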
I want to apply (not that I am sure Apple would consider me a security researcher), but I'm unsure how far they're going to go with
> If you use the SRD to find, test, validate, verify, or confirm a vulnerability, you must promptly report it to Apple and, if the bug is in third-party code, to the appropriate third party.
I mean, if I find a bug I might report it, but I know people who work on jailbreaks and stuff. If they tell me something, will I have to promptly report it? What if I find something on a non-SRD device? If I ever hypothetically "write a jailbreak", will Apple come after me even if I say I didn't use that device for it? I can get 90% of the benefit from using a device with a bootrom exploit, with none of the restrictions here…
I’m not a lawyer nor your lawyer, but I read that to mean any vulnerability you discover as a result of your research using the SRD, not any vulnerability you otherwise discover or of which you have knowledge.
>If they tell me something, will I have to promptly report it?
According to the terms, no, unless you use the SRD to verify the information or vulnerability.
>If I ever hypothetically "write a jailbreak", will Apple come after me even if I say I didn't use that device for it
I imagine that if you sold a jailbreak for $$$$, Apple would probably take a close look at the telemetry the device is sending. If you're confident in your ability to terminate all telemetry, keep good opsec, and defend yourself in court, then maybe that avenue would be feasible. It certainly wouldn't be ethical.
This involves an interesting set of assumptions about the plausibility of deep-cover hacking operations.
> If you use the SRD to find, test, validate, verify, or confirm a vulnerability, you must promptly report it to Apple
But let's say you pass their review, get a device, find a vulnerability, and don't report it. Then what? You're breaching the contract, but they have no way to know that, so there's no consequence?
As a long-time iOS user, this single aspect has made me look over the fence to the Android side the whole time. Not having full access to my own devices is insane. The poor security on the Android side has kept me away, but they've just recently been catching up enough that the scales are almost tilted.
Looks fairly cool, but I'll bet it isn't that popular with security boffins. I would be cautious about something that might not actually reflect a current "in the wild" device.
For example, if the OS isn't quite at the same level as the release OS, it could be an issue.
That said, this is not my field, and I am not qualified to offer much more than the vague speculation above.
I would expect it to be exactly the same except that you can debug it, basically. iPhones have a special fuse in them that prevents that from being done on production hardware, and these will presumably have that "unblown". If you want to test on production hardware you always can; this just lets you do research. (A metaphor might be that this is "a debug build with symbols", while normal iPhones are a "release build".)
Relatedly, another researcher reported earning $75k for webcam access vulnerabilities: https://www.ryanpickren.com/webcam-hacking
These payments are not uncommon.
I agree this is theater; no serious whitehat researcher would sign a deal forcing them to accept publication dates from the manufacturer. It won't be useful for its intended purpose.
On the bright side, it will be very useful for jailbreak research, and in a way those bugs _do_ get disclosed to Apple for them to subsequently fix. Not necessarily the way Apple wants, but it does shine daylight on their code.
These guys keep working exploits close to their chests and don't release them specifically so they can get a look at new hardware. That will no longer be necessary: you find an exploit, you can release it right away.
And on the gripping hand, it will also be used by malicious criminals and state actors to develop zero-days for various evil purposes.
It’s useless for jailbreak research because Apple will force you to shut up about it at least until they patch it, so now you can’t jailbreak.