> If you do not have a paid Apple Developer Account, please sign up at <our site>, pay the membership fee and send us the associated Developer ID.
You can’t make this stuff up! It’s completely and utterly ridiculous that even when you do get a bounty, they’ll take a cut from it. Leave aside the fact that some stingy hands found a way to devalue a person’s contributions to their platforms by offering a tiny fraction as a reward.
I didn’t understand the later parts of this post well, but the correspondence frequency and the way this has been handled are a mark of shame on all the information security folks working at Apple.
P.S.: I intentionally put <our site> instead of the actual link. That site doesn’t deserve to be linked in this context.
Did you just stop reading the post at the line you quoted, and came straight here to moan? They refund it. It's how they verify your identity and banking information.
In the second half of the post he suggests that remotely determining an iDevice passcode was possible with his rate limit bypass. Essentially, a salted version of the device's passcode is uploaded to the Apple server, and he could then bypass the rate limit to brute force it.
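To see why uploading even a salted passcode hash is dangerous, here is a minimal sketch. This is not Apple's actual scheme (the salt, hash function, and passcode below are all invented for illustration); the point is only that a six-digit space is so small that once rate limiting is out of the way, guessing is trivial.

```python
import hashlib
import os

# Hypothetical illustration: a salted hash of a 6-digit passcode.
# (Assumed scheme for the sketch; Apple's real one is not public in this detail.)
salt = os.urandom(16)
stored = hashlib.sha256(salt + b"483926").hexdigest()  # made-up passcode

# With no effective rate limit, all 10^6 candidates can be tried quickly.
found = next(
    f"{i:06d}"
    for i in range(10**6)
    if hashlib.sha256(salt + f"{i:06d}".encode()).hexdigest() == stored
)
print(found)  # 483926
```

Salting prevents precomputed tables, but it does nothing against an attacker who can simply try every candidate; only throttling (here, the HSM's rate limit) protects a space this small.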
Isn't this a backdoor that would enable passcode bypass like was requested for the San Bernardino and Pensacola shooters phones?
This vulnerability is a massive deal. With the passcode determined, there's nothing stopping bad actors from surreptitiously accessing your data.
After all this a $2,180,000,000,000 market cap company offers a reward of $18,000. What a disgrace!
Why are the security departments of every company so unfriendly? I feel like every blog post of a disclosed vulnerability has had some form or another of:
- no replies
- delayed replies
- vague replies
- playing down the vulnerability
- reducing the bounty
It’s like they still treat a white hat hacker as a risk, instead of cooperating with them. I don’t get the corporations. The white hat hacker is in this case your best friend. They proved their ethics already by reporting the vulnerability, and they already know the details because they found it. There is literally no reason to try to keep the white hat hacker in the dark, not update them, etc. The white hat hacker could have exploited the vulnerability already!
I found and reported a security issue to Microsoft. They responded nearly right away, and it was a real person. I was soon talking directly to the right team to explain it to. I provided the assets required to replicate it, and they jumped right on it and fixed it. They even told me which KB it was resolved in via a follow-up email. I didn’t want any kind of monetary reward - just happy to have it fixed. I regularly report security issues in open source projects too. Now, I also once tried to report a fairly serious issue that impacted iOS and Mac OS X (at the time), and you’d have thought (naively) they’d have been as interested and helpful as Microsoft were. Wrong. In fact, I never got past their first auto reply.
To be fair, there is also some whining involved on the "white hat hacker" side. Triaging reports is not an easy task.
The issue I have often witnessed is a higher-up wanting to play down a vulnerability, or even make it so nobody hears about it, because they fear it will impact the stock price. You should not forget that there is a financial impact...
Because they're not the only person the security team is dealing with?
Because they need to verify the claim and see what it really affects? If there are other repercussions?
Because you don't want to tell more than you need? (a security researcher should know that)
Vulnerability disclosure is the closest thing to a protection racket that is actually legal. So it's natural that people will be on the edge. Sure, it beats the alternative.
Confirmation bias? If the disclosure doesn't go as expected, that's an interesting event, which is likely to draw more attention than if everything was all right.
Security departments are there to protect against threats to the company. Bad publicity is a threat for companies. So never expect some company official to admit that there is a glaring hole in their security until they are very, very sure that they have fixed it. Fixing a hole like this can take a lot of time because you need a lot of testing. Apple is not going to tell the rest of the world exactly what they are doing, because of possible bad publicity. When they have fixed the problem, there are very good reasons for Apple to downplay its importance, which implies that they should pay some money but not too much.
The signal-to-noise ratio is very poor, especially for large companies running bounty programs. In addition to honest and diligent researchers, there are also scam artists, script kiddies, and soccer moms sending in false reports saying “my ten-year-old found this bug in FaceTime, pay up!”
They get the vuln in all cases if you are talking to them: either for free if you post it to f-d, or for some reduced rate (and hushed up) if you agree to the NDA to get the bounty.
The people selling to rogue states and TAO aren't talking to the vendor in the first place.
Bug bounty programs are, for the most part, bullshit.
How sophisticated the attack/exploit may be is not the point here. The salient point is that he demonstrated complete iCloud account takeover, and Apple lists that as a $100k bounty reward, but are only offering him $18k. Please correct me if there is something I'm missing.
The sophistication is relevant because he proved that the vulnerability he originally reported could take over any iCloud account, but he wasn't able to demonstrate that himself because it was patched between the time he first reported it and eight months later when he tried it again. Apple then seems to refuse to acknowledge this and offers only $18K vs. $350K.
We see, time and time again, password recovery systems getting exploited.
Aside from Apple's shitty response to the brute force vulnerabilities discovered here, I'm also annoyed that Apple isn't nearly as paranoid as they ought to be about the security of what is likely to be the #1 hacker target in their system.
Instead of using a 6 digit 2 factor key, they could easily use a 12 character alphanumeric key. That's (20 + 10) ** 12 / (10 ** 6) = 500 billion times harder to brute force. And honestly, is having to type 12 characters such a burden for the exceptional case of a password reset? I don't think so.
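The arithmetic holds for the 30-symbol alphabet implied by "(20 + 10)" (presumably 20 unambiguous letters plus 10 digits); a quick check:

```python
# Compare the search space of a 12-character key over a 30-symbol
# alphabet against a 6-digit numeric code, as in the comment above.
six_digit_codes = 10 ** 6
twelve_char_keys = 30 ** 12
ratio = twelve_char_keys // six_digit_codes
print(ratio)  # 531441000000, i.e. roughly 500 billion times harder
```

With the full 36-symbol case-insensitive alphanumeric set, the factor would be closer to 4.7 trillion.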
Building secure systems is hard, I get that. I've made dumb mistakes myself. But Apple's iCloud contains people's locations, their photos, where they live, their email, notes and other secrets, and iCloud also circumvents all Apple's on-device encryption. It's fundamentally a system that sacrifices security for convenience, and it really sucks that all the real and serious security efforts made by other teams at Apple are negated by iCloud.
There are many people, myself included, who use password managers and who will never ever lose their full disk encryption keys, passwords, or recovery keys. I don't want backdoors. I don't want forgot-my-password systems. I want to opt out of all of it.
Apple is right: if you log in with an iPhone or another iOS device, the device is upgraded to not use SMS verification anymore.
While the author of the article has found something, it's nowhere near as serious as they think. In the second part of the article, with the on-device codes vs. SMS where 29/30 requests didn't work, it was most likely that way before they found the vulnerability.
It is upsetting not to get the $100k, but the post comes off a bit as lashing out over that.
I agree. This is a sorry situation to see, and neither of the parties seems correct here. $18k seems a little lowballing for a vulnerability that does actually work on a subset of iCloud accounts and provides a method to bypass 2FA. At the same time, the person lashing out at Apple over its handling seems quite unprofessional. In a perfect world, an issue like this would be solved with better communication by both parties.
To me this looks like a case of someone noticing that this dude was based in India, doing a cost-of-living/Apple US-India salary comparison, coming up with a ratio of ~1:5 (for $100K) or ~1:14 (for $250K), and deciding that he would be happy with $18K.
Applying cost-of-living indexation to the salary of a knowledge worker is flawed economic thinking that some keep perpetuating.
Firstly, knowledge workers should be paid market competitive rates – and the definition of market in a digital economy is global.
Secondly, the cost-of-living index for a software engineer in India vs. the US is not that different - the costs of electronics, housing, clothing, accessories, vacation/travel etc. are all the same.
In some cases, things are more expensive due to global trade economics – for example cars/bikes, fuel, travel, luxury goods etc are way more expensive in India than US.
Food and housekeeping were assumed to be cheaper - but that was based on the flawed logic that someone is cooking food at home for you for free (an unemployed family member) and that you are exploiting some poor person for housekeeping without caring about their healthcare or their children's education (these are basically un-costed externalities that keep poor families poor).
In reality, today's generation of software engineers have to cook their own food (costs time which is learning opportunity cost which is nothing but money) or hire professional help (costs money) or eat catered food (which is possible due to app based delivery services in major cities like Bangalore, costs money) every day.
Besides, no matter where you are living, only a small part of your paycheck goes towards non-discretionary expenses like basic food and basic shelter.
A large part of your paycheck should be going towards future savings/investments and discretionary spending like leisure/travel and enhancing your quality of life through better nutrition/healthcare, continued education etc. None of these things cost less in India than US. Expecting Indian engineers to do this any less than US engineers is just another form of discrimination.
This assumes a person doesn't travel. Often going on holiday to Europe will be more expensive for a person in India than in the US.
This kind of thinking in Apple has racist connotations.
Do companies want people to ignore responsible disclosure and/or sell these vulnerabilities on grey/black markets?
I suspect they don't care in the end. The privacy/security stories are more there for marketing. End consumers won't know if the technicalities actually hold up in practice, so there's little incentive to run a tight and honest bounty program.
I still don't understand what the author means with his exploits (also against Instagram[0]) involving a race condition.
A race condition means that something unexpected happens when you do something concurrently. Not "it gives the same result as when done one by one, but we're doing it faster" (which is what the author appears to be doing). It has to be different from the result any sequential operation could achieve.
They used a few thousand IPs to hammer an endpoint, staying below the per-IP rate limit. Did it matter that this was done during the same time from all the IPs? If yes, it's not clear from the blog post how/why it mattered. If no (i.e. the result is the same when first completing all requests from the first IP, then the second, etc.), then it is not a race condition - just concurrency.
[0]: https://thezerohack.com/hack-any-instagram
From my understanding, there is a hard limit on how many attempts are allowed for entering the code for a specific account, regardless of the IP address. You can bypass that limit by sending all the attempts concurrently at once. The multiple IP addresses were used to bypass a different limit (a limit on concurrent connections).
A slightly more detailed answer than the other two you got:
It's likely a well-designed password (or similar) validation endpoint will both limit attempts per IP and per user, to avoid exactly the attack you describe.
This limit probably isn't permanent (though I think the design of Apple's HSM may be different here; too many attempts may lock or delete user data entirely?). Rather, it would be something like "allow 5 attempts per hour per user."
So, first, even if the attacker only exploits concurrency to speed things up--which implies the limits are only per IP--they can conduct attacks which are otherwise infeasible. (E.g. with 10k IPs at 5 attempts per hour per IP, they can brute force a six digit PIN in about ten hours on average, as opposed to over a decade from a single IP.)
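A quick sanity check of that estimate, under the stated assumptions (6-digit PIN, 10,000 IPs, 5 attempts per hour per IP, limits enforced per IP only):

```python
# Expected brute-force time for a 6-digit PIN under per-IP-only limits.
codes = 10 ** 6
avg_attempts = codes / 2                      # expected tries before a hit
distributed_hours = avg_attempts / (10_000 * 5)   # 10k IPs, 5 attempts/hr each
single_ip_years = avg_attempts / 5 / (24 * 365)   # one IP at 5 attempts/hr
print(distributed_hours, single_ip_years)     # 10.0 hours vs roughly 11 years
```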
But second, what I think the attacker is describing is actually worse: he's saying that the quota "counter" is updated with a read-modify-write pattern that's not safe for concurrent HTTP requests, so that you might have something like:
[request 0] read counter value = 0
[request 1] read counter value = 0
[request 0] set counter value = 1
[request 1] set counter value = 1
In this interleaving, one of the increments is simply lost; with enough concurrent requests, the counter barely advances at all.
It's easy to imagine people making this mistake when they store the counter in something that does not support transactional updates.
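A minimal sketch of that lost-update pattern, with a lock-based fix alongside. Plain Python threads stand in for concurrent HTTP requests, and the sleep just widens the race window:

```python
import threading
import time

counter = 0        # naive quota counter: unsafe read-modify-write
safe_counter = 0   # same counter, but guarded by a lock
lock = threading.Lock()

def attempt():
    global counter, safe_counter
    value = counter          # read
    time.sleep(0.05)         # window in which other requests read the same value
    counter = value + 1      # write: clobbers any concurrent increment
    with lock:               # atomic read-modify-write
        safe_counter += 1

threads = [threading.Thread(target=attempt) for _ in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter, safe_counter)  # counter ends below 5 (lost updates); safe_counter == 5
```

In a real service the counter lives in shared storage, so the equivalent fix is an atomic server-side increment (e.g. Redis INCR) or a transactional/compare-and-swap update, not an in-process lock.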
While it feels very HN to hate on any company that didn't do security "right", I think what Apple says makes total sense here. The author claims that, under his assumptions, his exploit would have worked and affected a majority of iCloud accounts, but it wouldn't have. You can't claim that you found a vulnerability without actually demonstrating it.
Key takeaway from Apple:
> They concluded that the only way to brute force the passcode is through brute forcing the Apple device which is not possible due to the local system rate limits.
The author did not understand that sentence accurately:
> There is very bleak chance for this endpoint to be not vulnerable to race hazard before my report because all the other endpoints I tested was vulnerable – SMS code validation, email code validation, two factor authentication, password validation was all vulnerable.
How I interpret that sentence is that, while other forms of verification are done on the server and thus subject to the vulnerability, the passcode verification is done on the device. When you send a passcode from another device, it is sent to the server and then routed to the device storing the passcode to perform the verification. Apple's servers do not store the hash throughout the process, and no form of brute force would have worked against the server. Instead, they are routed to the device storing the passcode, say iPhone, and the iPhone's HSM performs the verification. It's the HSM doing the rate limit here, and thus it's not subject to the vulnerability.
This isn't directly related to the article, but does anyone know of any good resources or best practices on how to report a vulnerability?
A few days ago I discovered a pretty major vulnerability on a certain website, but security isn't the focus of my day job and I wasn't sure where to begin and what to keep in mind. The author of this article had some problems with the disclosure process; maybe there are best practices that could avoid these.
I found the OWASP cheat sheet [0] really useful, but other than that, I didn't find too many other relevant resources.
The vulnerability I reported has now been fixed, but I'm still pondering whether to publish the details or if it would just stir up unnecessary trouble. So it would be good to have resources that will help inform my decision.
I think a lot of people who want to report vulnerabilities probably feel like they don't know what they're doing, and they probably don't feel very well supported through the disclosure process. At least, that's my experience.
[0] https://cheatsheetseries.owasp.org/cheatsheets/Vulnerability...
Just 6 blog posts prior, the author was writing about "How To Create A Blog On Bluehost In 3 Simple Steps".[1] While admittedly that was 3 years ago, still quite an impressive feat of leveling up!
[1] https://thezerohack.com/create-blog-bluehost
Good finds :). Most of the 2-factor auth and password reset flows I came across while consulting had bugs. One of the more fun findings: an authenticated, encrypted username was used for password resets. Another part of the application used the same encryption key and acted as an encryption oracle. Copy the ciphertext for the target username into the password reset link, and voila.
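A toy reconstruction of that bug class. Everything here is invented (the domain, the function names, and the throwaway stream cipher standing in for whatever the real app used); the point is only the key reuse: one feature encrypts usernames for reset links, another encrypts attacker-chosen data with the same key, so the second feature forges valid reset tokens.

```python
import base64
import hashlib
import os

SECRET_KEY = os.urandom(16)  # one key shared by two features: the design flaw

def toy_encrypt(plaintext: bytes) -> bytes:
    # Deterministic toy stream cipher for illustration only - NOT real crypto.
    stream = hashlib.sha256(SECRET_KEY).digest() * (len(plaintext) // 32 + 1)
    return bytes(p ^ k for p, k in zip(plaintext, stream))

# Feature A: password-reset links embed the encrypted target username.
def make_reset_link(username: str) -> str:
    token = base64.urlsafe_b64encode(toy_encrypt(username.encode())).decode()
    return "https://example.test/reset?u=" + token

# Feature B: an unrelated endpoint encrypts user-supplied data with the
# same key, acting as an encryption oracle.
def oracle_encrypt(data: str) -> bytes:
    return toy_encrypt(data.encode())

# Attack: ask the oracle to encrypt "admin", then reuse the ciphertext
# as the reset token for the admin account.
forged_token = base64.urlsafe_b64encode(oracle_encrypt("admin")).decode()
print(make_reset_link("admin").endswith(forged_token))  # True
```

The usual fixes are per-purpose keys and authenticated tokens bound to their context (e.g. including the purpose in the authenticated data), so ciphertext from one feature is rejected by another.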
I mean honestly if money is what you're after, you should go to the dark market directly. You don't owe these corporations shit. And their consistent haggling with people who responsibly disclose vulns is proof of that.
It seems that the author found a vulnerability in the iCloud password reset that could have potentially allowed you to gain not only access to an iCloud account but also the passcode of a device. The reason why I think Apple decided not to give him a combined $350,000 bounty is that, from my understanding, he didn't initially realise the severity of what he found, so he never exploited it or provided a proof of concept, and his bug bounty claim was limited to what he reported initially; Apple then patched it (not a coincidence, really) before anyone could do anything more. As a result, he now wants the full bounty, but Apple has decided on an arbitrary number instead. It's easy to see why. Apple doesn't want the bad PR from the fact that some random enthusiast found a way to compromise both the iCloud account and the passcode of an iPhone without even having the target's physical device (insert Hollywood movie scene), and from the fact that Apple, with all their might, may be vulnerable to something like this. On the other hand, the author is pissed that he did not fully exploit it in the first place and claim the full bounty by showing a proof of concept, and instead tried to be the good guy.
It's dangerous to get this kind of publicity. What happens the next time a security researcher finds a way to bypass Apple? They'll find this post and ask themselves "who is going to pay me more, Apple or the NSA?".
Not every other program. I have seen apple bug bounty reports of others completed in a month or so. But I don't know what took so long for them in my case.
All Apple have done here is ensure that if an account take over or information disclosure vulnerability is found in the future, nobody will trust they will get paid the expected bug bounty.
Congrats, Apple. You just helped increase the chance that researchers sell very bad exploits to state-sanctioned attackers, and you won’t ever know about it.
rmkrmk | 4 years ago:
Seems they'll refund the paid account, still a weird thing to do.
lima | 4 years ago:
It can be hard to separate signal from noise.
xvector | 4 years ago:
Disgusting behavior by Apple.
alexashka | 4 years ago:
Well done Apple.