Seems the only way to trust the companies in such situations is to exploit the vulnerabilities from multiple, unconnectable devices and locations, over as long a period as possible. If the company cannot list all of the attacks, you know they're bullshitting.
The one case (and about the only case) I can think of where they could make such a claim is:
If they have a log of all JWTs issued that records which user requested and which email in JWT, then they can retroactively check if they issued any (user, email) pair that they shouldn't have.
Then they can assert that there was no misuse, if they only found this researcher's attempt.
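If such a log existed, the retroactive audit is a simple join. A minimal sketch, assuming a log of (requesting_user, email_in_jwt) pairs recorded at issuance time and a table of verified email ownership (all names here are illustrative, not Apple's actual schema):

```python
# Hypothetical retroactive audit: flag every JWT whose embedded email was not
# owned by the account that requested it. Field names are illustrative only.
def audit_issuance_log(issuance_log, owned_emails):
    """issuance_log: iterable of (user_id, email) pairs recorded at issue time.
    owned_emails: dict mapping user_id -> set of verified email addresses."""
    suspicious = []
    for user_id, email in issuance_log:
        if email not in owned_emails.get(user_id, set()):
            suspicious.append((user_id, email))
    return suspicious
```

The "no misuse" claim is only as strong as this log: without the (user, email) pair recorded at issuance, there is nothing to audit.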
It depends what the fix was. If the fix was just to add a validation check to the POST endpoint to validate that the logged in user session matched the payload (and session data was comprehensively logged/stored), this may be verifiable.
There are obviously lots of hypotheticals for which this might not be verifiable.
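The fix described above (validate that the logged-in session matches the payload) might look something like this sketch. The function and payload shape are hypothetical, not Apple's actual code:

```python
# Sketch of the missing server-side check: reject the request unless the email
# in the POST payload belongs to the authenticated session's user.
# Names (payload shape, exception choice) are illustrative.
def validate_email_request(session_user_emails, payload):
    requested = payload.get("email")
    if requested is None or requested not in session_user_emails:
        raise PermissionError("email not owned by authenticated user")
    return requested
```

The vulnerability, as described in the writeup, amounts to this check being absent.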
It’s no secret that Apple isn’t great at web services, and they have a strong incentive not to keep user data. I could imagine a world where they just didn’t have enough logs to properly investigate and validate it.
I agree, especially given how many developer “eyes” were on this from having to integrate the log in with Apple flow into their apps.
Just as a first-hand anecdote to back this up, a dev at my former company which did a mix of software dev and security consulting found a much more complex security issue with Apple Pay within the first hour of starting to implement the feature for a client and engaging with the relevant docs.
How did no one else notice this? The only thing I can think of is the “hidden in plain sight” thing? Or maybe the redacted URL endpoint here was not obvious?
I'd also like the exact wording of their claim. "There is no evidence of misuse or account compromise" is what I would expect them to say, as "There was no misuse or account compromise" likely opens them up to legal repercussions if that isn't 100% accurate.
> The Sign in with Apple works similarly to OAuth 2.0.
> similarly
I understand why they wanted to modify OAuth 2.0, but departing from a spec is a very risky move.
> $100,000
That was a good bounty. Appropriate given scope and impact. But it would have been a lot cheaper to offer a pre-release bounty program. We (Remind) occasionally add unreleased features to our bounty program with some extra incentive to explore (e.g. "Any submissions related to new feature X will automatically be considered High severity for the next two weeks"). Getting some eyeballs on it while we're wrapping up QA means we're better prepared for public launch.
This particular bug is fairly run-of-the-mill for an experienced researcher to find. The vast majority of bug bounty submissions I see are simple "replay requests but change IDs/emails/etc". This absolutely would have been caught in a pre-release bounty program.
> I understand why they wanted to modify OAuth 2.0, but departing from a spec is a very risky move.
The token described in this disclosure is an OpenID Connect 1.0 Token. OIDC is a state of the art AuthN protocol that supersets OAuth with additional security controls. It's used by Google, Facebook and Twitch amongst others.
I'd do more analysis, but the author leaves off the most important part here (not sure why): https://openid.net/specs/openid-connect-core-1_0.html#IDToke...
Apple supposedly marks certain beta builds with a bounty multiplier. I say supposedly because like their "research iPhones" they mentioned it in a presentation once and I never heard about it again.
How is this something that can happen? I mean, the only responsibility of an "authentication" endpoint is to release a JWT authenticating the current user.
At least from the writeup, the bug seems so simple that it is unbelievable that it could have passed code review and testing.
I suspect things were maybe not as simple as explained here, otherwise this is at the same incompetence level as storing passwords in plaintext :O.
My guess is that it has to do with that private relay because OAuth isn't too complex by itself. During the OAuth flow they probably collect the user preference, (if needed) go out to the relay service and get a generated email, and POST back to their own service with the preferred email to use in the token.
If that's it, it's about as bad as doing password authentication in JavaScript and passing authenticated=true as a request parameter.
Edit: Looking at the OAuth picture in the article, my guess would be like adding a step in between 1 and 2 where the server says "what email address do you want here" and the (client on the) user side is responsible for interacting with the email relay service and posting back with a preferred email address. Or the server does it but POSTS back to the same endpoint which means the user could just include whatever they want right from the start.
The only thing that makes me think I might not be right is that doing it like that is just way too dumb.
AND I'm guessing a bunch of Apple services probably use OAuth amongst themselves, so this might be the worst authentication bug of the decade. The $100k is a nice payday for the researcher, but I bet the scope of the damage that could have been done was MASSIVE.
Edit 2: I still don't understand why the token wouldn't mainly be linked to a subject that's a user id. Isn't 'sub' the main identifier in a JWT? Maybe it's just been too long and I don't remember right.
This is basically bad coding. I have never used an OAuth system, but you are supposed to validate just the token, not any additional incoming data; rule number one of distributed systems is "never trust the client".
They basically made a huge fundamental design mistake.
Sometimes code review is just "Please change the name of this function" and testing is just testing the positive cases, not the negative ones. Yes, even in companies like Apple and Google.
Wow. That's almost inexcusable, especially due to the requirement of forcing iOS apps to implement this. If they didn't extend the window (from originally April 2020 -> July 2020) so many more apps would have been totally exploitable from this.
After this, they should remove the requirement of Apple Sign in. How do you require an app to implement this with such a ridiculous zero day?
I’m of the mind that just about any security bug is “excusable” if it passed a good faith effort by a qualified security audit team and the development process is in place to minimize such incidents.
The problem I have is that I can’t tell what their processes are beyond the generic wording on this page[1]
[1] support.apple.com/guide/security/introduction-seccd5016d31/web
No, it's completely inexcusable. There should never be such a simple, major security vulnerability like this. Overlooking something this basic is incompetence.
This is an amazing bug; I am indeed surprised this happened in such a critical protocol. My guess is that nobody clearly specified the protocol, and anyone would have been able to catch this in an abstract English spec.
If this is not the issue, then the implementation might be too complex for people to compare it with the spec (gap between the theory and the practice). I would be extremely interested in a post mortem from Apple.
I have a few follow up questions.
1. Seeing how simple the first JWT request is, how can Apple actually authenticate the user at this point?
2. If Apple does not authenticate the user for the first request, how can they check that this bug wasn’t exploited?
3. Can anybody explain what this payload is?
    {
      "iss": "https://appleid.apple.com",
      "aud": "com.XXXX.weblogin",
      "exp": 158XXXXXXX,
      "iat": 158XXXXXXX,
      "sub": "XXXX.XXXXX.XXXX",
      "c_hash": "FJXwx9EHQqXXXXXXXX",
      "email": "contact@bhavukjain.com", // or "XXXXX@privaterelay.appleid.com"
      "email_verified": "true",
      "auth_time": 158XXXXXXX,
      "nonce_supported": true
    }
My guess is that c_hash is the hash of the whole payload and it is kept server side.
The bug is not in the protocol. The bug is in the extra value-add Apple was doing by letting the user choose any other email address.
1. The account take over happens on the third party sites that use the apple login.
2. This seems like a product request to add value for users by providing a relay email address of the user's choice.
From the report- `I found I could request JWTs for any Email ID from Apple and when the signature of these tokens was verified using Apple’s public key, they showed as valid.`
It's not a bug in the protocol or the security algorithm. A lock by itself does not provide any security if it's not put in the right place.
It's exploitable through apple's web-based login flow used by web sites and Android devices. There are multiple round trips between the user and apple, and state is passed over the wire. The state could be modified at a certain point in the flow to cause the final result (the JWT) to be compromised. The flow is still the same, they seem to have fixed it entirely by adding checks server-side.
All your questions can be answered by reading “Sign in with Apple REST API” [1][2]:
1. User clicks or touches the “Sign in with Apple” button
2. App or website redirects the user to Apple’s authentication service with some information in the URL including the application ID (aka. OAuth Client ID), Redirect URL, scopes (aka. permissions) and an optional state parameter
3. User types their username and password and if correct Apple redirects them back to the “Redirect URL” with an identity token, authorization code, and user identifier to your app
4. The identity token is a JSON Web Token (JWT) and contains the following claims:
• iss: The issuer-registered claim key, which has the value https://appleid.apple.com.
• sub: The unique identifier for the user.
• aud: Your client_id in your Apple Developer account.
• exp: The expiry time for the token. This value is typically set to five minutes.
• iat: The time the token was issued.
• nonce: A String value used to associate a client session and an ID token. This value is used to mitigate replay attacks and is present only if passed during the authorization request.
• nonce_supported: A Boolean value that indicates whether the transaction is on a nonce-supported platform. If you sent a nonce in the authorization request but do not see the nonce claim in the ID token, check this claim to determine how to proceed. If this claim returns true you should treat nonce as mandatory and fail the transaction; otherwise, you can proceed treating the nonce as optional.
• email: The user's email address.
• email_verified: A Boolean value that indicates whether the service has verified the email. The value of this claim is always true because the servers only return verified email addresses.
• c_hash: Required when using the Hybrid Flow. Code hash value is the base64url encoding of the left-most half of the hash of the octets of the ASCII representation of the code value, where the hash algorithm used is the hash algorithm used in the alg Header Parameter of the ID Token's JOSE Header. For instance, if the alg is HS512, hash the code value with SHA-512, then take the left-most 256 bits and base64url encode them. The c_hash value is a case sensitive string.
[1] https://developer.apple.com/documentation/sign_in_with_apple...
[2] https://developer.apple.com/documentation/sign_in_with_apple...
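A relying party turns the claim list above into concrete checks. A stdlib-only sketch, assuming SHA-256 for the c_hash (in reality the hash is chosen from the token's alg header) and skipping signature verification entirely:

```python
import base64, hashlib, time

def b64url(data):
    # Base64url without padding, as used in JWTs.
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode("ascii")

def compute_c_hash(code):
    # Left-most half of the hash of the ASCII code value, base64url-encoded.
    # SHA-256 is assumed here for illustration.
    digest = hashlib.sha256(code.encode("ascii")).digest()
    return b64url(digest[: len(digest) // 2])

def validate_claims(claims, client_id, code, nonce=None):
    assert claims["iss"] == "https://appleid.apple.com", "wrong issuer"
    assert claims["aud"] == client_id, "token issued for a different client_id"
    assert claims["exp"] > time.time(), "token expired"
    if nonce is not None:
        if "nonce" in claims:
            assert claims["nonce"] == nonce, "nonce mismatch (possible replay)"
        else:
            # Per the docs above: missing nonce is fatal only on
            # nonce-supported platforms.
            assert not claims.get("nonce_supported"), "nonce missing"
    if "c_hash" in claims:
        assert claims["c_hash"] == compute_c_hash(code), "c_hash mismatch"
```

Note that every one of these checks passed for the forged tokens in this bug, because Apple genuinely signed them; the failure was at issuance, not validation.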
Let's start with the fact that Apple is forcing people to use an E-mail address as a user ID. That's just straight-up stupid.
How many members of the public think that they have to use their E-mail account password as their password for Apple ID and every other amateur-hour site that enforces this dumb rule?
MILLIONS. I would bet a decent amount of money on it. So if any one of these sites is hacked and the user database is compromised, all of the user's Web log-ins that have this policy are wide open.
Then there's the simple fact that everyone's E-mail address is on thousands of spammers' lists. A simple brute-force attack using the top 100 passwords is also going to yield quite a trove, I'd imagine.
Apple IDs didn't originally have to be E-mail addresses. They're going backward.
I think the write up is so short because the bug is so simple. Send a POST to appleid.apple.com with an email address of your choice, and get back an auth token for that user. Use the auth token to log-in as that user. It's that simple.
It seems low on details because the exploit was incredibly simple. AFAICT you didn't have to do anything special to get the signed token, they just gave it out.
> Here on passing any email, Apple generated a valid JWT (id_token) for that particular Email ID.
Based on the information given, I don't know if you can really impersonate people. Rather, you can give an arbitrary email address and have it represented as valid, _against your account_.
You need an additional bug on the relying party for this to allow someone to gain access - that they associate the apple account based on the unstable email address claim rather than the stable "sub" claim.
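A relying party that keys accounts on the stable "sub" claim is immune to a forged email claim. A minimal sketch; the account-store shape is hypothetical:

```python
# Link third-party accounts to Apple's stable `sub` identifier, never to the
# mutable (and, in this bug, forgeable) `email` claim.
# `accounts_by_sub` is an illustrative in-memory store.
def resolve_account(accounts_by_sub, id_token_claims):
    sub = id_token_claims["sub"]          # stable, Apple-assigned user id
    account = accounts_by_sub.get(sub)
    if account is None:
        # First login: create the account; email is recorded only as metadata.
        account = {"sub": sub, "email": id_token_claims.get("email")}
        accounts_by_sub[sub] = account
    return account
```

With this design, a token carrying someone else's email still resolves to the attacker's own account, which is exactly the "additional bug" dependency described above.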
Wow, I'm so glad that apple forced me to implement this broken garbage into my apps!
For those not aware, some time ago Apple decided it would be a good idea to develop their own sign-in system, and then force all apps on their store (that already support e.g. Google Account login) to implement it.
So they brought a huge amount of additional complexity into a large number of apps, and then they fucked up security. Thank you Apple!
Actually, developers are only forced to implement it _if_ they support logging in with other social auths.
A big problem of many apps is that they only had a "log in with google"/"log in with facebook" button, which is very problematic for people who have neither.
On Android this is more acceptable since you need a Google account for the OS itself anyway.
Just want to mention something about the id_token provided. I'm on my phone, so I don't have apples implementation handy, but in OIDC, the relying party (Spotify for example) is supposed to use the id_token to verify the user that is authenticated, specifically the sub claim in the jwt id_token.
The Apple endpoint returned an Apple-signed JWT with an email of the attacker's choice in the email claim. It didn't even have to be an email associated with an Apple ID. Relying parties verify the id_token against Apple's public key, and that is Apple's guarantee that the email is correct.
So the way I believe it works is that the vulnerability allowed any valid email to be used to generate an Apple-signed JWT. Server-side validation would be unable to tell that the token wasn’t issued on behalf of the user, since Apple actually signed it.
True, email shouldn't be used when you can identify by unique ID. I doubt the bug was even exploitable with most apps. Apple just paid orders of magnitude more than its severity warranted.
Wow, I'm in shock. How could Apple let this one slip in? When I was a junior fullstack I had to design a very similar system, and this was one of the very basic checks that I had in mind back then. I don't know how anyone could excuse this very basic bug in such a critical service.
Excellent writeup! About 4 months ago, I wrote a comment[0] on HN telling folks how Apple simply omitted the server-side validations from their WWDC videos. And given the lack of good documentation at the time, WWDC videos were what most developers were following.
Even then, the only "security" that developers had was that the attacker wouldn't know the victim's Apple userId easily. With this zero-day attack, it would have been trivial for many apps to get taken over.
Your original post has several replies explaining why this is not a security issue. The token you ultimately get is a signed concatenation of 3 base64-encoded fields, and unless you decided to manually separate and decode these without verification (instead of doing the easy thing and just using a standard OIDC library), you would not have any user data that could ultimately result in a security issue.
After observing its endless flow of security and reliability bugs, I'm beginning to think the decline of Apple's overall software quality over the past several years is more of a systemic problem.
Looks like Federighi agrees with this diagnosis and is trying to improve the overall development process, but I'm not sure it can really be improved without changing the famously secretive corporate culture. At Apple's level of software complexity, you cannot really design and write quality software without involving many experts' eyes. And my friends at Apple have complained to me about how hard it is to get the high-level context of their work and to collaborate across teams.
And IMO, this systemic degradation of software quality coincides with Bertrand's departure; he had allowed a relatively open culture, at least within Apple's software division. I'm not an insider, so this is just a pure guess though.
If Apple launched a product called Apple Zero Day - like haveibeenpwned maybe - Then the top search results for apple exploits would be an advertisement :)
The write-up is not very clear in my opinion.
The graph seems to show that there are 3 API calls (maybe there are more API calls in reality?).
And if I understand this correctly, the issue is in the first API call, where the server does not validate whether the requester owns the Email address in the request.
What confuses me is where the "decoded JWT’s payload" comes from. Is it coming from a different API call, or is it somewhere in the response?
"A lot of developers have integrated Sign in with Apple since it is mandatory for applications that support other social logins" -- How pathetic Apple is to force their own service on developers!!
Why are you surprised? They force you to use the App Store. They force you to process payments through their systems. They force you to comply with many things. How is this any different?
Wow that’s a really simple bug. Kudos to the OP to even try that. Most people would just look elsewhere thinking Apple of all companies would get such a basic thing right.
Am I understanding the article right: the endpoint would accept any email address and generate a valid JWT without verifying the caller owned the email address?
If so, what extra validation did Apple add to patch the bug?
With all those high-profile third parties using Apple ID, what would happen if somebody stole/deleted/damaged my data/assets on Dropbox/Spotify/Airbnb/...?
Would I sue the provider who would sue Apple? But does Apple provide any guarantees to the relying parties? And if not and the only way is to depend on the reputation when choosing the ID providers you want to support, how would anyone want to support Apple ID after this? And could they not use it if Apple forces them to...?
I always have a minute of nervousness while I read these security posts hoping that the bottom will say it's already been fixed with XYZ security team. Glad it's fixed w/ Apple already. The "they still haven't fixed it" or "still haven't responded" ones are scary.
Some people are commenting that this is overpriced, but I don't think so, even considering the INR value. The bug is quite critical considering how large the Mac and iOS ecosystems are.
To me this seems like a poor protocol design that created an opportunity for an implementation error, and that opportunity was seized.
In the initial authorization request rather than passing a string with an email address, the caller could pass a boolean `usePrivateRelay`. If true generate a custom address for the third party, if false use the email address on file.
With that one change the implementer no longer has the opportunity to forget to validate the provided email address, and the vuln is impossible.
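The suggestion above in sketch form. With a boolean, the server derives both possible addresses from the authenticated account, so there is no client-supplied email to forget to validate (function and field names are hypothetical):

```python
# Instead of accepting an email string from the client, accept only a boolean.
# Both possible addresses are derived server-side from the authenticated
# account, so an attacker has nothing to substitute.
def email_for_token(account, use_private_relay):
    if use_private_relay:
        return generate_relay_address(account)   # server-generated relay alias
    return account["verified_email"]             # the address on file

def generate_relay_address(account):
    # Illustrative stand-in for Apple's private relay service.
    return f"{account['user_id']}@privaterelay.appleid.com"
```

This is "make invalid states unrepresentable" applied to an API surface: the forgeable parameter simply no longer exists.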
There are a few other issues with how websites implemented it. For example, at work, appleid and a few other Apple domains are banned (they wanted to block iTunes streaming, etc.). When I tried to log in to Pocket (Read It Later) [Web Version], this blocking caused the whole login form to be hidden once the page finished loading, and I couldn't even log in with my username and password.
It’s unclear to me exactly where the vulnerability is given the authors description in “technical details”. Does this occur in the implicit flow as well as the code flow? Is the token request unauthenticated? This seems highly unlikely. Or does Sign In With Apple deviate from the Open ID specification in a way that I’m unfamiliar with?
The author found out that the HTTP endpoint used to generate a JWT token would accept any email and respond with a valid JWT token for that email address.
He could literally send a POST request to that endpoint with arbitrary email addresses and get a valid JWT.
This is clearly explained under the "BUG" section.
What level of incompetence will it take for the government to step in and create some laws around companies exposing users' private data because 'oops, we don't want to pay security experts what they're actually worth, even though we have billions sitting in bank accounts doing nothing'.
I'm still dreaming about a world where OpenID is the norm. Just think if Apple forced all apps to use that instead, that would be a great move for privacy and security.
But no. Instead they make more proprietary shit without having the basic skills to do so. Then they force that shit on their users.
Does it rely on a service logging you in with the same email that you provide? Because normally services don’t do that. They suggest you attach the new Apple account to an existing account with that email, but allowing outright logging in would be very bad practice.
I find it crazy that Apple can force devs to support apple id if they support a competing service.
The US has gone soft on monopoly abuse. People have gotten so used to it they don't notice.
Gaping holes in security is only one of the consequences.
Is there any bug bounty program for small businesses/apps? I only found hackerone but it seems to be only for enterprise. Is there any recommended platform for small businesses to create their own public bounty program?
“I found I could request JWTs for any Email ID from Apple and when the signature of these tokens was verified using Apple’s public key, they showed as valid.”
What are they teaching them in computer school these days? How can you write a security function and not test it for these kinds of bugs? Unless all these accidental backdoors have a more nefarious purpose <shoosh>
Honestly, I'm not surprised people didn't run into it during testing... you make a test email account and get a sign-in token for it. And then realize, wait... how does Apple know I own that email??
Since this was an extremely simple exploit, I can't help but wonder if it was a purposeful one on Apple's part.
Apple has been spending a lot of money on a security-focused marketing campaign these past few years, and encouraging a high-price payout of $100k is sage marketing.
In general I agree with you that it's good to run fuzzers against any endpoints, public or internal (as you never know if someone can wrangle data to go from public -> internal somehow), but in this particular case, you'd only find an issue if the fuzzer somehow randomly used the ID of another user that was already created, and verified that it couldn't access it.
In that case, you'd catch it way before even implementing the fuzzer.
So in this case, I don't think a fuzzer would have helped. Some E2E tests written by humans should have caught this though.
Why didn't they just implement OAuth 2.0, like everyone else has? They tried to reinvent the wheel with their own implementation of three-legged user authentication that doesn't add anything to what OAuth does and, surprise, they exposed themselves to a critical vulnerability that could have been completely avoided.
> This bug could have resulted in a full account takeover of user accounts on that third party application irrespective of a victim having a valid Apple ID or not.
The headline makes me think the entire problem lies with Apple, when that’s not the case.
This one rests squarely on Apple, as it was their auth service that contained the bug.
While an application could potentially (not that I know exactly how in this case) further verify the received token, that verification is exactly what an authentication service is supposed to provide, hence the responsibility absolutely rests on Apple who provides the service.
> I found I could request JWTs for any Email ID from Apple and when the signature of these tokens was verified using Apple’s public key, they showed as valid. This means an attacker could forge a JWT by linking any Email ID to it and gaining access to the victim’s account.
Great writeup there. Looks like an Apple JWT bug, and the verification went through since the forged tokens were legitimately 'signed' and 'tamperproof'. Clearly its footguns allowed this to happen; JWTs are the gift that keeps on giving to researchers.
What did I just outline days before? [0]. Just don't use JWTs, there are already secure alternatives available.
No one should be using JWT but it's unfair to blame JWT here. Apple wasn't verifying the supplied email address belonged to the signed in user - that's completely outside of the token format they chose.
In the apps I write for my org integrating with the org SSO provider, I treat JWT tokens mostly like non-JWT tokens: verify the token with the IdP, map the token to a specific user, and never rely on the JWT payload's user info for resource auth. It adds about 0.25s to the login process but has never let me down. The SSO provider was issuing non-JWT tokens a few years back, so this was how we made sure the user is who they say they are, and we just stuck with the same approach when they moved to JWT tokens.
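The pattern described above, treating the JWT as opaque and asking the IdP, looks roughly like this. The introspection call and response shape are assumptions, modeled on RFC 7662-style token introspection:

```python
# Treat the token as opaque: send it to the identity provider and use only the
# IdP's answer for authorization decisions.
# `introspect` is a stand-in for an HTTP call to the IdP's introspection endpoint.
def resolve_user(token, introspect, user_store):
    info = introspect(token)                 # IdP verifies signature and expiry
    if not info.get("active"):
        raise PermissionError("token rejected by IdP")
    return user_store[info["sub"]]           # map IdP subject -> local user
```

The trade-off is the extra round trip per login versus never trusting locally decoded claims.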
ani-ani|5 years ago
Given the simplicity of the exploit, I really doubt that claim. Seems more likely they just don't have a way of detecting whether it happened.
joering2|5 years ago
Thank you to everyone who educated me.
enitihas|5 years ago
https://news.ycombinator.com/item?id=15800676 (Anyone can login as root without any technical effort required)
And to top it off (https://news.ycombinator.com/item?id=15828767)
Apple keeps having all sorts of very simple "unbelievable" bugs.
randomfool|5 years ago
1. Don't add these.
2. If you must add something, structure it so it can only exist in test-only binaries.
3. If you really really need to add a 'must not enable in prod' flag then you must also continuously monitor prod to ensure that it is not enabled.
Really hoping they follow up with a root-cause explanation.
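Point 3 above can be as simple as a startup (or periodic) assertion that refuses to run with the flag set. A sketch; the environment-variable names are hypothetical:

```python
import os

# Sketch of point 3: fail loudly if a test-only bypass flag is ever set in a
# production environment. Variable names are illustrative.
def assert_no_test_flags(env=os.environ):
    if env.get("ENVIRONMENT") == "production" and env.get("AUTH_BYPASS_ENABLED") == "1":
        raise RuntimeError("test-only auth bypass is enabled in production")
```

Running this at boot and on a schedule turns "must not enable in prod" from a convention into an enforced invariant.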
fulldecent2|5 years ago
Took about two years to fix. Gave me credit. No money.
I'm not surprised here.
yreg|5 years ago
[0] - https://developer.apple.com/news/?id=03262020b
guessmyname|5 years ago
1. User clicks or touches the “Sign in with Apple” button
2. App or website redirects the user to Apple’s authentication service with some information in the URL including the application ID (aka. OAuth Client ID), Redirect URL, scopes (aka. permissions) and an optional state parameter
3. User types their username and password and, if correct, Apple redirects them back to your app's "Redirect URL" with an identity token, authorization code, and user identifier
4. The identity token is a JSON Web Token (JWT) and contains the following claims:
• iss: The issuer-registered claim key, which has the value https://appleid.apple.com.
• sub: The unique identifier for the user.
• aud: Your client_id in your Apple Developer account.
• exp: The expiry time for the token. This value is typically set to five minutes.
• iat: The time the token was issued.
• nonce: A String value used to associate a client session and an ID token. This value is used to mitigate replay attacks and is present only if passed during the authorization request.
• nonce_supported: A Boolean value that indicates whether the transaction is on a nonce-supported platform. If you sent a nonce in the authorization request but do not see the nonce claim in the ID token, check this claim to determine how to proceed. If this claim returns true you should treat nonce as mandatory and fail the transaction; otherwise, you can proceed treating the nonce as optional.
• email: The user's email address.
• email_verified: A Boolean value that indicates whether the service has verified the email. The value of this claim is always true because the servers only return verified email addresses.
• c_hash: Required when using the Hybrid Flow. Code hash value is the base64url encoding of the left-most half of the hash of the octets of the ASCII representation of the code value, where the hash algorithm used is the hash algorithm used in the alg Header Parameter of the ID Token's JOSE Header. For instance, if the alg is HS512, hash the code value with SHA-512, then take the left-most 256 bits and base64url encode them. The c_hash value is a case sensitive string
[1] https://developer.apple.com/documentation/sign_in_with_apple...
[2] https://developer.apple.com/documentation/sign_in_with_apple...
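The c_hash construction quoted above can be sketched directly from that description. A minimal Python version follows; the alg-to-hash mapping applies the OIDC rule of using the hash algorithm that matches the token's alg header (e.g. ES256 → SHA-256), and the sample code value in the usage note is made up:

```python
import base64
import hashlib

def compute_c_hash(code: str, alg: str = "ES256") -> str:
    """Compute the OIDC c_hash claim: base64url-encode the left-most
    half of the hash of the ASCII authorization code, where the hash
    algorithm matches the ID token's alg header."""
    hash_name = {"ES256": "sha256", "RS256": "sha256", "HS512": "sha512"}[alg]
    digest = hashlib.new(hash_name, code.encode("ascii")).digest()
    left_half = digest[: len(digest) // 2]
    # base64url without padding, per the spec's case-sensitive string form
    return base64.urlsafe_b64encode(left_half).rstrip(b"=").decode("ascii")
```

For ES256 this yields a 22-character base64url string (half of a 32-byte SHA-256 digest), e.g. `compute_c_hash("c7a9...")`, which a relying party can compare against the c_hash claim in the ID token.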
PunksATawnyFill|5 years ago
How many members of the public think that they have to use their E-mail account password as their password for Apple ID and every other amateur-hour site that enforces this dumb rule?
MILLIONS. I would bet a decent amount of money on it. So if any one of these sites is hacked and the user database is compromised, all of the user's Web log-ins that have this policy are wide open.
Then there's the simple fact that everyone's E-mail address is on thousands of spammers' lists. A simple brute-force attack using the top 100 passwords is also going to yield quite a trove, I'd imagine.
Apple IDs didn't originally have to be E-mail addresses. They're going backward.
gruez|5 years ago
1. what sign in with apple is
2. sign in with apple is like oauth2
3. there's some bug (not explained) that allows JWTs to be generated for arbitrary emails
4. this bug is bad because you can impersonate anyone with it
5. I got paid $100k for it
antoncohen|5 years ago
ahupp|5 years ago
> Here on passing any email, Apple generated a valid JWT (id_token) for that particular Email ID.
dwaite|5 years ago
You need an additional bug on the relying party for this to allow someone to gain access - that they associate the apple account based on the unstable email address claim rather than the stable "sub" claim.
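A minimal sketch of the relying-party-side distinction described here, using a toy in-memory account store (all names other than the claim names are hypothetical):

```python
def link_account(accounts_by_sub: dict, claims: dict) -> dict:
    """Look up or create the local account keyed by the stable 'sub'
    claim. The 'email' claim is treated as display/contact data only;
    keying on it would let a forged-email token take over an existing
    account."""
    sub = claims["sub"]
    account = accounts_by_sub.get(sub)
    if account is None:
        account = {"sub": sub, "email": claims.get("email")}
        accounts_by_sub[sub] = account
    return account
```

With this scheme, a token carrying the victim's email but a different sub resolves to a different (or new) account rather than the victim's.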
cheez|5 years ago
oauea|5 years ago
For those not aware, some time ago apple decided it would be a good idea to develop their own sign-in system, and then force all apps on their store (that already support e.g. Google Account login) to implement it.
So they brought a huge amount of additional complexity into a large number of apps, and then they fucked up security. Thank you apple!
WhyNotHugo|5 years ago
A big problem of many apps is that they only had a "log in with google"/"log in with facebook" button, which is very problematic for people who have neither.
On Android this is more acceptable since you need a Google account for the OS itself anyway.
toomuchtodo|5 years ago
Ronnie76er|5 years ago
https://openid.net/specs/openid-connect-core-1_0-final.html#...
It's likely (although like others have noted, this is scant on details), that this value was correct and represented the authenticated user.
A relying party should not use the email value to authenticate the user.
Not contesting that this is a bug that should be fixed and a potential security issue, but perhaps not as bad.
Anyone else? Am I reading this right?
unknown|5 years ago
[deleted]
m_herrlich|5 years ago
cfors|5 years ago
homakov|5 years ago
outime|5 years ago
enitihas|5 years ago
e.g https://news.ycombinator.com/item?id=15800676 and
https://news.ycombinator.com/item?id=15828767
So I don't get shocked anymore seeing Apple security issues.
tusharsoni|5 years ago
Even then, the only "security" that developers had was that the attacker wouldn't know the victim's Apple userId easily. With this zero-day attack, it would have been trivial for many apps to get taken over.
[0] https://news.ycombinator.com/item?id=22172952
zemnmez|5 years ago
summerlight|5 years ago
https://www.bloomberg.com/news/articles/2019-11-21/apple-ios...
Looks like Federighi agrees with this diagnosis and is trying to improve the overall development process, but I'm not sure it can really be improved without changing the famously secretive corporate culture. At the level of Apple's software complexity, you cannot really design and write quality software without involving many experts' eyes. And my friends at Apple have complained to me about how hard it is to get high-level context on their work and to do cross-team collaboration.
And IMO, this systematic degradation of software quality coincides with Bertrand's departure; he had allowed a relatively open culture, at least within Apple's software division. I'm not an insider, so this is just a pure guess though.
NightlyDev|5 years ago
This definitely wasn't complex in any way, shape or form. This was very basic.
afrcnc|5 years ago
That's not how zero-day works
saagarjha|5 years ago
awinter-py|5 years ago
(sign in) with (apple zero day)
which is kind of appealing
xwes|5 years ago
xkcd-sucks|5 years ago
Retr0spectrum|5 years ago
saagarjha|5 years ago
1f60c|5 years ago
supernova87a|5 years ago
Sign in with Apple: zero day flaw
tly_alex|5 years ago
And if I understand this correctly, the issue is in the first API call, where the server does not validate whether the requester owns the Email address in the request.
What confuses me is where the "decoded JWT’s payload" comes from. Is it coming from a different API call, or is it somewhere in the response?
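A sketch of the missing check as this comment describes it: the endpoint should refuse to mint an id_token for an email the authenticated session does not own. The real endpoint, session model, and all names here are hypothetical, since Apple hasn't published the details:

```python
class ValidationError(Exception):
    """Raised when the requested email is not owned by the session."""

def issue_id_token(session_emails: set, requested_email: str) -> dict:
    """Mint id_token claims only if the requested email belongs to the
    authenticated session -- the check the vulnerable endpoint skipped."""
    if requested_email not in session_emails:
        raise ValidationError("email not owned by the signed-in user")
    # Placeholder for the signing step: return the claims to be signed.
    return {"email": requested_email, "email_verified": "true"}
```

Under this sketch, the reported attack (requesting a token for an arbitrary email) fails at the ownership check instead of producing a valid signed token.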
tly_alex|5 years ago
earth2mars|5 years ago
wmichelin|5 years ago
yalogin|5 years ago
Yajirobe|5 years ago
nick-garfield|5 years ago
If so, what extra validation did Apple add to patch the bug?
moralestapia|5 years ago
Props to Apple for raising the bar on bounties!
gwintrob|5 years ago
saagarjha|5 years ago
mormegil|5 years ago
foobarbazetc|5 years ago
planetjones|5 years ago
tpmx|5 years ago
I've had multiple occasions of "Seriously, Apple hired person X? lol" over the past five years or so.
dandigangi|5 years ago
ksec|5 years ago
I am not sure if I am understanding the blog post correctly, because its simplicity is beyond ridiculous.
jedberg|5 years ago
The author even says that Apple found no evidence of it being exploited.
By definition when this blog post was published it was not the 0th day.
will_raw|5 years ago
kag0|5 years ago
In the initial authorization request, rather than passing a string with an email address, the caller could pass a boolean `usePrivateRelay`. If true, generate a custom address for the third party; if false, use the email address on file.
With that one change the implementer no longer has the opportunity to forget to validate the provided email address, and the vuln is impossible.
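That proposal can be sketched as follows (all names hypothetical): the client supplies only a flag, and the server derives the address from the authenticated account, so there is no attacker-supplied email string left to validate.

```python
import secrets

def email_claim_for(account_email: str, use_private_relay: bool) -> str:
    """Server-side choice of the email claim: either the address on
    file for the authenticated account, or a freshly generated relay
    alias. The caller never passes an email, so a forged one cannot
    be requested."""
    if use_private_relay:
        # Hypothetical alias format, modeled on privaterelay addresses.
        return f"{secrets.token_hex(8)}@privaterelay.appleid.com"
    return account_email
```

The vulnerability class disappears because the only client-controlled input is a boolean, not a value that must match server-side state.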
m_herrlich|5 years ago
geekit|5 years ago
broooder|5 years ago
catlifeonmars|5 years ago
gouggoug|5 years ago
He could literally send a POST request to that endpoint with arbitrary email addresses and get a valid JWT.
This is clearly explained under the "BUG" section.
alexashka|5 years ago
NightlyDev|5 years ago
But no. Instead they make more proprietary shit without having the basic skills to do so. Then they force that shit on their users.
homakov|5 years ago
teknopaul|5 years ago
XCSme|5 years ago
Is there any bug bounty program for small businesses/apps? I only found hackerone but it seems to be only for enterprise. Is there any recommended platform for small businesses to create their own public bounty program?
beamatronic|5 years ago
zelphirkalt|5 years ago
Who is implementing that stuff?
ljm|5 years ago
Fucking hell. Even after tax, that's a substantial pay-out.
totalZero|5 years ago
unknown|5 years ago
[deleted]
xyst|5 years ago
zucker42|5 years ago
Stierlitz|5 years ago
What are they teaching them in computer school these days? How can you write a security function and not test it for these kinds of bugs? Unless all these accidental backdoors have a more nefarious purpose <shoosh>
hank_z|5 years ago
alfalfasprout|5 years ago
sparker72678|5 years ago
neop1x|5 years ago
jasoneckert|5 years ago
Apple has been spending a lot of money on a security-focused marketing campaign these past few years, and encouraging a high-price payout of $100k is sage marketing.
danans|5 years ago
https://en.m.wikipedia.org/wiki/Fuzzing
capableweb|5 years ago
In that case, you'd catch it way before even implementing the fuzzer.
So in this case, I don't think a fuzzer would have helped. Some E2E tests written by humans should have caught this though.
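A sketch of the kind of end-to-end check meant here, with the token endpoint abstracted as a toy function (all names hypothetical):

```python
def make_token_endpoint(owned_emails):
    """Toy stand-in for the token-issuing endpoint, including the
    ownership check the vulnerable endpoint was missing."""
    def request_token(email):
        # Only issue a token for an email the signed-in user owns.
        if email in owned_emails:
            return {"email": email, "email_verified": "true"}
        return None
    return request_token

def forged_email_is_rejected(request_token):
    """The E2E assertion: no token may ever be minted for an email
    that the authenticated session does not own."""
    return request_token("someone-else@example.com") is None
```

Against the real (pre-fix) endpoint, `forged_email_is_rejected` would have returned False, which is exactly the failure a human-written E2E test could have surfaced.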
spartak|5 years ago
playpause|5 years ago
playpause|5 years ago
cmauniada|5 years ago
YetAnotherNick|5 years ago
calimac|5 years ago
[deleted]
a4wtaw4tawtawt|5 years ago
[deleted]
fortran77|5 years ago
[deleted]
duskwuff|5 years ago
lotsofpulp|5 years ago
fermienrico|5 years ago
PunksATawnyFill|5 years ago
unknown|5 years ago
[deleted]
blacklight|5 years ago
jagged-chisel|5 years ago
The headline makes me think the entire problem lies with Apple, when that’s not the case.
saagarjha|5 years ago
lostmyoldone|5 years ago
While an application could potentially further verify the received token (not that I know exactly how in this case), that verification is exactly what an authentication service is supposed to provide, so the responsibility absolutely rests on Apple, who provides the service.
rvz|5 years ago
Great writeup there. Looks like an Apple JWT bug, and the verification went through despite the token being 'signed' and 'tamperproof'. Clearly its footguns allowed this to happen; thus JWTs are the gift that keeps on giving to researchers.
What did I just outline days before? [0]. Just don't use JWTs; there are already secure alternatives available.
[0] https://news.ycombinator.com/item?id=23315026
arkadiyt|5 years ago
blntechie|5 years ago
quesera|5 years ago
They demonstrate no issues with JWTs. There are issues with JWTs, but you have not hit on any of them.
NicoJuicy|5 years ago
I think we can wrap up the security and anonymity claims Apple has been making for their overpriced devices.
matchbok|5 years ago
Plus, iPhones actually work longer than a year.
saagarjha|5 years ago