We used JWT for our web app before, then changed to session cookies. By the time you've made JWT secure for a web app, it feels like you've reinvented session authentication.
Another interesting method, overkill for most applications but absolutely required for end-to-end encrypted apps where the password must never be sent to the server (e.g. password managers): the Secure Remote Password (SRP) protocol[1].
It's a form of zero-knowledge-proof-based verification that the password provided during account creation matches the one provided during an authentication challenge, without ever transmitting the password. As a bonus, it also acts as a key exchange between the client and server, which can be used for securing transmissions over untrusted channels (at the cost of having stateful connections).
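To make the moving parts concrete, here is a toy SRP-6a round trip. The group parameters, hash-to-integer mapping, and salting are simplified stand-ins (a real deployment would use an RFC 5054 group of 2048+ bits and the exact padding rules), so treat this as a sketch of the algebra, not an implementation:

```python
# Toy SRP-6a round trip with insecure demo parameters. Names follow the
# RFC: the server stores only the verifier v, never the password.
import hashlib
import secrets

N = 2**127 - 1          # a prime, but far too small for real use
g = 7

def H(*args: int) -> int:
    data = b"".join(a.to_bytes(32, "big") for a in args)
    return int.from_bytes(hashlib.sha256(data).digest(), "big")

k = H(N, g)

# Registration: client derives x from salt+password, sends (salt, v) once.
password = b"correct horse"
salt = secrets.randbits(64)
x = H(salt, int.from_bytes(hashlib.sha256(password).digest(), "big"))
v = pow(g, x, N)        # the verifier the server stores

# Authentication: neither side ever transmits the password.
a = secrets.randbits(64); A = pow(g, a, N)                # client -> server: A
b = secrets.randbits(64); B = (k * v + pow(g, b, N)) % N  # server -> client: B
u = H(A, B)

S_client = pow((B - k * pow(g, x, N)) % N, a + u * x, N)
S_server = pow((A * pow(v, u, N)) % N, b, N)
assert S_client == S_server   # both sides now share a session key
```

Both sides end up with the same value S because B - k*v is g^b, so each is computing g^(ab + ubx) mod N from its own half of the secrets.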
SRP was good for its time, but things have moved forward in the password-authenticated key exchange (PAKE) world. See for example OPAQUE:
> Currently, the most widely deployed (PKI-free) aPAKE is SRP [RFC2945], which is vulnerable to pre-computation attacks, lacks a proof of security, and is less efficient relative to OPAQUE. Moreover, SRP requires a ring as it mixes addition and multiplication operations, and thus does not work over plain elliptic curves. OPAQUE is therefore a suitable replacement for applications that use SRP.
> This draft complies with the requirements for PAKE protocols set forth in [RFC8125].
It's not absolutely required for end-to-end encrypted apps (that encrypt data with a password or use it for authentication). The two benefits of SRP-like (aPAKE) protocols are:
- a password hash (the verifier, in SRP) is not transmitted during authentication (but it is transmitted during registration and stored on the server). This isn't really required if a secure channel is already established, e.g. TLS: you can just as well send the password hash via TLS, which the server will hash again and check against the stored hash. For registration you already need some kind of non-password-based secure protocol to transfer the verifier anyway.
- mutual authentication (that is, the client also learns whether the server knows the password/verifier). Here, an app would already ship with some mechanism to ensure that the server it talks to is authentic (e.g. TLS with pinned certificates, or some other public-key protocol with hard-coded keys), so the benefit of verifying that this authentic server is the one that stored your verifier is also limited.
To summarize, there's not much use for SRP in a typical E2E app: it already needs a secure connection between the client and the server, and the SRP verifier is just a client-side password hash.
SRP is useful if you need to establish a secure connection with a password and nothing else, but it first requires storing the verifier on the server somehow.
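The "client-side hash over TLS" scheme described above fits in a few lines. This sketch assumes the hash choice; a real deployment would use a slow KDF (argon2/scrypt) for the client-side step rather than a single SHA-256:

```python
# The client sends H(salt+password) over the already-secure channel; the
# server stores and checks H(H(salt+password)). A database leak reveals
# neither the password nor the value the client transmits.
import hashlib
import hmac
import os

def client_hash(password: str, salt: bytes) -> bytes:
    # In practice this would be a slow KDF (argon2/scrypt), not one SHA-256.
    return hashlib.sha256(salt + password.encode()).digest()

def server_store(client_side_hash: bytes) -> bytes:
    return hashlib.sha256(client_side_hash).digest()

def server_verify(stored: bytes, presented: bytes) -> bool:
    # Constant-time comparison of the re-hashed presented value.
    return hmac.compare_digest(stored, hashlib.sha256(presented).digest())

salt = os.urandom(16)
stored = server_store(client_hash("hunter2", salt))
assert server_verify(stored, client_hash("hunter2", salt))
assert not server_verify(stored, client_hash("hunter3", salt))
```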
Meh. I'm getting the sense that the author's grasp of authentication and authorization protocols is incomplete. I don't mean in the sense that the author left out some popular protocols, which is true, but in the sense that there are details wrong. For example, OAuth and OpenID are mentioned only in the context of social login, and the differences between OAuth, OAuth2, OpenID, and OIDC aren't covered (and OIDC is almost entirely unrelated to the original OpenID). Also, the author has a section at the end, "GitHub social auth", as if it were somehow different from the regular OAuth used all over the internet.
I didn't get past the Basic Auth part where the author said the cons were both that you have to send credentials with every request and that there was no way to log out.
For one, both of those cannot simultaneously be true (as omitting credentials becomes equivalent to logging out).
In addition, all of the methods described require credentials with every request. Some are just stored in the cookie jar instead of the browser's auth handler. But whether it ends up in an auth header or a cookie header, it's still part of the HTTP request.
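Concretely, the two approaches differ only in which header carries the credential. A minimal illustration with urllib (the URL, credentials, and session ID are made up):

```python
# Both schemes put the credential in a request header on every request;
# only the header name differs.
import base64
import urllib.request

req = urllib.request.Request("https://api.example.com/me")  # hypothetical URL

# HTTP Basic: "user:pass", base64-encoded, in the Authorization header.
token = base64.b64encode(b"alice:s3cret").decode()
req.add_header("Authorization", f"Basic {token}")

# Cookie-based: an opaque session ID in the Cookie header instead.
req.add_header("Cookie", "sessionid=abc123")

print(req.get_header("Authorization"))  # Basic YWxpY2U6czNjcmV0
```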
It makes no sense whatsoever to "compare" web authentication methods for humans in 2020 without even mentioning WebAuthn. That's literally why it's called WebAuthn (Web Authentication), and that's exactly what it's for.
In one sense this is bad news for this comparison, but it's also bad news for the general state of security on the web, because this item is 8 hours old and I'm writing this top level item now, which means for eight hours people who think of themselves as "web developers" didn't even ask "Why not WebAuthn?"
I think that's a bit uncharitable. WebAuthn isn't terribly useful right now simply because most users lack hardware tokens. And from a backend services perspective, which the article seems to take, there's no good place to install hardware tokens for cloud services. If you own the physical server, only then does WebAuthn provide advantages.
I hope that someday, users can just plug their phone into their computer and use it as a security token.
Just tried webauthn.io. Doesn't even work :/ I clicked register and it said "Use your security key now or cancel". What does that even mean? What's a security key? Is it a setting in my browser? I have lastpass installed, can it use that? Is it supposed to integrate with the Mac keychain somehow? Gave me no options but cancel.
Pretty underwhelming if that's the future of web auth..
"To mitigate replay attacks (re-use of a sniffed cookie), the value of the cookie used for authentication SHOULD NOT contain the users credentials but rather a key associated with the authentication session, and this key SHOULD be renewed (and expired) frequently."
Session cookies are often defined as cookies that expire when the browser is closed. The truth is that session cookies do not necessarily expire when the browser is closed. They are indeed lost to the browser, but the user can save them to a text file. If they were truly expired they should no longer work. However, in some cases, the cookie in the text file can be reused to avoid login, long after the browser is closed and across subsequent reboots.
For example, one such case is a very popular webmail provider. Since the provider now forces users to run Javascript in order to login, this technique can be used to check, read and send mail from the command line using clients that do not include a Javascript interpreter. There are many other examples where session cookies can be used after the browser is closed to avoid having to keep logging in. Unless the website is one that logs the user out after a period of inactivity, there is a good chance "session cookies" can be used long-term.
In my mind, of course the server doesn't know if you took a cookie you received from the previous instance of your browser and placed it into the next instance of your browser, because the act of closing your browser doesn't call every server saying "don't consider this cookie valid ever again". Closing your browser just deletes cookies that have no expiration time set (but does not delete cookies with an expiration in the future, IIRC).
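Python's http.cookiejar makes this exchange easy to reproduce: a "session cookie" is just one flagged `discard` with no expiry, and nothing stops a client from writing it to a text file and loading it back later, exactly as described above. (The cookie name, value, and domain here are invented.)

```python
# A "session cookie" persisted past browser exit via the Netscape
# cookies.txt format that http.cookiejar supports.
import http.cookiejar
import os
import tempfile

jar = http.cookiejar.MozillaCookieJar()
c = http.cookiejar.Cookie(
    version=0, name="SESSIONID", value="abc123",
    port=None, port_specified=False,
    domain="mail.example.com", domain_specified=False, domain_initial_dot=False,
    path="/", path_specified=True,
    secure=True,
    expires=None,   # no expiry set: this is what makes it a "session cookie"
    discard=True,   # browsers discard it on exit...
    comment=None, comment_url=None, rest={},
)
jar.set_cookie(c)

path = os.path.join(tempfile.mkdtemp(), "cookies.txt")
# ...but nothing stops us from writing it to disk anyway:
jar.save(path, ignore_discard=True, ignore_expires=True)

jar2 = http.cookiejar.MozillaCookieJar()
jar2.load(path, ignore_discard=True, ignore_expires=True)
print([k.name for k in jar2])   # ['SESSIONID']
```

The server never learns any of this happened; it only sees a valid cookie value arrive in a later request.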
In passwordless authentication, the device creates a public and private key when registered. The private key can only be unlocked using fingerprint or PIN. If an attacker knows the PIN he also needs the device. If he has the device he also needs the PIN.
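Schematically, the register-then-challenge flow looks like this. Toy textbook RSA (tiny primes, no padding) stands in for the ECDSA/Ed25519 a real authenticator would use in secure hardware, so this is an illustration of the flow, not usable crypto:

```python
# Registration: the "device" generates a key pair; the server stores only
# the public half. Authentication: the server sends a random challenge and
# the device signs it with the PIN-protected private key.
import hashlib
import secrets

p, q = 1000003, 1000033                 # toy primes, wildly insecure
n, e = p * q, 65537                     # public key the server stores
d = pow(e, -1, (p - 1) * (q - 1))       # private exponent, kept on-device

challenge = secrets.token_bytes(16)     # server -> device
h = int.from_bytes(hashlib.sha256(challenge).digest(), "big") % n

signature = pow(h, d, n)                # device signs with the private key
assert pow(signature, e, n) == h        # server verifies with the public key
```

Note the property the comment describes: the server holds nothing secret, so a server-side breach leaks no credential, and stealing the PIN alone (or the device alone) is not enough.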
Good overview, but I'm not sure I agree with the assertion that OAuth/OpenID is unconditionally more secure. It still depends on both the provider and the intermediate site doing things properly, like generating actually random values, not reusing randomness, not leaking tokens, and all the stuff you have to worry about normally.
With the web developer hat on I most prefer working on apps with cookie-based authentication, since it makes it easy to use the browser to debug and make additional requests. If the cookie already exists I can go to any API URL in the browser and it just works. Putting the secret in localStorage or only in app memory makes that impossible.
On the API side I prefer to support both cookie and Authorization header so you get the benefits in browser but don't overcomplicate the CLI side of things by requiring cookie state.
>Tokens cannot be deleted. They can only expire. This means that if the token gets leaked, an attacker can misuse it until expiry. Thus, it's important to set token expiry to something very small, like 15 minutes.
Sure, or your token issuer could implement https://tools.ietf.org/html/rfc7009 (if the JWTs are generated using OAuth) or you could build a revocation system.
But at that point you've negated one of the two benefits of such tokens. (The other one being you can verify that the contents haven't been modified, since they are signed.)
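Both halves of that trade-off fit in a short sketch: an HS256-style token that any holder of the key can verify offline, until you bolt on the revocation list and reintroduce server-side state. (Hand-rolled for illustration only; use a maintained JWT library in production.)

```python
# Minimal HS256 "JWT" signer/verifier plus a revocation denylist.
import base64
import hashlib
import hmac
import json
import time

SECRET = b"server-side-secret"

def b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def sign(claims: dict) -> str:
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = b64url(json.dumps(claims).encode())
    mac = hmac.new(SECRET, f"{header}.{payload}".encode(), hashlib.sha256)
    return f"{header}.{payload}.{b64url(mac.digest())}"

revoked = set()   # the state that "stateless" tokens quietly reacquire

def verify(token: str):
    header, payload, sig = token.split(".")
    mac = hmac.new(SECRET, f"{header}.{payload}".encode(), hashlib.sha256)
    if not hmac.compare_digest(b64url(mac.digest()), sig):
        return None                      # forged or tampered
    if token in revoked:
        return None                      # revoked before expiry
    claims = json.loads(base64.urlsafe_b64decode(payload + "=" * (-len(payload) % 4)))
    if claims["exp"] < time.time():
        return None                      # expired
    return claims

t = sign({"sub": "alice", "exp": time.time() + 900})   # 15-minute expiry
assert verify(t) is not None
revoked.add(t)
assert verify(t) is None
```

Without the `revoked` check the verifier needs no storage at all, which is the selling point; with it, every service either holds the denylist or asks something that does.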
It calls some of these stateless, but don't they all have to run a secondary look-up?
For example, in Basic Authentication, you still have to check the username and password against a database, whether that be a file, a relational database, LDAP, etc. For JWT, to verify the signature you must look up the issuer's pre-shared or public key.
Is it even possible for a request to be stateless if it requires authentication?
Basic Auth is stateless on the client side but not on the server side.
Token auth is stateless on the server side; it does not need to store any more public/private key pairs as the number of authenticating users increases, since it can just use one. So authenticating users does not affect server state.
Hey auth gurus! I have a newb question for a scenario that this article doesn't appear to cover, so I hope folks don't mind if I ask it here.
Is there any way to authorize a front-end app to use private back-end services without requiring a login? (I have people abusing APIs which were intended to be private, and which may otherwise force me to require user accounts.)
X.509 certificates issued by your private PKI would do the trick.
You'd have to implement a registration / enrollment process during which you'd handle the setup but that'd be a one-time thing (plus a "renewal" process every few years or so).
Although it isn't necessarily the most "user-friendly", pretty much every HTTP(S) client and server in existence supports using certificates to authenticate clients.
As a security nerd, this is what I think I'd prefer, however...
--
An alternative that's probably more popular and "user-friendly" -- and more likely to be recommended (especially here on HN) -- would be to allow users to generate and manage API keys tied to their accounts.
You could then either 1) require everyone to authenticate to the back-end services using their API keys (even "free" users) or 2) make authentication optional but implement strict rate-limiting and/or quotas for unauthenticated requests.
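Option 2 might look roughly like this; the key names, limits, and helper are all invented for illustration:

```python
# Accept anonymous requests, but give API-key holders a much higher
# token-bucket rate limit than unauthenticated source IPs.
import time
from dataclasses import dataclass, field

@dataclass
class Bucket:
    rate: float                 # tokens added per second
    capacity: float
    tokens: float = 0.0
    last: float = field(default_factory=time.monotonic)

    def allow(self) -> bool:
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

API_KEYS = {"key-abc123"}       # hypothetical issued keys
buckets = {}

def check_request(api_key=None, client_ip="0.0.0.0"):
    if api_key in API_KEYS:     # authenticated: generous limit per key
        b = buckets.setdefault(api_key, Bucket(rate=100, capacity=100, tokens=100))
    else:                       # anonymous: strict limit per source IP
        b = buckets.setdefault(client_ip, Bucket(rate=1, capacity=5, tokens=5))
    return b.allow()

assert all(check_request(api_key="key-abc123") for _ in range(50))
assert not all(check_request(client_ip="1.2.3.4") for _ in range(10))
```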
--
EDIT: There are two other similar / closely related methods that I forgot to mention which are quite easy to deal with (both client-side and server-side) and supported practically everywhere (as, if memory serves, they've been around since HTTP 1.0 and 1.1, respectively!): HTTP Basic Authentication and HTTP Digest Authentication. The latter is basically the former with MD5 hashing added, although neither are particularly "secure" nowadays compared to the alternatives. This is much less of a concern if all requests and responses are being carried over a TLS-encrypted session, however.
What about having your front-end app authenticate with an auth provider to get a limited-life token, which in turn is sent with your API request? This is typically how I'd adopt a JWT scenario.
Coincidentally, just yesterday I was looking for the usual simple in-server form&session Web site authn functionality for a Rust Web framework (and, ideally, also client certs).
I get why OAuth2/OpenID is popular, in a tech professional environment comfortable with leaking information about users, and also desensitized to the risks of a single Web site having numerous third-party service&CDN dynamic dependencies for a page load... but I still expected the basic default authn mechanism to be form&session, and then some of the other mechanisms to build upon some of the same authn (and perhaps authz) session&events&UX foundation.
Session-based auth is what we used to do in the olden days. It was horrible. Sure, it was easy to log in and keep a session cookie, but man... restarting servers = logged out, unless you persisted sessions. Session replication. It adds all sorts of complexities after the initial implementation.
It seems we're at a (local?) maximum with JWT over cookies + refresh tokens + blacklisting.
I'm not sold on refresh tokens being a strict benefit because in the end you still have to maintain state to make sure they can't be used indefinitely.
In the end though, opaque tokens are still the easiest and are way simpler to wrap your head around. Allow calls to redis or the auth server itself early on, and if that extra call really becomes a problem, start doing a push model (where the auth server pushes out updates to others) with caches near every service.
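The push model described above can be sketched in a few lines, with plain dicts standing in for Redis and the per-service caches:

```python
# Services cache opaque-token lookups locally; the auth server pushes
# invalidations to subscribers instead of being called on every request.
import secrets

class AuthServer:
    def __init__(self):
        self.sessions = {}      # opaque token -> user (stand-in for Redis)
        self.services = []      # subscribed service caches

    def issue(self, user: str) -> str:
        token = secrets.token_urlsafe(32)
        self.sessions[token] = user
        return token

    def revoke(self, token: str) -> None:
        self.sessions.pop(token, None)
        for svc in self.services:       # push: don't wait to be asked
            svc.cache.pop(token, None)

class Service:
    def __init__(self, auth: AuthServer):
        self.auth = auth
        self.cache = {}
        auth.services.append(self)

    def user_for(self, token: str):
        if token not in self.cache:     # miss: one call to the auth server
            self.cache[token] = self.auth.sessions.get(token)
        return self.cache[token]

auth = AuthServer()
svc = Service(auth)
t = auth.issue("alice")
assert svc.user_for(t) == "alice"       # served from cache after first lookup
auth.revoke(t)
assert svc.user_for(t) is None          # the push cleared the cache entry
```

In a real deployment the "push" would be a message bus or Redis pub/sub rather than an in-process list, but the shape of the trade-off is the same: revocation latency moves from per-request lookups to the delivery time of the invalidation message.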
Also, with TLS mutual authentication (what you get with client certificates), MITM is impossible unless both parties agree to the MITM (in which case, WTF?). That can help free you from obnoxious management-imposed middleboxes at an organisation where it's futile to try to get anybody senior enough to understand why they're a bad idea.
Suppose you work for HugeCorp. You built a service available on say https://service.huge.example/ and it has a bunch of users who are thus HugeCorp customers. Maybe they are prisons, or fast food restaurants, or whatever, it doesn't matter, except these are clearly not humans. (Client certificate management UX is awful for humans)
Ordinarily sooner or later HugeCorp IT will decide you need a fancy middlebox - from say Cisco, or Fortinet, or dozens of other companies - to tick a box on some executive's list. Ordinarily they just impose the middlebox and, since it can't do its job otherwise, they insist your private keys and certificates get copied to the middlebox. Now it's impersonating your https://service.huge.example/ site and every bug in that middlebox is now a bug in your service. Does it offer any benefits? Probably not really unless you did a very poor job of building the service, but it did tick a box on a list, and the manufacturer got paid. Good luck, have fun.
But with mutual authentication that can't work. They could reach out to every single user of the service and agree that all these users will actually now be separately authenticating to the middlebox. If any of them don't want to, you can't offer the service to them any more. So this won't end up happening, although feel free to propose it in meetings if you want the executives to explode.
Instead they'll have to exempt your service from the stupid middlebox, and you are freed from wasting your time chasing bugs that are in somebody else's product. Remember to send pity donuts to the teams trying to "fix" such problems in other services that weren't as lucky.
Finally TLS 1.3 makes client certificates work a little better, because it allows a server to give more sophisticated guidance on what sort of certificates it actually wants. In prior versions the only guidance the server is permitted to give is, "I trust these CAs, show me a client certificate they signed".
Still never use this for humans though, the human facing UX is not at all good. Good for machines talking to machines.
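For machines talking to machines, the server side is mostly configuration. A Python `ssl` sketch of a mutual-TLS server context (the certificate paths are placeholders, and no connection is made here):

```python
# A server context that refuses any client without a certificate signed
# by your private CA. Loading real cert files is left commented out.
import ssl

ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.minimum_version = ssl.TLSVersion.TLSv1_3   # better client-cert guidance
ctx.verify_mode = ssl.CERT_REQUIRED            # mutual TLS: no cert, no talk
# ctx.load_cert_chain("server.pem", "server.key")    # the service's own identity
# ctx.load_verify_locations(cafile="private-ca.pem") # trust only your own CA

assert ctx.verify_mode == ssl.CERT_REQUIRED
```

A handshake with this context fails outright for clients that present no certificate, which is exactly why the middlebox described above can't silently sit in the middle.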
> It's stateful. The server keeps track of each session on the server-side. The session store, used for storing user session information, needs to be shared across multiple services to enable authentication. Because of this, it doesn't work well for RESTful services, since REST is a stateless protocol.
What stops you from keeping the JWT in there? In fact, I doubt that it's some random session ID rather than some encrypted payload that gets decrypted instead of being looked up in the database.
Nothing, except that then you're inheriting all the complexity of JWT for not even a pretense of the JWT's supposed benefit of statelessness. You should do the simplest thing that works for your application; usually, the simplest and safest thing is session-based authentication with a random session ID.
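The simplest-thing-that-works version is tiny, with a dict standing in for the session store (a database or Redis in a real deployment):

```python
# Session-based auth with a random session ID. The ID carries no data,
# so there is nothing to tamper with, and logout is just deleting the
# server-side entry.
import secrets

sessions = {}   # session ID -> user record

def log_in(user: str) -> str:
    sid = secrets.token_urlsafe(32)      # 256 bits of randomness
    sessions[sid] = {"user": user}
    return sid                           # sent to the browser as a cookie

def current_user(sid: str):
    entry = sessions.get(sid)
    return entry["user"] if entry else None

def log_out(sid: str) -> None:
    sessions.pop(sid, None)              # takes effect immediately,
                                         # unlike waiting for token expiry

sid = log_in("alice")
assert current_user(sid) == "alice"
log_out(sid)
assert current_user(sid) is None
```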
For simple two-party exchanges, as soon as you are using TLS and assuming you trust it, I'm not sure what the advantage is of doing anything more than Basic Authentication. Perhaps it becomes a problem if you want to throttle authentication attempts to prevent brute-forcing.
For sure, if you have more complex auth needs (more parties, granular access, etc etc) then you can start to justify more complex things ... but I'm curious what other weaknesses are there in that scheme?
Well, if you don't want employers finding their users' private passwords in network logs, it's worth doing more than Basic auth. Inspecting TLS traffic is commonplace nowadays.
I find it amusing that this article starts off explaining the difference between authN and authZ and then immediately proceeds to a describe an authN scheme where the authN transaction utilizes a request header called "Authorization" (used in reply to a response header that doesn't have this quirk, oddly enough). Would be nice to see a footnote apologizing for what I'd consider a linguistic blunder on par with "Referer"...
Basic Authentication is a pain with a web app sitting behind Apache as it strips all "sensitive" headers before passing along the request to the web app.
That's why you should use nginx and cooperate with the app. Maybe 'http_auth_request_module' for subrequesting to the right service, but I'm guessing passing the correct headers along would do as well.
[1] https://en.wikipedia.org/wiki/Secure_Remote_Password_protoco...
* https://tools.ietf.org/html/draft-irtf-cfrg-opaque
More on PAKE generally:
* https://blog.cryptographyengineering.com/2018/10/19/lets-tal...
* https://news.ycombinator.com/item?id=18259393
If you're looking for a challenge-response system for your application, then it's hard to go wrong with SCRAM (which Postgres went with a while ago):
* https://en.wikipedia.org/wiki/Salted_Challenge_Response_Auth...
https://www.wikiwand.com/en/List_of_OAuth_providers
https://tools.ietf.org/id/draft-broyer-http-cookie-auth-00.h...
"To mitigate replay attacks (re-use of a sniffed cookie), the value of the cookie used for authentication SHOULD NOT contain the users credentials but rather a key associated with the authentication session, and this key SHOULD be renewed (and expired) frequently."
Session cookies are often defined as cookies that expire when the browser is closed. The truth is that session cookies do not necessarily expire when the browser is closed. Indeed they are lost to the browser, but the user can save them to a text file. If they are truly expired then they should no longer work. However, in some cases, the cookie in the text file can be reused to avoid login, long after after the browser is closed and across subsequent reboots. For example, one such case is a very popular webmail provider. Since the provider now forces users to run Javascript in order to login, this technique can be used to check, read and send mail from the command line using clients that do not include a Javascript interpreter. There are many other examples where session cookies can be used after the browser is closed to avoid having to keep logging in. Unless the website is one that logs the user out after a period of inactivity, there is a good chance "sessions cookies" can be used long-term.
[+] [-] hunter2_|5 years ago|reply
[+] [-] edoceo|5 years ago|reply
[+] [-] flowerlad|5 years ago|reply
See:
https://www.microsoft.com/en-us/security/business/identity/p...
https://query.prod.cms.rt.microsoft.com/cms/api/am/binary/RE...
https://fidoalliance.org/fido2/
https://www.w3.org/TR/webauthn-2/
I actually prefer this as long as the email is instant.
But they can be blacklisted.
Why not just use session cookies then?
I was confused by what this page says is available, and what it didn't: https://www.arewewebyet.org/topics/auth/
Basic auth functions as a captcha and invite code, eliminating users not invited and almost all bots.
After that's done, the user gets a cookie (and a private key in LocalStorage) and is not prompted next time.
The beauty of cookies and basic is that the cookie is sent before auth takes place.
(Both are very much a corporate thing, but to my knowledge certs are used significantly more widely than krb5)