I'd love to fill that in! If anyone would like a comparison, please add links in this thread and I'll reply. Later, I can collect it into a published page.
Hydra and Dex both support OAuth and OpenID Connect. This apparently supports neither, but comes with its own JWT structure.
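Consuming a service's JWTs generally means verifying a signature and decoding claims. Here is a minimal Go sketch using only the standard library, assuming an HS256 (shared-secret) token; AuthN's actual algorithm and claim layout may differ, and both helper functions here are hypothetical illustrations, not project code:

```go
package main

import (
	"crypto/hmac"
	"crypto/sha256"
	"encoding/base64"
	"encoding/json"
	"errors"
	"fmt"
	"strings"
)

// verifyHS256 checks the signature of a compact JWT (header.payload.signature)
// signed with HMAC-SHA256 and returns the decoded claims.
func verifyHS256(token string, secret []byte) (map[string]interface{}, error) {
	parts := strings.Split(token, ".")
	if len(parts) != 3 {
		return nil, errors.New("malformed token")
	}
	mac := hmac.New(sha256.New, secret)
	mac.Write([]byte(parts[0] + "." + parts[1]))
	sig, err := base64.RawURLEncoding.DecodeString(parts[2])
	if err != nil || !hmac.Equal(sig, mac.Sum(nil)) {
		return nil, errors.New("bad signature")
	}
	payload, err := base64.RawURLEncoding.DecodeString(parts[1])
	if err != nil {
		return nil, err
	}
	var claims map[string]interface{}
	if err := json.Unmarshal(payload, &claims); err != nil {
		return nil, err
	}
	return claims, nil
}

// signHS256 builds a token, purely so the sketch can demonstrate a round trip.
func signHS256(claims map[string]interface{}, secret []byte) string {
	header := base64.RawURLEncoding.EncodeToString([]byte(`{"alg":"HS256","typ":"JWT"}`))
	body, _ := json.Marshal(claims)
	payload := base64.RawURLEncoding.EncodeToString(body)
	mac := hmac.New(sha256.New, secret)
	mac.Write([]byte(header + "." + payload))
	return header + "." + payload + "." + base64.RawURLEncoding.EncodeToString(mac.Sum(nil))
}

func main() {
	secret := []byte("demo-secret")
	token := signHS256(map[string]interface{}{"sub": "account-123"}, secret)
	claims, err := verifyHS256(token, secret)
	fmt.Println(claims["sub"], err)
}
```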
Strong choice! My dream is for AuthN to provide authentication and account functionality for folks who have not yet invested in an API gateway, and then seamlessly plug in when their architecture matures later. Extracting user accounts can be a difficult barrier for transitions like that.
This looks pretty reasonable! I would love to see a Cloud Storage backend. A minor quibble: I think managing your own metrics in Redis is probably not the simplest or most flexible approach. Instead, consider exposing a /metrics endpoint that can be ingested by the user's monitoring tool of choice (Prometheus/InfluxDB/etc.).
Am I missing something, or does this really have no support for TOTP/HOTP? An authentication system without 2FA or U2F support in 2017 seems... lacking (or unfinished).
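For reference, HOTP/TOTP (RFC 4226 / RFC 6238) is small enough to sketch with Go's standard library alone. This is an illustration of the algorithm, not code from the project:

```go
package main

import (
	"crypto/hmac"
	"crypto/sha1"
	"encoding/binary"
	"fmt"
	"time"
)

// hotp implements RFC 4226: HMAC-SHA1 over a big-endian counter,
// dynamically truncated to a short decimal code.
func hotp(secret []byte, counter uint64, digits int) int {
	var buf [8]byte
	binary.BigEndian.PutUint64(buf[:], counter)
	mac := hmac.New(sha1.New, secret)
	mac.Write(buf[:])
	sum := mac.Sum(nil)
	offset := sum[len(sum)-1] & 0x0f
	code := binary.BigEndian.Uint32(sum[offset:offset+4]) & 0x7fffffff
	mod := uint32(1)
	for i := 0; i < digits; i++ {
		mod *= 10
	}
	return int(code % mod)
}

// totp is RFC 6238: HOTP with the counter derived from wall-clock time.
func totp(secret []byte, t time.Time, step time.Duration, digits int) int {
	return hotp(secret, uint64(t.Unix())/uint64(step.Seconds()), digits)
}

func main() {
	// The RFC 4226 appendix test secret.
	secret := []byte("12345678901234567890")
	fmt.Printf("%06d\n", totp(secret, time.Now(), 30*time.Second, 6))
}
```

The real work in a 2FA feature is not this arithmetic but the enrollment, secret storage, and rate-limiting around it.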
Keycloak does some really great things. It does require managing a Java runtime though, and is missing the streamlining that allows AuthN to run as an invisible API.
Yeah, name/pass sounds pretty simple, doesn't it? But doing it correctly, securely, with a service architecture? That gets interesting.
Almost no test coverage. Did I miss the tests? For proper use in production you would need hundreds of unit tests and a whole bunch of component, integration, and e2e tests.
Tests are colocated inside packages (folders) using a `_test.go` convention.
This is true for extremely generous definitions of "perform".
Sort of agree; that's a very noisy and broad statement. I'd argue that the underlying I/O and event-loop implementation matters more. At the end of the day it's all about highly available systems. Am I wrong?
praxxis | 8 years ago
krullie | 8 years ago
cainlevy | 8 years ago
danesparza | 8 years ago
"Dex is NOT a user-management system, but acts as a portal to other identity providers through "connectors." This lets dex defer authentication to LDAP servers, SAML providers, or established identity providers like GitHub, Google, and Active Directory."
It seems like AuthN IS a user management system. So that's a big difference right there.
mindcrash | 8 years ago
With inbound federation that shouldn't be much of a problem, but with outbound federation you'll have some very difficult questions to answer (especially because all major identity solutions are pretty much OIDC-centric these days).
rmetzler | 8 years ago
We evaluated Traefik and Kong. We decided on Kong, since we need more features like auth, logging, and rate limiting.
cainlevy | 8 years ago
cortesi | 8 years ago
cainlevy | 8 years ago
Have you seen the /stats endpoint? It exposes the metrics as JSON, which may be a good match for your suggestion. I'd also like to export the key events to a statsd-compatible sink so a sophisticated user can manage metrics in their own system.
Redis is already on hand because of other features, though, and HLL (HyperLogLog) is a pretty cheap integration. I figure it's a decent starting point for many people.
kuschku | 8 years ago
cainlevy | 8 years ago
nmenglund | 8 years ago
http://www.keycloak.org
cainlevy | 8 years ago
Keycloak (and similar) hosts and renders your login page. You customize through theming. You're expected to redirect users through a standard OAuth2/OIDC flow on a different domain.
AuthN doesn't render any HTML. That's all you, from start to finish. This means you have control over the UX and can build the login page directly into your own app, just like you would when using an auth library in a typical monolith.
tetraodonpuffer | 8 years ago
/keratin/authn-server/docs/config.md
which is a 404, presumably instead of
/keratin/authn-server/blob/master/docs/config.md
cainlevy | 8 years ago
ehc | 8 years ago
cainlevy | 8 years ago
My current plan is to set up a HackerOne page. I know that bug bounties don't replace good penetration testing, but it's a start.
galvanium | 8 years ago
cainlevy | 8 years ago
xena | 8 years ago
cainlevy | 8 years ago
brianolson | 8 years ago
It would be much more interesting to me if it also did OAuth2 login with Google/Facebook/Twitter/etc.
cainlevy | 8 years ago
> It would be much more interesting to me if it also did OAuth2 login with Google/Facebook/Twitter/etc.
Totally agreed. The reason I designed AuthN around accounts first is that I believe that's the best way to launch an app. OAuth2 and OIDC logins are powerful, but they're secondary to the classic login.
je42 | 8 years ago
cainlevy | 8 years ago
Service tests[1] are the main unit tests, and use mock implementations of the data store interfaces.
Data (DAO) tests[2] are generally run across every implementation using only the public interface. This helps me stay sane with the mock implementations.
The API tests[3] are integration tests, and use Go's excellent httptest package to boot a real server and execute real HTTP commands.
[1] https://github.com/keratin/authn-server/tree/master/services
[2] https://github.com/keratin/authn-server/tree/master/data
[3] https://github.com/keratin/authn-server/tree/master/api/acco...
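As a generic sketch of that third style (not AuthN's actual test code): boot a real server with `httptest`, issue real HTTP requests against it, and assert on the wire-level response. The `/health` endpoint and handler here are hypothetical:

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"net/http/httptest"
)

// healthHandler stands in for a real API endpoint under test.
func healthHandler(w http.ResponseWriter, r *http.Request) {
	w.Header().Set("Content-Type", "application/json")
	fmt.Fprint(w, `{"http":true}`)
}

// checkHealth boots a real server, makes a real HTTP request, and
// returns what came back over the wire.
func checkHealth() (int, string, error) {
	srv := httptest.NewServer(http.HandlerFunc(healthHandler))
	defer srv.Close()

	resp, err := http.Get(srv.URL + "/health")
	if err != nil {
		return 0, "", err
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	return resp.StatusCode, string(body), err
}

func main() {
	status, body, err := checkHealth()
	fmt.Println(status, body, err)
}
```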
praxxis | 8 years ago
Lord_Zero | 8 years ago
https://github.com/keratin/authn-server/blob/master/Makefile...
Notice it runs `go test $(shell glide nv)`.
`glide nv` is a command that gets you all packages except the vendor directory https://github.com/Masterminds/glide#glide-novendor-aliased-...
Then read the Go documentation for `go test` at https://golang.org/cmd/go/#hdr-Test_packages:
> 'Go test' recompiles each package along with any files with names matching the file pattern "*_test.go".
plexicle | 8 years ago
ahoka | 8 years ago
Nonsense.
dang | 8 years ago
If you have a substantive point to make, make it thoughtfully; if you don't, please don't comment until you do.
danudey | 8 years ago
Microservices can lead to better performance by making for smaller, more clearly defined codebases, fewer unnecessary imports, and so on. They can also be easy to scale, because you can scale specifically that one component (e.g. identity management) by moving it to a separate database server.
We use microservices, and we have probably close to 2TB of MySQL data. Because our services and their database schemas are cleanly separated, we can break that up into a set of databases which all fit into memory, plus one database which, being basically append-only, doesn't need to access historical data and so doesn't need to fit entirely in RAM.
It also lets us look at our cluster stats and see where the bottlenecks are, since it's easy to tell which servers or services are under load.
We do pay a penalty for this: layers of indirection, network latency, protocol overhead, serialization/deserialization, and so on. But designing our systems this way from the start lets us tackle those problems early and account for them in our design.
ShabbosGoy | 8 years ago