https://medium.com/@justinjsmith/hatin-on-microservice-passwords-4f8f0c0143ec#.rmxrg2h9l
Justin's argument is summarized thus: "It’s true that a client must still authenticate with the security service, but the security service provides a central place to focus on and to harden. As I mentioned previously, it’s less about how the client authenticates, and more about where the client authenticates." Do you think this argument holds up? It sounds a bit like throwing up our hands and saying that instead of trying to solve the problem, we'll just shift responsibility for failure.
xemdetia | 10 years ago
A security policy is much easier to prove correct if authorization and authentication stem from the same place. You don't have to aggregate many systems and then try to fold that data down before analysis.
I prefer this centralized authentication to be restricted to a particular realm (like a particular website/company) and not universal though.
PaulHoule | 10 years ago
Consider an intranet. If you have one password to log into all of your enterprise applications, you are going to know how to log in and you will use all of those applications.
If you have 20 different applications at work and 20 different logins, some of those services will end up in a state where you have to bug IT for a password reset whenever you need to use them. There will be others you just don't use at all, and maybe that suits management, because they'd rather you didn't take time off from work anyway (e.g. the application you use to claim time off).
mpbm | 10 years ago
If both the user and the service independently trust the security agent, then each of them only needs to trust one actor.
It does imply an important discussion regarding how many eggs should go in one basket.
ejcx | 10 years ago
It gives you a lot of benefits.
- Easier to scale securely. Adding 100 authenticated services that may or may not work differently is a recipe for disaster. Ad-hoc security by people who aren't security experts generally does not work, and maintaining it is not easy either.
- One way to auth. Developers make fewer auth mistakes this way. For example, if you are a Go shop, you can have a company-wide auth package that everyone uses. Mistakes are contained, so there are fewer vulnerabilities, less developer friction, and more productivity, since developers don't have to worry about auth themselves.
- Control. Revocation and adding new permissions are hard if they are not central. How many services have to restart when someone quits or a service is hacked? How do you even know what to revoke if the auth scheme lets one service talk to several services with the same token? As complexity is added, the process gets less clear, and that leads to mistakes.
There are more, but these three bullet points came to mind first.
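The "company-wide auth package" idea from the second bullet can be sketched in Go. This is a minimal illustration under stated assumptions, not a real library: the function names, the HMAC scheme, and the hard-coded demo key are all made up here, and a real deployment would pull the key from a secret store rather than embed it.

```go
package main

import (
	"crypto/hmac"
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// Hypothetical company-wide auth helper: every service calls Sign/Verify
// instead of rolling its own scheme. Key distribution is out of scope here;
// a real system would inject this from a secret store, never hard-code it.
var sharedKey = []byte("demo-only-key")

// Sign produces an HMAC-SHA256 tag over a service-to-service request.
func Sign(msg []byte) string {
	mac := hmac.New(sha256.New, sharedKey)
	mac.Write(msg)
	return hex.EncodeToString(mac.Sum(nil))
}

// Verify recomputes the tag and compares it in constant time.
func Verify(msg []byte, tag string) bool {
	expected, err := hex.DecodeString(tag)
	if err != nil {
		return false
	}
	mac := hmac.New(sha256.New, sharedKey)
	mac.Write(msg)
	return hmac.Equal(mac.Sum(nil), expected)
}

func main() {
	tag := Sign([]byte("GET /orders"))
	fmt.Println(Verify([]byte("GET /orders"), tag)) // true
	fmt.Println(Verify([]byte("GET /admin"), tag))  // false
}
```

Because every service imports the same two functions, a mistake only has to be fixed once, which is exactly the "mistakes are contained" point above.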
awinder | 10 years ago
1. "Certain Network Topologies" should be "Most", or at least "Your", network topology. And this point doesn't need to be confusing: you're going to run sensitive services and databases on your network, and those services really shouldn't be exposed to the outside world. Ideally even your applications shouldn't be exposed directly; they should sit behind a load balancer or network gateway that forwards only expected traffic to your service. Make your LBs default to denying traffic so that you can confirm this. Use services like ELB and VPC if you're on Amazon.
2. "Untrusted code running in the perimeter" is a situation that you should A) control, and B) aggressively be defensive about.
3. If you're going to go all the way with service auth, you need to think about how safe your secrets are. If your code ships with the secrets baked in, then you're really gunning for security by obscurity. Keeping secrets on disk is a security hole. Heck, even keeping plaintext secrets in memory is not foolproof.
I liked the article, but generally speaking, you really do need a comprehensive solution to service-to-service auth if you want to go down that road, because taking shortcuts will just expose you to a ton of downsides without even attaining the upside. And regardless of whether you set up service-to-service auth, you really should be segmenting networks anyway, for a host of reasons, and layering auth on top of that. So these two things are inter-related on some levels, but together they form a layered security approach :-).
detaro | 10 years ago
To me it sounds like recognizing a good solution to the problem and implementing it. Unless you have a good counter-reason to offer why a centralized authentication service is a bigger problem?
(Note there is a sliding scale on how often you need to hit the security service. You can have every single request going there, or you can have it give out tokens with varying permissions your services then can validate themselves. Cloudflare's blog recently described how they use internal CAs to authenticate services against each other, basically only requiring a service to once get a certificate, there are concepts like Google Macaroons, ...)
vox_mollis | 10 years ago
We live in a world of asymmetric warfare in which attackers can screw up thousands of times, but defenders only need to screw up once. Dramatically increase the payoff of compromise, and the risk increases accordingly.
I know this smacks of security by obscurity, but in the real world, incentives for attackers are absolutely part of the risk calculation.
dogma1138 | 10 years ago
Another pitfall is when people centralize both authentication and authorization, which leads to a lot of issues, mostly loss of granular control over permissions and information exposure.
If you are using some sort of SSO/global-token-based authentication, you should let your applications handle authorization, and you should not store information about the user's privileges within the token itself.
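A minimal Go sketch of that split, with hypothetical names and data: the already-validated SSO token yields only an identity, and each application consults its own authorization table, so privileges can change or be revoked without reissuing any tokens.

```go
package main

import "fmt"

// Identity is all the (already-validated) SSO token gives us.
// Note it carries no privilege information at all.
type Identity struct{ UserID string }

// appPermissions is this one application's own authorization store;
// the entries here are illustrative. Another app would have its own
// table, so no single token exposes a user's privileges everywhere.
var appPermissions = map[string][]string{
	"alice": {"reports:read", "reports:write"},
	"bob":   {"reports:read"},
}

// authorize decides locally whether this identity may perform an action.
func authorize(id Identity, action string) bool {
	for _, p := range appPermissions[id.UserID] {
		if p == action {
			return true
		}
	}
	return false
}

func main() {
	fmt.Println(authorize(Identity{"bob"}, "reports:read"))  // true
	fmt.Println(authorize(Identity{"bob"}, "reports:write")) // false
}
```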
markc | 10 years ago
>There are several open source security services out there. Use them!
Something like https://www.vaultproject.io/ would be a solid foundation for either running your own CA with narrowly scoped / time limited TLS client certs, or managing secrets for generating signed identity assertions, e.g. a JWT.
CM30 | 10 years ago
That's one of the reasons I've always been skeptical of centralised anything. It's another potential group to fall out with or point of weakness.
Spooky23 | 10 years ago
If you're worried about infrastructure being online, issue certificates and trust certificates signed by your CA.
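As a sketch of that suggestion in Go (all names illustrative, errors elided for brevity): create a throwaway internal CA, sign a short-lived client certificate with it, and verify the chain, which is the same check a `tls.Config` with `ClientCAs` set performs during the handshake.

```go
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"time"
)

// chainValid builds a throwaway internal CA, signs a narrowly scoped,
// time-limited client certificate for one service, and then verifies
// the chain the way a TLS handshake against ClientCAs would.
func chainValid() bool {
	// 1. The internal CA's key and self-signed certificate.
	caKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "internal-ca"},
		NotBefore:             time.Now().Add(-time.Minute),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// 2. A short-lived client certificate for one service, signed by the CA.
	svcKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	svcTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "billing-service"},
		NotBefore:    time.Now().Add(-time.Minute),
		NotAfter:     time.Now().Add(time.Hour), // narrowly time-limited
		KeyUsage:     x509.KeyUsageDigitalSignature,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageClientAuth},
	}
	svcDER, _ := x509.CreateCertificate(rand.Reader, svcTmpl, caCert, &svcKey.PublicKey, caKey)
	svcCert, _ := x509.ParseCertificate(svcDER)

	// 3. Peers trust only certificates chaining to the internal CA.
	pool := x509.NewCertPool()
	pool.AddCert(caCert)
	_, err := svcCert.Verify(x509.VerifyOptions{
		Roots:     pool,
		KeyUsages: []x509.ExtKeyUsage{x509.ExtKeyUsageClientAuth},
	})
	return err == nil
}

func main() {
	fmt.Println("chain valid:", chainValid()) // chain valid: true
}
```

In a real deployment the CA key would live on a hardened signing service (or something like Vault's PKI backend, as mentioned above), not in process memory.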
EGreg | 10 years ago
Centralized has the fundamental problem of power and choice: witness how Facebook, Amazon, Twitter, Apple, etc. can do whatever they want in their ecosystems. Facebook's users aren't going anywhere; they're essentially trapped if they want to get updates from their "friends", so they'll have to put up with whatever choices the social network makes.
But the biggest power imbalance is with developers/publishers. Facebook, Twitter, Amazon, Apple, etc. have all been known to push around their third-party publishers, compete with them, and even simply disconnect them whenever they want. Those publishers cannot connect directly with their users without a decentralized login.
And if you're into privacy, you might want to consider how much easier it is for state actors (and hackers) to have backdoors into just one server farm that has everyone's auth information rather than if profiles were stored like the web -- each person could choose their own host.
Distributed auth is possible. What you need is a distributed protocol and reference implementations. Something like OpenID or OAuth is a good start: you can sign up with network X and then use X to auth with other networks. Sadly, XAuth was discontinued, and everyone assumes Facebook, Twitter et al. can be the only OpenID or OAuth providers.
What we need is a new protocol, and that's something we've been working hard on, and have successfully designed.
It doesn't even require you to share your user id, name, etc. with the consumer sites you visit. They can be instantly personalized for you and show you all your friends without knowing who you are. When you are ready, you use OAuth (or something essentially similar) to start building up your profiles in other communities.
No third party can know that user A in community CA authenticated as user B in community CB, unless you share that information. You know that thing, "Your friend FooBar is on Instagram as Baz"? That's stuff I might not want everyone to know if Instagram is, say, a porn site. A few years ago, there was a huge uproar about Facebook's "instant personalization" with "trusted partners". Today, it came back and no one cares.
Truth is, we are giving up our power as consumers, and even more so as producers who eventually build our own communities on the back of large, entrenched, centralized communities. Do we really want to centralize power when we see all the bad stuff it can lead to? (Internet 2.0 in India because FB is the only option, Net Neutrality fight because telcos are too centralized, etc.)
I say, once we get the tech right, it can be replicated. After all, bitcoin distributed money. The Web, Email, Git, etc are all distributed. Why not social??