dsr_|4 years ago
This is missing an extremely important upfront concept: you need to know what you're protecting and how valuable it is.
It does no good whatsoever to require every user of a grocery-list app to have a Yubikey to verify their identity. It might not even make sense to have users log in at all.
The balance between usability and security must be consonant with the costs of implementation.
codingdave|4 years ago
I believe that was covered, but under the context of security policy rather than as a direct description. The key point I'd pull out is: "The goal isn't to eliminate risk entirely, but bring it down to an acceptable level."
There could be (and probably are) entire books written about how to define what "an acceptable level" means... but that is the same point you are getting at: security is not a guaranteed lockdown of your assets; it is self-defined, sufficient deterrence to attack. Sometimes that means light security, sometimes that means heavy... but it is up to you to make those decisions.
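The "acceptable level" idea above can be made concrete as an explicit, self-chosen threshold. A toy sketch (all assets, scores, and the threshold here are hypothetical, purely for illustration):

```python
# Illustrative only: a toy risk register where "acceptable" is a threshold
# you pick yourself, not an absolute guarantee. Numbers are made up.
ACCEPTABLE_RISK = 6  # self-defined: anything above this needs more controls

assets = [
    # (asset, likelihood 1-5, impact 1-5)
    ("grocery-list app user data", 2, 1),
    ("customer payment records", 3, 5),
    ("internal wiki", 2, 2),
]

def risk_score(likelihood: int, impact: int) -> int:
    """Classic likelihood x impact scoring."""
    return likelihood * impact

for name, likelihood, impact in assets:
    score = risk_score(likelihood, impact)
    verdict = "acceptable" if score <= ACCEPTABLE_RISK else "needs mitigation"
    print(f"{name}: {score} -> {verdict}")
```

Under this (deliberately crude) model, the grocery list gets light security and the payment records get heavy security, which is exactly the point: the threshold is a business decision, not a technical one.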
raesene9|4 years ago
That should come out in threat modelling, which is covered. When you're looking at who the adversaries for a specific system are, you'll necessarily cover what your data is and how valuable it is.
What may not be covered, and something which often causes problems with a lot of systems, is that your threat model may not be the same as your customers', and, depending on what you're selling, you may not be able to know your customers' threat model in advance.
To provide a couple of examples: if you provide server hosting and a crypto exchange starts using your service, suddenly you may attract a load of attention from high-end attackers looking to compromise your systems as a means to get at other people's.
Or if you provide something like a consumer photo sharing/storage system and "celebrities" start using it, suddenly you can find that people with a lot of time and interest start targeting you.
The tricky part is, commercially, do you have the resources to secure to the level required by the most sensitive customer?
mooreds|4 years ago
Unfortunately, if you force users to pick between usability and security, they'll ignore security every time.
Or as I often say, "no one ever says 'wow, that was a great login experience'; they just want to get to the features behind that experience (hopefully securely behind it)".
AtlasBarfed|4 years ago
Security people always want to "set policy", "educate on practices", and "enforce". You've already lost the battle.
PROVIDE SOLUTIONS. Why recommend all this "policy" when what you need to do is provide, at a minimum, a reference implementation? If you get called in as part of security architecture, PROVIDE A SOLUTION.
Because if you don't, the devs will do the absolute minimum, and will likely have backdoors galore, especially as your policies impose real restrictions on their systems-support quality of life, their ability to respond to production issues, and their ability to iterate to produce features.
The other persistent issue with security is that it is anathema to automation, and therefore to efficiency. So, dovetailing with providing a solution: these practices for 2FA and SSO (which invariably involve horrible popup UIs and other hacky things) will block, say, automated backups, auditing, monitoring, etc. that also require access. So be ready with those.
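One way to "be ready with those" is to give automation its own non-interactive credentials rather than routing it through interactive 2FA/SSO. A minimal sketch of the idea, assuming a homegrown HMAC-signed machine token (the token format, secret, and service names are hypothetical; a real system would use a vendor's service-account or OAuth2 client-credentials mechanism):

```python
# Hypothetical sketch: machine credentials for automation (backups,
# monitoring, auditing) so jobs never have to fake an interactive login.
import hashlib
import hmac
import time

SECRET = b"example-service-account-secret"  # would come from a secrets manager

def issue_token(service, now=None):
    """Mint a signed token naming the service and an expiry timestamp."""
    base = time.time() if now is None else now
    expires = int(base + 3600)
    payload = f"{service}:{expires}"
    sig = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}:{sig}"

def verify_token(token, now=None):
    """Check signature and expiry; no human in the loop."""
    service, expires, sig = token.rsplit(":", 2)
    payload = f"{service}:{expires}"
    expected = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    current = time.time() if now is None else now
    return hmac.compare_digest(sig, expected) and int(expires) > current

token = issue_token("nightly-backup")
print(verify_token(token))  # True while the token is unexpired
```

The design point is the separation: humans get the usable-but-heavy SSO path, while automation gets scoped, expiring, revocable credentials of its own.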
raesene9|4 years ago
I think this is correct but tricky to put into action, as companies rarely staff security departments to do this; typically you'll see ratios of 20 or even 50 devs to one security person. At that level it's very difficult for the security person to know enough, or have enough time, for detail work.
Ideally, technical security implementation should be seen as a function of the development/DevOps teams. You can have security teams provide specific advice, but the work of designing and implementing controls is best done within the team managing the system.
deathanatos|4 years ago
E.g., in Azure, in theory, we should apply PoLP (the principle of least privilege) to the access controls. But Azure's tutorials and guides often recommend using Contributor (an Azure role that entails access to almost everything, except granting more access), and which permissions an API call requires is, AFAICT, undocumented. And sometimes the error doesn't tell you.¹
I want to allow SSH into systems. Copying keys about the landscape is one employee departure away from having keys on systems where they don't need to be. The last time I set up LDAP … I had to learn about object classes, and some sort of object-oriented tree database, when all I want is a list of users & perms. (I understand LDAP's design better now, and I even like it, but the onboarding is braindead.)
There are any number of k8s dashboards that would give my coworkers better visibility … and basically none that have an auth story.
The examples are endless.
¹ Heck, sometimes the error isn't even grammatically correct English.
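The key-sprawl problem described above is at least mechanically checkable. A toy sketch that flags `authorized_keys` entries whose comment field doesn't match a current-employee list (all usernames and keys below are invented; real deployments would prefer short-lived SSH certificates or central auth over key copying):

```python
# Hypothetical sketch: flag SSH public keys whose trailing comment names
# someone no longer on the current-employee list.
current_employees = {"alice", "bob"}

authorized_keys = """\
ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIExampleKey1 alice@laptop
ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIExampleKey2 carol@desktop
ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIExampleKey3 bob@laptop
"""

def stale_keys(keys_file: str, employees: set) -> list:
    """Return lines whose comment's user part isn't a current employee."""
    stale = []
    for line in keys_file.splitlines():
        parts = line.split()
        if len(parts) < 3:
            continue  # no comment field to check against
        user = parts[2].split("@")[0]
        if user not in employees:
            stale.append(line)
    return stale

for line in stale_keys(authorized_keys, current_employees):
    print("stale key:", line)
```

This only catches keys that were labelled honestly, which is part of the point: copied keys carry no enforced identity, so the audit itself is best-effort.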
raesene9|4 years ago
Typically there's a hierarchy of security documents/practices. You start with principles work, like in this repo; then you look at the technologies you use and start getting into specific examples.
For many platforms/services there will be security best-practice sections on their sites, and that's a starting point, but then, as you mention, even their tutorials often don't follow good practice.
The challenge for people writing standards docs is similar: new things come along all the time. How much time is available to be dedicated to writing detailed guidance?
To give one example, the CIS benchmarks that a lot of orgs use to harden their environments are written almost purely by volunteers, so keeping them updated is a tricky game.
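Benchmark-style guidance like this tends to reduce to checkable assertions about configuration. A toy example auditing a few `sshd_config` settings (the rules below are simplified illustrations of the hardening style, not actual CIS benchmark text):

```python
# Hypothetical sketch: a benchmark-style audit of sshd_config settings.
# The expected values are simplified illustrations, not actual CIS rules.
EXPECTED = {
    "permitrootlogin": "no",
    "passwordauthentication": "no",
    "x11forwarding": "no",
}

sample_config = """\
PermitRootLogin no
PasswordAuthentication yes
X11Forwarding no
"""

def audit_sshd(config: str, expected: dict) -> list:
    """Return the expected keys whose configured value doesn't match."""
    settings = {}
    for line in config.splitlines():
        line = line.strip()
        if line and not line.startswith("#"):
            key, _, value = line.partition(" ")
            settings[key.lower()] = value.strip().lower()
    return [k for k, want in expected.items() if settings.get(k) != want]

print("failing checks:", audit_sshd(sample_config, EXPECTED))
# the sample config allows password auth, so that check fails
```

Encoding the rules as data is what makes volunteer-maintained benchmarks workable at all: updating a rule is editing a table, not rewriting prose.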
These are about adding stuff. The overwhelmingly most cost-effective way to secure things is to simplify them until you have good confidence that you understand the thing as a whole, which enables you to reason about its security properties at a much better level, with high confidence, and enables other people to do the same.
fsflover|4 years ago
>PROVIDE SOLUTIONS.
Here you go: https://qubes-os.org
mooreds|4 years ago
The 3rd edition is expansive (1000 pages, plenty of references) but readable. Free PDFs of previous editions are available at that link.
raesene9|4 years ago
I've been in the industry for 20+ years now, and I can see things in that repo that were old when I started :)
The technologies change, and the implementations change, but concepts like "defence in depth" don't.
wanderer_|4 years ago
:)