I’ve been thinking about this topic through the lens of moral philosophy lately.
A lot of the “big lists of controls” security approaches correspond to duty ethics: following and upholding rules is the path to ethical behaviour. IT applies this control, manages exceptions, tracks compliance, and enforces adherence. Why? It’s the rule.
Contrast with consequentialism (the outcome is key) or virtue ethics (exercising and aligning with virtuous characteristics), where rule following isn’t the main focus. I’ve been part of (heck, I’ve started) lots of debates about the value of some arbitrary control that seemed out of touch with reality, but framed my perspective on virtues (efficiency, convenience) or outcomes (faster launch, lower overhead). That disconnect in ethical perspectives made most of those discussions a waste of time.
A lot of security debates are specific instances of general ethical situations; threat models instead of trolley problems.
I work at medium to large government orgs as a consultant, and it’s entertaining watching newcomers from small private companies use - as you put it - consequentialism and virtue ethics to fight an enterprise that admits only duty ethics: checklists, approvals, and exemptions.
My current favourite one is the mandatory use of Web Application Firewalls (WAFs). They’re digital snake oil sold to organisations that have had “Must use WAF” on their checklists for two decades and will never take them off that list.
Most WAFs I’ve seen or deployed do nothing but burn money to heat the data centre air, because they’re generally left in “audit only mode”, sending logs to a destination accessed by no-one. This is because if a WAF enforces its rules it’ll break most web apps outright, and it’s an expensive exercise to tune them… and to maintain that tuning to avoid 403 errors after every software update or new feature. So no-one volunteers for this responsibility, which would be a virtuous ethical behaviour in an org where that’s not rewarded.
This means that recently I spun up a tiny web server that costs $200/mo with a $500/mo WAF in front of it that does nothing just so a checkbox can be ticked.
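For what it’s worth, the “audit only mode” described above usually comes down to a single directive in the WAF engine. In ModSecurity, for example, it looks like this (a sketch of the mode, not a recommended configuration):

```apache
# Detect and log rule matches, but never block: the mode most WAFs are left in.
SecRuleEngine DetectionOnly

# Actual enforcement - and owning the 403s after every release - is one line away:
# SecRuleEngine On
```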
I have friends who are very scary drivers but insist on backseat driving and telling you about best driving practices, and coworkers who insist on implementing excessive procedures at work but are constantly the ones breaking things.
I think following rules gives some people a sense of peace in a chaotic and unpredictable world. And I can't stand them.
The vast majority of the security "industry" is about useless compliance, rather than actual security. The chimps have put their fears into large enterprise compliance documents. This teaches the junior security people at enterprise companies that these useless fears are necessary, and they pass them along to their friends. Why? Not just because of chimps and fear, but also $$. There is a ton of money to be made off of silly chimps.
I’m an engineer who now works in security. Very few of us come from an engineering background. Most lack the technical skill to do much more than apply controls and run tooling. Some try to do design work, but imagine a junior dev with 2-3 years’ experience trying to write a service.
Those of us who are architects and coders don’t often get to do it anymore because we’re not working on single projects or solutions, so we become people who swoop in on a project for a month at a time to make sure there are no major smells before moving on. Our understanding of your system is shallow as a result.
* You get a cool industry certification that you can put on your website to justify the vague "we take your security seriously" platitudes we spew.
* It lets you stop putting money and effort into security once you've renewed your certs this year.
* You don't need to hire a dedicated security person, any sysadmin can check boxes.
* You can say you followed industry best practices and "did all you could" when you get breached.
It's the answer to "how do we not care about security?" across an entire industry that stands to make billions from said lack of care. In a depressing way, the company with useless performative security certs will fare better after a breach than the one without them that actually tried.
My less cynical take about this is that if you need to actually care about security because you'll be up against sophisticated targeted attacks then you probably already know that. For everyone else there's checkboxes to stop companies from getting owned by drive-by attacks.
Well, one of the big problems is that businesses don't do root cause analysis on incidents to learn which controls failed, or which controls should have been in place that might have prevented the incident.
Additionally, there's actually testing whether the controls work. I work in testing controls, and I find that a lot of controls may be designed well but simply aren't being performed due to resource constraints.
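To make the "actually testing whether controls work" point concrete, here's a minimal sketch of checking a control against live configuration rather than trusting the paperwork. The policy dict, its field names, and both checks are hypothetical stand-ins for whatever your real IdP, MDM, or cloud API exposes:

```python
# Hypothetical policy snapshot pulled from a real system of record.
policy = {"max_password_age_days": 90, "mfa_required": True}

def no_forced_rotation(p):
    # Per NIST SP 800-63B, verifiers should NOT force periodic password changes.
    return p.get("max_password_age_days") in (None, 0)

def mfa_enforced(p):
    return p.get("mfa_required", False)

CHECKS = {
    "no-forced-rotation": no_forced_rotation,
    "mfa-enforced": mfa_enforced,
}

# A control that is "designed well" but not actually performed shows up here.
failures = [name for name, check in CHECKS.items() if not check(policy)]
print(failures)  # the 90-day rotation policy fails the NIST-aligned check
```

Running checks like this on a schedule catches the gap between what the control document says and what the systems are actually doing.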
The ironic thing about the chimp story is that probably chimps are immune to the problem and humans are the only species that would fall for it. It takes chimps a long time to learn to copy others. I doubt they could sustain a superstition like this for long even if you managed to induce it through great effort.
It's humans that copy each other without a second thought. It's a great heuristic on average. These kinds of fables are correctives against our first instinct to replicate others' behaviors, but if we actually tried to reason through everything from first principles we'd never get anything done.
Copying is the plain pieces in the Lucky Charms; thinking things through is the marshmallows.
I just read the book The Phoenix Project. It's over a decade old so some of the principles are obvious/quaint at this point, or perhaps not quite as applicable.
That said, one of the things that caught me off guard is the dressing down of the head of security by a member of the board. More or less, they were told what they did was clog the flow of useful work. The message conveyed is similar to this post.
> More or less, they were told what they did was clog the flow of useful work.
That sounds like a very valid complaint, too rarely heard these days.
People seem to forget that security always comes at a cost, so security decisions are always trade-offs. The only perfectly secure system is the one that does absolutely nothing at all.
Does forcing everyone's machine to run real-time scans on all file I/O improve our security more than it costs us in crippling all software devs? Maybe. Being on the receiving end of such policies, including this particular one, I sometimes doubt the question was even asked, much less that someone bothered to estimate the expected loss on both sides of the equation. Ignoring the risks doesn't make them go away, but neither do costs go away when you pretend they don't exist.
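Since the complaint here is that nobody even estimates the two sides of the equation, here's the back-of-envelope version of that estimate. Every number below is an assumption for illustration, not data:

```python
# Annualized expected benefit of the control (all figures assumed).
breach_probability = 0.05   # chance of a relevant breach per year, without the control
breach_cost = 2_000_000     # direct + indirect cost of such a breach
risk_reduction = 0.10       # fraction of that risk the scanner plausibly removes
expected_benefit = breach_probability * breach_cost * risk_reduction

# Annualized productivity cost of the control (all figures assumed).
developers = 200
loaded_hourly_rate = 100
hours_lost_per_week = 1.0   # per developer, waiting on I/O scans
expected_cost = developers * loaded_hourly_rate * hours_lost_per_week * 48

print(expected_benefit, expected_cost)
```

With these (made-up) inputs the control costs two orders of magnitude more than it saves; with different inputs it could be a bargain. The point is that the comparison takes five minutes and is almost never made.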
The Phoenix Project has been very influential on me in my security career, at least partially because I share the name of the ineffectual CISO and want so desperately to avoid the link.
I think the book is still very applicable, and every security practitioner needs to be hit over the head with it (or at least The DevOps Handbook or Accelerate). Security generally is decades behind engineering operations, even though security is basically just a more paranoid lens for doing engineering ops; the ideas from Phoenix are still depressingly revolutionary in my field.
Security is always an economic activity too. The crash engineers at Ford could demand 30 mph speed governors, quarter-inch steel plates, 5-point harnesses, and helmets, but people need a car that costs less than $200k and gets more than 2 miles per gallon.
Sometimes security requirements _are_ too onerous.
Similar to seeing IT as a cost rather than a benefit.
I've been thinking about this a lot. First, the author should replace "security" with "compliance". Currently they are two different things. There is a huge divide between compliance teams and developers; they speak completely different languages. I'm writing an entire series about it. I do think we can fix the problem, but it is going to be a lot more work than it was to get development and operations on the same page.
This is quite a simplification. There are a lot of useless/dubious controls out there, but the problem is rather the contradiction between security pragmatism and compliance regimes.
####
Government: I need a service.
Contractor: I can provide that.
Government: Does it comply with NIST 123.456?
Contractor: Well not completely, because control XYZ is ackshually useless and doesn't contribute--
Government: hangs up
I think it's fine to implement a useless control to get a customer.
Just don't pretend that you're doing it because it is a useful control, pretend that you're doing it because jumping through that hoop gets you that customer, and "we're a smaller fish than the government". Especially with the government (especially if it's the USA…) there are going to be utterly pointless hoops. I can pragmatically smile & jump, … but that doesn't make it useful.
Note, though, that "the government" (NIST to be specific) says that requiring passwords to be changed every 90 days is counterproductive and shouldn't be done, yet many corporations (including my employer) still mandate it. Corporate bureaucracy can be as backward and counterproductive as government bureaucracy.
Just because this is my favorite soapbox - anyone that has to deal with passwords should go read NIST SP800-63B:
https://pages.nist.gov/800-63-3/sp800-63b.html
I was kind of shocked by just how gosh-darned reasonable it is when it came out a couple of years ago. It's my absolute favorite thing to cite during audits.
"Are you requiring password resets every 90 days?"
"No. We follow the federal government's NIST SP800-63B guidelines, which explicitly state that passwords should not be arbitrarily reset."
I've been pleasantly surprised that I haven't really had an auditor push back so far. I'm sure I eventually will, but it's been incredibly effective ammunition so far.
I bumped into controls mandating security scans where the people running the scans aren't required to know anything about the results. One example prevented us from serving public data using Google Web Services because the front-end was still offering 3DES among its ciphers. This raised alerts because of the possibility of the Sweet32 vulnerability, which is completely impractical to exploit with website-scale data sizes and short-lived sessions (and modern browsers generally don't opt to use 3DES). Still, it was a hard 'no', but nobody could explain the risk beyond the risk of non-compliance and the red 'severe' on the report.
We also had scans report GPL licenses in our dependencies, which for us was a total non-issue, but security dug in, not because of legal risk, but compliance with the scans.
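The "completely impractical" claim about Sweet32 can be made concrete with the birthday bound; a quick sketch of the arithmetic:

```python
# Sweet32 exploits CBC block collisions in 64-bit block ciphers such as 3DES.
# The birthday bound says collisions become likely after about 2^(n/2) blocks
# encrypted under a single key.
BLOCK_BITS = 64
blocks_until_collision = 2 ** (BLOCK_BITS // 2)        # ~4.3 billion blocks
bytes_needed = blocks_until_collision * (BLOCK_BITS // 8)
print(f"~{bytes_needed / 2**30:.0f} GiB of ciphertext, captured under ONE key")
# A short-lived session serving web-page-sized responses never gets close.
```

Tens of gigabytes of attacker-observable traffic on one session is why the original attack demos had to hold a connection open for days; scanner severity ratings rarely carry that context.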
Password resets are definitely one; every single day I still have to tell prospects and customers that I can't both comply with NIST 800-63 and periodically rotate my passwords. Other ones I often counter include other aggressive login requirements, WAFs, database isolation, weird single tenancy or multitenancy asks, or asks for anti-virus to be in places it doesn't need to be.
Agreed. As an ISO 27001 auditor I see a growing demand for security compliance certification / attestations (ISO 27001, SOC 2), and it's client driven 95% of the time. So, in the end, it’s often worth it to go ahead and do it.
ISO 27001 is more affordable ($2k-3k for the audit, and an additional $1k-3k for an external provider to manage everything for you); SOC 2 will set you back at least $10k.
The chimps in a cage metaphor is a great introduction to a problem that exists in all software development. I call it the Walls of Assumptions.
When we write software, we answer three questions: "What?", "How?", and "Why?".
We write out the answers to "What?" and "How?" explicitly as data and source code. The last answer, alas, can never be written; at least, not explicitly. When we are good programmers, we do our best to write the answer Why implicitly. We write documentation, tutorials, examples, etc. These construct a picture whose negative space looks similar enough to live in Why's place.
No matter what, the question "Why?" is always answered. How can this be, if that answer is never written? It is encoded into the entropy of the very act of writing. When we write software, we must make decisions. There are many ways a problem could be solved: choose only one solution. A chosen solution is what I call an "Assumption". It is assumed that the solution you chose will be the best fit for your program: that it is the answer your users need, or at least that it will be good enough for them to accomplish what they want.
Inevitably, our Assumptions will be wrong. Users will bring unique problems that your Assumption isn't compatible with. While you hoped your Assumption would be a bridge, it is instead a Wall.
The Walls of Assumptions in every program define a unique maze that every software user must traverse to meet their goals. Monolithic design cultivates a walled garden, where an efficient maze may fail entirely to lead the user to their goal. Modular design cultivates an ecosystem of compatible mazes that, while less efficient, can be restructured to reach more goals.
---
The eternal hype around Natural Language Processing and Artificial Intelligence is readily explained with this metaphor. The most powerful feature of Natural Language is Ambiguity. Ambiguity allows us to encode more than one answer into data, which means we actually can write the answer to Why; we just can't read it computationally. Artificial Intelligence hinges on the ability for decision to be encoded into software. I'm not talking about logical branches here: I'm talking about the ability to fully postpone the answering of Why from time-of-writing to runtime.
---
For the last year or two, I've been chewing on a potential solution to this problem that I call the Story Empathizer. So far, the idea is too abstract; but I still think it has potential.
Security is having a bit of a heyday as everyone fights to build a moat against smart kids and AI. SOC2 and friends are a pain in the ass, but are a moat more than most these days. Security theater? The answer is at least “mostly”, but a moat nonetheless. You can feel the power swinging back into the hands of the customer.
When all software is trivial, the salesman and the customer will reign again. Not that I’m hoping for that day, but that day may be coming.
I think the "chimps in a cage" story needs some followup experiments to tell the whole story -- replacing the banana with a much higher value reward, or placing another water hose which fires if chimps stopped trying to reach the reward ;)
Most likely, useless controls exist because the company thinks they are good enough for the business and there's no incentive to improve or replace them.
The chimps story is made up. There was a study that tried to test something like that but only in one case, out of many trials, was a chimp discouraged from doing something by another chimp, due to the second chimp’s fear.
I wrote this! I'm excited to see this get attention here. I'll be responding to folks' comments where I feel like I have something to add, but please let me know if you have any questions or feedback!
There's certainly a lot of cargo cult security controls out there. One of the big issues is simply that it is very hard to change established practices. It takes a lot of effort, and senior people who are not security experts have to sign off on the "risk" of not doing what all their peers are doing.
There is one word I would change in your post title. Security has a useless controls problem, not security is a useless controls problem.
If money were no object I would just hire continuous pen testers to test your infra; every time they manage to do something they shouldn't be able to, fix how they did it, then repeat endlessly. I think it's analogous to immersing a tire in water, looking for bubbles to find leaks, and patching them.
But, reflecting on XSS: What a shame that we can't evolve our standards, protocols, software, and hardware to fix such issues fundamentally.
> Cross-site scripting (XSS) safe front-end frameworks like React are good because they prevent XSS. XSS is bad because it allows an attacker to take over your active web session and do horrible things
What? React is not "Cross-site scripting safe"
Many security controls do require more than a 2-3 sentence explanation. Trying to condense your response in such a way strips out any sort of nuance, such as examples of how React can be susceptible to XSS (e.g. via dangerouslySetInnerHTML or attacker-controlled href values). Security is a subset of engineering, and security decisions often require trade-offs. React does protect against some classes of attacks, but also exposes applications to new ones.