I’ve been thinking about this topic through the lens of moral philosophy lately.
A lot of the “big lists of controls” security approaches correspond to duty ethics: following and upholding rules is the path to ethical behaviour. IT applies this control, manages exceptions, tracks compliance, and enforces adherence. Why? It’s the rule.
Contrast with consequentialism (the outcome is key) or virtue ethics (exercising and aligning with virtuous characteristics), where rule following isn’t the main focus. I’ve been part of (heck, I’ve started) lots of debates about the value of some arbitrary control that seemed out of touch with reality, but framed my perspective on virtues (efficiency, convenience) or outcomes (faster launch, lower overhead). That disconnect in ethical perspectives made most of those discussions a waste of time.
A lot of security debates are specific instances of general ethical situations; threat models instead of trolley problems.
I work at medium to large government orgs as a consultant and it’s entertaining watching beginners come in from smaller private companies and use - as you put it - consequentialism and virtue ethics to fight against an enterprise that admits only duty ethics: checklists, approvals, and exemptions.
My current favourite one is the mandatory use of Web Application Firewalls (WAFs). They’re digital snake oil sold to organisations that have had “Must use WAF” on their checklists for two decades and will never take them off that list.
Most WAFs I’ve seen or deployed do nothing other than burn money to heat the data centre air, because they’re generally left in “audit only mode”, sending logs to a destination accessed by no-one. This is because if a WAF enforces its rules it’ll break most web apps outright, and it’s an expensive exercise to tune them… and to maintain that tuning to avoid 403 errors after every software update or new feature. So no-one volunteers for the responsibility, which would be a virtuous ethical behaviour in an org where that’s not rewarded.
So recently I spun up a tiny web server that costs $200/mo with a $500/mo WAF in front of it that does nothing, just so a checkbox can be ticked.
Oh man, web application firewalls and especially Azure Application Gateway are the bane of my existence. Where I work they literally slap an Azure Application Gateway instance on every app service with all rules enabled (even the ones Microsoft recommends not to enable) in block mode, directly when provisioning the stuff in Azure. The app is never observed in audit mode.
The result is that random parts of the application don’t work for any user, or only for some users, because some obscure rule in Azure Application Gateway triggers. The SQL injection rule in particular seems to misfire very often. A true pain to debug, then a true pain to go through the process of getting the particular rule disabled.
And that’s before we even get to the monthly costs. Often Azure Application Gateway itself is more expensive than the App Service + SQL Database + Blob Storage + optional App Insights combined. I really think someone in the company got offered a private island by Microsoft for making Azure Application Gateway a mandatory piece of the infrastructure of every app.
Yes, most of our security has been outsourced to cheap workers in developing countries like India, who are of course rated on maintaining the standard, not on thinking, understanding what you want, and putting things in context. They probably also work 60-70 hours per week at ungodly times, so you can hardly blame them. It is truly the process that is broken.
Well what if they were intelligent and could actually really understand the data and its schema before deciding whether to allow or reject the request... wait... that's just the application itself.
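And that quip holds up: the application is the one place that knows the schema, so it can treat input as data rather than pattern-match it as suspect SQL. A minimal sketch with parameterized queries (table and column names invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice')")

def find_user(name: str):
    # The driver binds `name` as a value, never as SQL text, so a
    # classic injection payload is just a weird string to compare.
    return conn.execute(
        "SELECT name FROM users WHERE name = ?", (name,)
    ).fetchall()

assert find_user("alice") == [("alice",)]
# The attack string matches no rows instead of dumping the table --
# and a legitimate user named "O'Brien; drop" would still work fine.
assert find_user("' OR '1'='1") == []
```

No upstream regex guesswork required: correctness comes from the query API, not from inspecting traffic.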
It all boils down to trust. Management don’t trust the developers to do the right thing because they outsourced development to the lowest bidder. They futilely compensate by spending a mere $500/mo on a WAF.
So WAF. Bad? I don’t know enough about it. If it’s just a way to inject custom rules that need to be written and maintained, the value seems low or negative. I had hoped you got a bunch of packages that protected against (or at least detected) common classes of attacks. Or that at least gave you tools to react to an attack?
Just slapping WAF in front of your services without configuring and maintaining rules is bad.
Without someone dedicated to maintaining the WAF, it’s just a waste. Not many companies want to pay for someone to babysit a WAF, and it can be a full-time job if there are enough changes in the layers behind it.
Maybe, if the attacker didn't bother to hack into the WAF itself (generally a softer target than whatever's behind it) and if you bothered keeping or understanding the logs (extremely unlikely to be a good use of resources).
You don't need to understand the logs at the time you gather them for this, you just need to keep them long enough to cover the breach, and to be able to understand them after the fact. Hardly seems like an obvious waste to me, and well worth $500/mo.
Every corporation over a certain size has a rule that everything needs a firewall in front of it… even if that something is a cloud service that only listens on port 443.
I have friends who are very scary drivers but insist on backseat driving and telling you about best driving practices, and coworkers who insist on implementing excessive procedures at work but are constantly the ones breaking things.
I think following rules gives some people a sense of peace in a chaotic and unpredictable world. And I can't stand them.
A little of both. I understand getting a warm fuzzy feeling that you did the right things, but if you don't achieve your goal, what's the point?
But let me clarify -- OP mentioned a contrast between consequentialism and virtue ethics, and I think you can be "too much" of a consequentialist too. I wouldn't call myself a rule follower, but I also follow rules 99% of the time. It does create a sense of order and predictability, and I value that.
There is a right balance where you do follow rules but you also know when to break them. What I can't really stand are rigid people -- diehard rule followers or diehard "no one can tell me what to do." I find working with rigid people hard because you have to work around their "buttons."
It gets worse than that: it rewards people who try to break the law as much as possible without getting caught, while people who follow it are punished.
That's true of most laws, but the system punishes law breakers to make it better to follow the law overall. When the law is vague and subjective, the people who get the most reward are the ones who are willing to see how far they can push it.