Hm, this currently sets off my marketing alarm bells. It looks pretty, and it has good points. I'd like to have some structured approaches to some internal concerns we have.
However, there seem to be zero links to tools and approaches to actually do something? Most links are to articles explaining why the manifesto exists... by companies offering consulting in this area.
This is a nice slide for an executive briefing. However, to get anything useful out of threat modeling you really need a full architecture view to start with. This would include both the technology and the business processes. Few organizations are willing to put effort and resources into developing these and there are few, if any, tools to help.
I think threat modeling is great, and not that hard (see [1]), but the problem as with everything is that it's tedious to do by hand and it's not clear how one should start or structure the output of a threat modeling exercise.
This cries for better tooling. After all, tooling is here to make you a superhuman, so where's the tooling to help you create a threat model easily?
I've done some of this with spreadsheets, but they're really not the ideal tool.
If I had time on my hands, I would probably create a threat model SaaS.
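In the meantime, here's a minimal sketch of what structured output could look like, using nothing beyond Python's standard library; the field names, the STRIDE-style categories, and the example entries are just illustrative assumptions, not a prescribed format:

    from dataclasses import dataclass, asdict, fields
    import csv
    import sys

    @dataclass
    class Threat:
        component: str    # the part of the system under discussion
        category: str     # e.g. a STRIDE category
        description: str  # what can go wrong
        mitigation: str   # what we plan to do about it
        status: str       # open / accepted / mitigated

    threats = [
        Threat("login API", "Spoofing",
               "Credential stuffing against the password endpoint",
               "Rate limiting plus MFA", "open"),
        Threat("audit logs", "Repudiation",
               "Admins can delete their own audit trail",
               "Ship logs to an append-only store", "accepted"),
    ]

    # Emit the register as CSV so it can still live in a spreadsheet
    # if that is what the rest of the organization expects.
    writer = csv.DictWriter(sys.stdout,
                            fieldnames=[f.name for f in fields(Threat)])
    writer.writeheader()
    for t in threats:
        writer.writerow(asdict(t))

Keeping the register as data rather than prose at least makes it diffable between review rounds, while still exporting to the spreadsheet everyone else expects.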
I've mostly given up on threat modelling as a solution, even after building a tool from scratch and using it in organizations, and now only do it privately on my own stuff, but my reasons might offer a fresh perspective.
The basic problem is that the demand for security stems almost exclusively from compliance, because compliance transfers risk out of the project and onto the standard/model. The idea of thinking through risks and hypotheticals and leaving a paper trail showing that you have acknowledged them basically converts theoretical risk into real liability within the project. In an ideal world of individual ownership and goodness, this means the incentives are aligned to create a good product, but in the real world of organizations, you've reduced the flexibility of the business to manage that risk, which means taking the risk and responding to it on the fly as it looks like it is becoming realized. By problematizing the risk instead of managing it, you have destroyed potential value. Threat modelling is the exercise of problematizing risk.
This is why the threat model is often the elephant in the room. Almost nobody in business wants to have the "negative" conversation about whether to block law enforcement (foreign or domestic) or surveillance of users, protect against data snooping by technical staff (which is often a platform perk and feature), or address institutional privacy violations and the other obvious threats that a threat modelling exercise addresses. A business PM won't problematize those things, because doing so demands a solution that gets in their critical path. The best they can do is manage it, which means orienting themselves to the risk and being ready to respond.
When people buy security products, they almost exclusively buy ones that provide data to manage risk using surveillance and monitoring, which means generating data that drives conversations and enables the organization to adapt in time. If a security product does not either a) externalize a risk with compliance or b) provide data to flexibly manage risk, it's not a security product in the market today because nobody will have bought it.
I realize this is a 10th man view of something these very smart people have spent a lot of time on, but in the 25 years I have been in the field, I have come to the conclusion that security people are essentially environmentalists and activists trying to make a case for internalizing an exogenous problem and cost, and the only way to get traction for it is to produce either low-level discretionary developer tools, or generate data that supports the conversations and provides that flexibility in a business' attitude to risk. Threat modelling is super powerful, but as a forcing function it can destroy value, and this is why it has encountered so much resistance. All manifestos are quixotic, and that is why I think this one may also be.
I have found that the real value of a threat model is defining the business model for a security product, where the threat model is the business case for the product; but to succeed, that product needs the features above and must not be a "solution". The security product provides either compliance or data for the threats you derive from your modelling exercise, and it must either transfer risk to a model or provide data to manage it. If it leaves responsibility in the project, like threat modelling does, it's going to fail.
Your comment resonates with me but I'm not sure I fully understand it:
"compliance transfers risk out of the project and onto the standard/model": If I choose to be compliant to a standard, some of the risks are mitigated, not transferred; some risks I have that are not covered by the standard are not mitigated or transferred, but still unknown, because I chose not to look into the threats (but just comply with some standard). So how does it transfer risk out of a project?
"A business PM won't problematize those things because it will demand a solution that gets in their critical path. The best they can do is manage it": If you identify a risk you can still choose to accept it and do nothing to mitigate it. So why does it demand a solution?
"you've reduced the flexibility of the business to manage that risk": Why? Again, risk identification does not automatically mean risk mitigation. If you're aware of a risk you may choose to detect problems instead of prevent them from happening. How else would you be able to manage risk if you are not even aware of it?
Maybe your comment is related to something I have experienced. I often think about a theoretical situation related to the movie The Matrix. Suppose there exists a complete risk analysis that provides insight into every aspect of risk for a company. By taking the red pill, responsible management would instantly see all these risks, but they cannot be "unseen", so they would have to act to reduce them to acceptable levels (taking away resources that would otherwise be devoted to business functionality). Would they take the red pill or the blue pill? I think most managers would choose to stay ignorant.
> some risks I have that are not covered by the standard are not mitigated or transferred
If you're compliant with a regulation or a standard, then you're not liable for failures of those standards.
I think there's an implicit assumption that the harm of a failure will fall on users/consumers/some other third party, rather than yourself. In this case, the "risk" is liability for damages, rather than actual damages. If you roll your own risk-management strategy, then you assume responsibility for its failures. If you follow a blessed external standard, then you transfer responsibility to the standard. So the risk of a failure is not fully mitigated, but your risk of being liable for that failure is. Which, to GP's point, disincentivizes exploring novel approaches that mitigate the risk of real failures.
Most of the compliance paperwork I've seen does leave room for custom risk assessments, threat modeling, or other wordings that invite a business team to do more. However, in the rush to go live or otherwise get it over with, this security work is done after everything else. It isn't integrated into the SDLC, for example.
So minimum standards become maximum standards. It's hard work convincing teams to do better, but at least the compliance docs give permission to develop your own, often better, understanding if the data classification is high enough. It doesn't happen often, but I haven't abandoned all hope yet.
A threat model is just a cognitive tool whose primary value is in creating an alternative perspective on a design problem to broaden thinking and improve outcomes. Like any tool, there are people who carry the hammer too far and suddenly everything is a nail.
As soon as they start harping on about a "journey of understanding" and "multiple deliveries" your bubbling instinct of disgust is correct and you should run and hide.
I don't trust security people to do sane things. - Linus Torvalds (2017)
while i appreciate the insights you bring to bear on the actual behavior of organizations and teams, your response seems to be resigned to accepting the status quo rather than working to change the incentives so that security risks and damages are internalized appropriately.
this kind of excuse-making is how we slide down various slippery slopes (e.g., how google's "don't be evil" eventually failed), even with eyes wide open. instead, the better response is to advocate legislative and organizational improvements while also working within your perceived constraints. it can be both, not either-or.
This comment expresses a misunderstanding. Merely not believing hard enough is not why threat modelling has failed. There is an incentive mechanism and this description of it can help them re-orient their strategy.
The counterfactual premise of threat modelling is that a business wants responsibility for mitigating or remediating risk without direct compensation, instead of a method to manage and transfer it. A technologist is just happy to solve problems, so they don't see this open loop as a source of value.
Very interesting comment, thank you for sharing your thoughts. I particularly enjoyed your depiction of security products.
Years ago, I advocated strongly for threat modelling and bringing the process "to the masses". I was a security consultant; it was easy for me to tell others what they should do.
Then I started working as an "internal" at a company large enough to offer me both many different perspectives and full access to things that needed threat modelling. Architects and developers I had only interacted with as my "customers" became my colleagues. And my point of view completely changed.
I agree with you: threat modelling, as it is "evangelized" today, faces strong resistance from the business as an expensive and negative activity. And when it is enjoyed, which is infrequent, the reasons are not those we think: threat modelling really is a "cool" experience when done correctly; participants learn a lot of things and usually enjoy the activity. Then comes the second round. And the third. And that's where you see it hit a wall: either you repeat the same thing, again and again, or participants don't have the time.
Today, I still do threat modelling, and quite actively. On my own, as you said. My colleagues don't even know I use this process to prepare my internal publications. I write threat advisories, which I then send to a little more than 300 architects, developers and project managers worldwide. My recommendations are the output of a threat modelling process; these advisories fly into the field in a "finished" state. Threat modelling shows its full strength in environments that can benefit from high capitalization of thought, such as when you deal with a large number of silos/teams that are exposed to very similar problems.
There is a very simple trick to catch wannabe threat modellers: look for threat models that include a "validate all input" recommendation or the like. If you see this in a threat model, you know you are dealing with someone who hasn't actually understood what threat modelling is and is simply repeating some tricks learned the evening before. I could also talk about what I think of security vendors that try to sell threat modelling as some sort of "transferable" skill that can be taught in one or two days to non-security professionals. But I will refrain.
Now about the manifesto. In my opinion, Adam Shostack deserves full credit, not for inventing threat modelling but for actually formalizing the process in its most efficient and final form (i.e. the four threat modelling "questions"). All attempts to complete his work (e.g. books, methodologies, articles, etc.) following his release of the four questions are nothing more than complicated "techniques" to derive threats from a diagram or a specification.
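To make that concrete, here is a toy rendering of the four-question frame; the helper function and the example answers are my own invention, not part of any formal methodology:

    FOUR_QUESTIONS = (
        "What are we working on?",
        "What can go wrong?",
        "What are we going to do about it?",
        "Did we do a good enough job?",
    )

    def review(answers):
        """Print a minimal threat-model summary, flagging unanswered questions."""
        for question in FOUR_QUESTIONS:
            print(question)
            print("  " + answers.get(question, "-- not answered yet --"))

    review({
        "What are we working on?": "An internal advisory distribution service",
        "What can go wrong?": "Advisories tampered with on their way to recipients",
        "What are we going to do about it?": "Sign the advisories; publish over TLS",
    })

The point is that the frame itself is tiny; everything else is technique layered on top of it.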
> People and collaboration over processes, methodologies, and tools.
This part I don't get. Processes, methodologies, and tools are how you prevent bad stuff from happening. They are exactly what you need over just whether people feel okay or are collaborating enough before they go home. You can value these things, but you absolutely should not put people over your processes, methodologies, and tools: those are there for when people inevitably fail. This is some of the poorest advice I've heard from anything related to threat modeling; it's corporate speak instead of security speak.
Processes, methodologies, and tools are the resources available to you to tackle the problem from a technical point of view, rather than goals unto themselves. People and collaboration are how you set those goals, and how you decide whether the processes in place fulfil them.
If you read the whole manifesto, it references process and consistency quite a few times. I think the point is that it's easier to fix the process than the people, so prioritize participation and collaboration over sticking to a specific approach.
Wow, I don't know if it's intentional on the part of the authors, but, on the contrary, I'd ask you: why? What is your threat model?
Do you think somebody may perform a MitM to replace the information on the website?
Do you think somebody could sniff your communications with that website and there could be any negative consequence for any party?
The HTTPS movement, in a lot of situations, suffers from a lack of threat modeling. There's very little to gain from HTTPS-protecting a page like the linked one; at the same time, it's true that the cost involved is minimal.
Just two weeks ago, security researchers (exactly the kind of people who would be interested in visiting this page) were targeted by hackers who used a still-unpatched (as far as we know) Chrome zero-day to install malware: https://blog.google/threat-analysis-group/new-campaign-targe...
If this page started getting popular, a MitM attack to inject malware is a very real possibility.
In the "A visual guide to SSH tunnels" thread. Not saying it's compromised, but why would fully security conscious people, experts even, not have https:// enabled ? Sounds sketchy as f** to me.
> In each of these cases, the researchers have followed a link on Twitter to a write-up hosted on blog.br0vvnn[.]io, and shortly thereafter, a malicious service was installed on the researcher’s system and an in-memory backdoor would begin beaconing to an actor-owned command and control server. At the time of these visits, the victim systems were running fully patched and up-to-date Windows 10 and Chrome browser versions.
Like, seriously? Security researchers run Windows 10 and Chrome? I mean, if that's their testing environment, I get it, but this sounds like they legit use that in the wild and click funny links sent to them by people claiming to be fellow hackers? Outsider here, but is that really how it is? I would think they'd use hardened OSes and !Chrome...
Yes, these are a lot of valid reasons. My point is that the GP did NOT threat model, and instead claimed a vague "I want HTTPS".
HTTPS is so easy to do now (and, hey, the manifesto page does offer an https version; it's just not HSTS-enabled) that there are few reasons not to use it.
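If you want to check a site for yourself, a quick probe with the Python standard library is enough (the URL below is a placeholder, not the manifesto's address):

    import urllib.request

    # Fetch the page over TLS and see whether the server asks browsers
    # to keep using HTTPS in the future (HSTS).
    with urllib.request.urlopen("https://example.com/") as resp:
        hsts = resp.headers.get("Strict-Transport-Security")
        print(hsts or "no Strict-Transport-Security header sent")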
My old ISP used to inject ads and other warnings into plain HTTP traffic.
I think it’s generally considered a good posture to be defensive and have privacy as default. Time has shown that information is power no matter how boring you think it is.