I consider the word "bug" to refer to an error or unintended functionality in the existing code, not a potential vulnerability in what is (hopefully) still a theoretical design. So if you're doing whiteboard threat modeling, the output should be "things not to do going forward." Adam, for his part, says:

"The only two real outputs I've ever seen from threat modeling are bugs and threat model documents. I've seen bugs work far better than documents in almost every case."
Or not. You see, there are two reasons why I think estimating probability is crucial to threat modeling. One is simply that motivation is the difference between targeted and opportunistic attacks. And there's a lot of difference between managing an opportunistic risk (make sure your virtual pants aren't down) and a targeted one (call in the brute squad and batten down the hatches).
But the other reason for considering probability in threat modeling, even in the design phase, is that you may already have constraints that you need to work within, and those constraints may carry their own risk. For example, a mandated connection to a third party: "We could be vulnerable to anyone who breaks into their network." The business will say "Too bad, do it anyway." As a result, you're stuck with something to mitigate, probably by putting in extra security controls that you otherwise wouldn't have needed. I consider this a to-do list, not a bug list.
Now, if you're working with an existing application when you do threat modeling -- and I've used Adam's most excellent Elevation of Privilege card game to do this -- then yes, the vulnerabilities you're identifying are most likely bugs, and they need to be triaged using probability as one input. (And the sad part is that the "winner" of an EoP card game is also the loser, with the largest number of bugs to go fix.)
Either way, though, the conversation with the project manager, business executives, and developers is always, always going to be about probability, even as a subtext. Even if they don't come out and say, "But who would want to do that?" or "Come on, we're not a bank or anything," they'll be thinking it when they estimate the cost of fixing the bug or putting in the mitigations. It's a lot better to get the probability assumptions out in the open, find out what they're based on, and have an honest conversation about them. (My favorite tool for doing that is a very simple, high-level diagram from FAIR.)
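To make that concrete, here's a rough sketch of the top-level decomposition that FAIR diagram walks through. FAIR itself doesn't prescribe any code, and every number below is invented for illustration; the point is just to show where the probability assumptions live:

```python
# Minimal sketch of FAIR's top-level breakdown: Risk ~= Loss Event Frequency x Loss Magnitude,
# where Loss Event Frequency = Threat Event Frequency x Vulnerability.
# All figures are made up for illustration, not from any real assessment.

def annualized_loss_exposure(threat_event_frequency: float,
                             vulnerability: float,
                             loss_magnitude: float) -> float:
    """threat_event_frequency: attempts per year;
    vulnerability: probability an attempt becomes a loss event;
    loss_magnitude: cost per loss event."""
    loss_event_frequency = threat_event_frequency * vulnerability
    return loss_event_frequency * loss_magnitude

# Opportunistic: lots of attempts, low chance any one succeeds, modest impact.
opportunistic = annualized_loss_exposure(200, 0.01, 5_000)

# Targeted: few attempts, much better odds for the attacker, bigger impact.
targeted = annualized_loss_exposure(2, 0.25, 250_000)

print(f"Opportunistic exposure: ${opportunistic:,.0f}/yr")
print(f"Targeted exposure:      ${targeted:,.0f}/yr")
```

The arithmetic isn't the point; writing the inputs down is what drags the "but who would want to do that?" assumption out into the open where you can argue about it.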
More than that, though, I always enjoy a conversation with Adam, whether it's over tapas or over the Intertubes. Same goes for Alan Shimel, who just added his two cents* about how blogging should be a conversation. It's a shame we can't always do it on Twitter, but that's a good place to start the fire.
* Adjusted for inflation and intrinsic value, that's now about $83,000.
=======
UPDATE: Adam came right back with another volley here. I'm too tired to think of another clever blog post title, so I'll just add it at this juncture ...
"I simply think the more you focus threat modeling on the “what will go wrong” question, the better. Of course, there’s an element of balance: you don’t usually want to be movie plotting or worrying about Chinese spies replacing the hard drive before you worry about the lack of authentication in your network connections."

Absolutely. And you'll have to keep track of all the things that could go wrong (with varying levels of probability and mitigation), including the ones that you just can't fully address for one reason or another, like the aforementioned third party connectivity. Or, to take Adam's example, the lack of authentication in your network connections may be a known problem that is going to be fixed Real Soon Now (unless the budget goes away), or can't be fixed (you don't run the infrastructure and have to convince someone else to fix it -- hello, cloud!). Known exceptions, mitigations, and problems that need to be solved at layer 8 and above all go into the list, especially for when the auditor comes around, or even the next pentester.
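For what it's worth, "the list" doesn't have to be anything fancy. Here's one sketch of what an entry might look like; the fields and statuses are my own illustration, not from any particular methodology or tool:

```python
# A minimal sketch of a running list of known risks and exceptions;
# field names and statuses are illustrative only.
from dataclasses import dataclass, field
from enum import Enum

class Status(Enum):
    FIX_PLANNED = "fix planned"        # Real Soon Now, budget permitting
    ACCEPTED = "accepted exception"    # the business said "do it anyway"
    EXTERNAL = "needs someone else"    # layer 8 and above, or the provider

@dataclass
class Finding:
    title: str
    likelihood: str                    # rough band, not a precise number
    impact: str
    mitigations: list[str] = field(default_factory=list)
    status: Status = Status.FIX_PLANNED

register = [
    Finding("Mandated third-party connection", "medium", "high",
            mitigations=["extra monitoring", "segmented network zone"],
            status=Status.ACCEPTED),
    Finding("No authentication on internal service calls", "high", "medium",
            status=Status.EXTERNAL),
]
```

Whatever form it takes, having it written down is what saves you when the auditor (or the next pentester) asks why that third party connection is still there.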
I also find that the design phase is a really good time to talk about ensuring availability and performance -- in short, making the application Rugged. (Yeah, I'm not a manifesto type myself, but the principles are still worth incorporating.) Helping the developers solve for those kinds of issues -- ones that probably stay longer on their radar -- also helps them be more open to the security vulnerabilities you're looking for.
(I'll write more on Rugged Software in another post.)
Thanks, Adam -- I'm getting hungry now for almond-stuffed, bacon-wrapped dates with goat cheese crumbles and a red wine reduction ...