Because I'm all about the "good enough."

Friday, May 30, 2014

Want some more bad news?

I didn't think so, but I had to share this anyway.

I was listening today to a presentation by the CTO of Dell SecureWorks, Jon Ramsey (who for some reason has not yet implored me to stop calling him "J-RAM"). He's always full of insights, but this one managed to be unsurprising and earth-shattering at the same time.

He pointed out that the half-life of a given security control is dependent upon how pervasive it is. In other words, the more widely it's used, the more attention it will be given by attackers. This is related to the Mortman/Hutton model for expectation of exploit use:


And yes, so far the water is still pretty wet. But he also pointed out that the pervasiveness of a control is driven by whether it's required for compliance.

In other words, once a particular security technology is required for compliance, its pervasiveness will go through the roof, and you've just lowered its effectiveness half-life to that of hydrogen-4. The very act of requiring more organizations to use it will kill off its utility. (And this is even worse than Josh Corman's denigration of PCI-DSS as equivalent to the "No Child Left Behind" act.)
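To make the point concrete, here's a back-of-the-napkin sketch. This is my own toy arithmetic, not the actual Mortman/Hutton model, and every number in it is invented: the idea is just that the attacker attention a control attracts grows with its pervasiveness, so its effective half-life shrinks as compliance drives adoption up.

```python
# Toy illustration only (not the Mortman/Hutton model; all figures are made up):
# assume attacker attention scales with pervasiveness, so a control's
# effectiveness decays faster the more widely it's deployed.

def effectiveness(pervasiveness: float, years_deployed: float,
                  base_half_life: float = 8.0) -> float:
    """Fraction of original effectiveness left after `years_deployed` years.

    `pervasiveness` is the fraction of organizations using the control (0..1).
    `base_half_life` is an invented figure for a control nobody bothers to target;
    the effective half-life shrinks as pervasiveness grows.
    """
    half_life = base_half_life / (1.0 + 10.0 * pervasiveness)  # arbitrary scaling
    return 0.5 ** (years_deployed / half_life)

# The same control, niche vs. mandated-by-compliance, three years in:
print(f"niche (5% adoption), 3 yrs:     {effectiveness(0.05, 3):.0%} effective")
print(f"mandated (95% adoption), 3 yrs: {effectiveness(0.95, 3):.0%} effective")
```

Fiddle with the made-up scaling factor all you like; the shape of the curve is the point. The niche control is still mostly working after three years, while the mandated one is basically decoration.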

Does this sound an awful lot like "security through obscurity" to you? Pete Lindstrom put it nicely when he said that this means the best security control is the one that's least used.

Now, we all know that security is an arms race between the adversary and the defender, and we also know that obscurity makes up a good portion of the defense on both sides. The adversary doesn't want you to know that he's figured out how to get through your control, and you don't want him to know that you know he's figured it out, so that you can keep on tracking and blocking him.

If so much of security relies on a contest of knowledge, then it's no wonder that so much of what we build turns into wet Kleenex at the drop of a (black) hat.

This means we need more security controls that can't be subverted or deactivated through knowledge. In this case, "knowledge" often means the discovery of vulnerabilities, and the more complex a system is, the more chances there are for vulnerabilities to exist. Both getting fancier with security technology and layering more of it on make the system more complex.

So if we're trying to create better security, we could be going in the wrong direction.

The whole problem with passwords is a microcosm of this dilemma. A user gains entry by virtue of some knowledge that is completely driven by what he thinks he'll be able to remember. This knowledge can be stolen or guessed in any number of ways. We know this is stupid. But turning this model on its head will require innovation of extraordinary magnitude.
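For a sense of scale (again, my own illustrative numbers, not anything from the talk), compare the guess space of a password a human can actually remember with a truly random one:

```python
# Rough arithmetic, invented figures: the "knowledge" a user can remember
# spans a far smaller guess space than the character set suggests, which is
# why it's so cheap to guess or steal.
import math

def bits_of_entropy(guess_space: int) -> float:
    """Entropy in bits, assuming every candidate in the space is equally likely."""
    return math.log2(guess_space)

# Hypothetical numbers for illustration only:
common_words = 20_000                    # a dictionary word...
memorable_space = common_words * 100     # ...with a digit or two tacked on
random_8_char = 72 ** 8                  # 8 truly random chars from a 72-symbol set

print(f"memorable password:     ~{bits_of_entropy(memorable_space):.0f} bits")
print(f"random 8-char password: ~{bits_of_entropy(random_8_char):.0f} bits")
```

Twenty-ish bits versus about fifty. The gap is the whole problem: what's memorable is, almost by definition, what's guessable.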

Can we design security controls that are completely independent of obscurity?

If you want to talk this over, you can find me in the bar.