Because I'm all about the "good enough."

Wednesday, December 10, 2014

Depends.

I've always had a problem with compliance, for a very simple reason: compliance is generally a binary state, whereas the real world is not. Nobody wants to hear that you're a "little bit compliant," and yet that's what most of us are.

Compliance surveys generally contain questions like this:

Q. Do you use full disk encryption?

A. Well, that depends. Some of our people are using full disk encryption on their laptops. They probably have that password synched to their Windows password, so I'm not sure how much good encryption would do if the laptops were stolen. We talked about doing full disk encryption on our servers. I think some of the newest ones have it. The rest will be replaced during the next hardware refresh, which I think is scheduled for 2016.

Q. So is that a yes, or a no?

A. Fine, I'll just say yes.

Or they might ask:

Q. Do you have a process for disabling user access?

A. It depends. We have a process written down in this here filing cabinet, but we don't know how many of our admins are using it. Then again, it could be a pretty lame process, but if you're an auditor asking whether we have one, the answer is yes.

Or even:

Q. Do you have a web application firewall?

A. No, I don't think so. ... Oh, we do? That's news to me. Okay, somewhere we apparently have a WAF. Wait, it's Palo Alto? Okay, whatever.

Q. Do you test all your applications for vulnerabilities?

A. That depends on what your definitions are of "test," "applications," and "vulnerabilities." Do we test the applications? Yes, using different methods. Does Nessus count? Do we test for all the vulnerabilities? Probably not. How often do we test them? Well, the ones in development get tested before release, unless it's an emergency fix, in which case we don't test it. The ones not in development -- that we know about -- might get tested once every three years. So I'd give that a definite yes.

The state of compliance is both murky and dynamic: anything you say you're doing right now might change next week. Can you get away with percentages of compliance? Yes, if you have something to count: "83% of our servers are compliant with the patch level requirements." But for all the rest, you have to decide what the definition of "is" is.
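
If you do have something to count, the arithmetic is the easy part. Here's a minimal sketch -- the inventory file, its columns, and the required patch level are all hypothetical stand-ins for whatever your asset database actually exports:

    import csv

    # Hypothetical inventory export: one row per server, with a "patch_level" column.
    REQUIRED_PATCH_LEVEL = "2014-11"

    with open("server_inventory.csv") as f:
        servers = list(csv.DictReader(f))

    compliant = [s for s in servers if s["patch_level"] >= REQUIRED_PATCH_LEVEL]
    print("{:.0%} of servers meet the patch level requirement".format(
        len(compliant) / len(servers)))

The hard part, of course, is having an inventory complete enough to be worth counting against in the first place.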

Compliance assessments are really only as good as the assessor, the staff they're working with, and the ability to measure objectively rather than just answer questions. And I wouldn't put too much faith in surveys, because whoever is answering them will be motivated to put the best possible spin on the binary answer. It's easier to say "Yes" with your fingers crossed behind your back, or with a secret caveat, than to have the word "No" out where someone can see it.

In fact, your compliance question could be "Bro, do you even?" and it would probably be as useful.




Thursday, September 25, 2014

Shock treatment.

Another day, another bug ... although this one is pretty juicy. One of the most accessible primers on the Bash Bug is on Troy Hunt's blog.

As many are explaining, one of the biggest problems with this #shellshock vulnerability is that it lives in Bash, a core component of the Unix and Linux operating systems -- which means it's everywhere, particularly in things that were built decades ago and in things that were never meant to be updated. There will be a lot of hand-wringing over this one.
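
For the curious, the bug boils down to bash executing commands smuggled in after a function definition stored in an environment variable. Here's a quick way to check a given bash binary, wrapping the widely circulated one-liner test (a sketch only; exact output varies by version and patch level):

    import os
    import subprocess

    # The widely circulated Shellshock test: export a function definition in an
    # environment variable with an extra command appended after the closing brace.
    # A vulnerable bash executes that trailing command while importing the function.
    env = dict(os.environ, x="() { :;}; echo VULNERABLE")
    result = subprocess.run(["bash", "-c", "echo test complete"],
                            env=env, capture_output=True, text=True)
    print(result.stdout, end="")
    # A patched bash prints only "test complete"; an unpatched one prints
    # "VULNERABLE" first, because it ran the code hidden in the variable x.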

But I think I have a way to address it.

It's a worn-out analogy, but bear with me here: windows in buildings. Now, we know glass is fragile to different extents, depending on how it's made. Imagine that we had hundreds or thousands of "glass researchers" who published things like this:
"Say, did you know that a 5-pound rock can break this kind of glass?" 
Whereupon business owners and homeowners say: 
"Oh jeez, okay, I guess we'd better upgrade the glass in our windows." 
Researchers:
"Say, did you know that a 10-pound rock can break this kind of glass?" 
Business- and homeowners:
"Sigh ... all right ... it's going to be expensive, but we'll upgrade." 
Researchers:
"Say, did you know that if you tap on a corner of the glass right over here that it'll break?" 
Business- and homeowners:
" ... " 
Researchers:
"Say, did you --" 
Business- and homeowners:
"WILL YOU FOR CHRISSAKE GET A LIFE??"

Yes, glass is fragile. So is IT. We all know that. And we don't expect everyone in the world to have the same level of physical security that, say, bank vaults do.

If there's a rash of burglaries in a neighborhood, we don't blame the residents for not having upgraded to the Latest and Greatest Glass.* No, we go after the perps.

Without falling too much into the tactical-vest camp, I think we ought to invest more money and time into defending the Internet as a whole, by improving our ability to tag, find, and neutralize -- er, prosecute -- attackers. Right now, the offerings in the security industry are weighted heavily toward the enterprise side -- because after all, especially in the case of the finservs, that's where the money is. There are some vendors who are trying to address critical infrastructure, automotive and health care, three areas where people can and eventually will die as a result of software breaches. But we shouldn't wait until that happens to go on the offensive. We need a lot more investment in Internet law enforcement.

This is a case where expecting the world at large to defend itself against an infinite number of attacks just doesn't make sense.



*If you think it's cheap to patch, you haven't worked in a real enterprise.

Thursday, September 18, 2014

A tenuous grasp on reality.

"Don't blog while angry," they say. Well, it's too late now.

One thing that has bothered me for years is the tendency for security recommendations to lean towards the hypothetical or the ideal. Yes, many of them are absolutely correct, and they make a lot of sense. However, they assume that you're starting with a blank slate. And how many people ever run into a blank IT slate in the real world?

Here are some examples.

"Don't have a flat network." Well, that's very nice. And it's too late; we already have one. Any idea how much time and effort and money it will cost to segment it out? Start with buying more/new network equipment; think about the chaos that IP address changes bring to multiple layers of the stack. Firewall rule changes (assuming you have firewalls), OS-level changes, application changes (and you can bet that IP addresses are hard-coded all over the place). Think about the timing a migration needs -- maybe Saturday after midnight until Sunday 6am in the local time zone? And then there's figuring out the policies around which hosts really need to talk to which other hosts, on which ports -- because while you're playing 52-node pickup, you might as well put in some least privilege.

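If you do take on that last part, the raw material is usually already sitting in your flow or firewall logs. A first-pass sketch -- the log file and its columns are hypothetical, so substitute whatever your NetFlow or firewall export really looks like:

    import csv
    from collections import Counter

    # Hypothetical export of observed connections: src_ip, dst_ip, dst_port.
    observed = Counter()
    with open("connections.csv") as f:
        for row in csv.DictReader(f):
            observed[(row["src_ip"], row["dst_ip"], row["dst_port"])] += 1

    # The most common flows are candidates for explicit "allow" rules; everything
    # else is a candidate for denial, or at least a question for the app owners.
    for (src, dst, port), count in observed.most_common(20):
        print(f"{src} -> {dst}:{port}  seen {count} times")
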
"Build security into applications early in the SDLC." Yes, absolutely, great idea. What are we going to do with the applications that are already there? Remember those calculations on how much it will cost to fix something that's already in production as opposed to fixing it in development? There's no way around it: you're going to need a bigger checkbook.

"Stay up-to-date on commercial software." Well, what if you're two years behind? (It really does happen.) Again, you're looking at a Project to implement this seemingly simple idea. From the ERP implementation that isn't yet supported by the vendor on the newest operating system, to the dozens of different JVM versions you've got running in production, this is a much more expensive recommendation than you realize.

And yes, these recommendations all hold true for what enterprises should do going forward. But in most cases, they need help getting there from what they have and what they're doing today.

It would help so much more if, instead of couching recommendations and standards in terms of where you should already be, you talked about them in terms of how to get there. After all, security is a process, not an end state; a journey, not a destination. Everyone starts out from a different point and needs different directions. An organization with 200 security staff and thousands of applications can pivot in ways that simply aren't available to an enterprise with 2 security staff and everything hosted by a third party.

So recommendations might look like this: "From now on, you should not automatically grant administrative rights to each desktop user. And here's how you go about taking those rights away from the ones who already have them." I believe that framing security principles in more flexible and realistic terms will make them a lot easier to swallow for the people who have to implement them.
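
And the "taking those rights away" part starts with finding out who has them today. Here's a rough sketch for a single Windows desktop, using the built-in "net localgroup" command -- the allow-list is hypothetical, the output parsing is deliberately crude, and running this across a whole fleet is exactly the kind of unglamorous project this post is about:

    import subprocess

    APPROVED = {"Administrator", "CORP\\Desktop Support"}   # hypothetical allow-list

    # "net localgroup Administrators" lists the members of the local admin group.
    output = subprocess.run(["net", "localgroup", "Administrators"],
                            capture_output=True, text=True).stdout
    skip = ("Alias", "Comment", "Members", "-", "The command")
    members = [line.strip() for line in output.splitlines()
               if line.strip() and not line.startswith(skip)]

    for member in members:
        if member not in APPROVED:
            print(f"unexpected local admin: {member}")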


Sunday, August 24, 2014

How to help.

There are a few movements afoot to help improve security, and the intentions are good. However, to my mind some are just more organized versions of what we already have too much of: pointing out what's wrong, instead of rolling up your sleeves and fixing it.

Here are examples of Pointing Out What's Wrong:

  • Scanning for vulnerabilities.
  • Creating exploits.
  • Building tools to find vulnerabilities.
  • Telling everyone how bad security is.
  • Creating detailed descriptions of how to address vulnerabilities (for someone else to do).
  • Creating petitions to ask someone else to fix security.
  • "Notifying" vendors to fix their security.
  • Proving how easy it is to break into something.
  • Issuing reports on the latest attack campaigns.
  • Issuing reports on all the breaches that happened last year.
  • Issuing reports on the malware you found.
  • Issuing reports on how many flaws there are in software you scanned.
  • Giving out a free tool that requires time and expertise to use -- which most orgs don't have.
  • Performing "incident response," telling the victim exactly who hacked them and how, and then leaving them with a long "to-do" list.

None of this is actually fixing anything. It's simply pointing out to someone else, who bears the brunt of the responsibility, "Hey, there's something bad there, you really should do something about it. Good luck. Oh yeah, here, I got you a shovel."

Now, if you would like to take actual steps to help make things more secure, here are some examples of what you could do:

  • Adopt an organization near you. Put in hours of time to make the fixes for them, on their actual systems, that they don't know how to do. Offer to read all their logs for them, on a daily basis, because they don't have anyone who has the time or expertise for that.
  • Fix or rewrite vulnerable software. Offer secure, validated components to replace insecure ones.
  • Help an organization migrate off their vulnerable OSes and software. 
  • Do an inventory of an organization's accounts -- user, system, and privileged accounts -- and lead the project to retire all the unneeded ones (a first-pass sketch follows this list). Deal with the crabby sysadmins who don't want to give up their rlogin scripts. Field the calls from unhappy users who don't like the new strong password guidelines. Install and do the training and support on two-factor authentication.
  • Invent a secure operating system. Better yet, go work for the maker of an existing OS and help make it more secure out of the box.
  • Raise money for budget-less security teams to get that firewall you keep telling them they need. Find and hire a good analyst to run it and monitor it for them.
  • Help your local school district move its websites off of WordPress.
  • Host and run backups for organizations that don't have any.

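To make the account-inventory bullet concrete, here's roughly what a first pass looks like on a single Unix host: flag every local account that still has a login shell, so a human can decide which ones should be retired. (The set of "real" shells and the UID cutoff are assumptions; every environment's list is different, and the directory-service version of this is a much bigger job.)

    import pwd

    # Hypothetical policy: anything with an interactive shell and a human-range
    # UID is an account somebody has to vouch for.
    LOGIN_SHELLS = {"/bin/bash", "/bin/sh", "/bin/zsh", "/usr/bin/ksh"}

    for entry in pwd.getpwall():
        if entry.pw_shell in LOGIN_SHELLS and entry.pw_uid >= 1000:
            print(f"review: {entry.pw_name} (uid {entry.pw_uid}, shell {entry.pw_shell})")
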
And if you're just about to say, "But that takes time and effort, and it's not my problem," then at least stop pretending that you really want to help. Because actually fixing security is hard, tedious, thankless work, and it doesn't get you a speaker slot at a conference, because you probably won't be allowed to talk about it. Yes, I know you don't have time to help those organizations secure themselves. Neither do they. Naming, shaming and blaming are the easy parts of security -- and they're more about self-indulgence than altruism. Go do something that really fixes something.


Friday, May 30, 2014

Want some more bad news?

I didn't think so, but I had to share this anyway.

I was listening today to a presentation by the CTO of Dell SecureWorks, Jon Ramsey (who for some reason has not yet implored me to stop calling him "J-RAM"). He's always full of insights, but this one was both unsurprising and earth-shattering at the same time.

He pointed out that the half-life of a given security control is dependent upon how pervasive it is. In other words, the more widely it's used, the more attention it will be given by attackers. This is related to the Mortman/Hutton model for the expectation of exploit use.


And yes, so far the water is still pretty wet. But he also pointed out that the pervasiveness of a control is driven by whether it's required for compliance.

In other words, once a particular security technology is required for compliance, its pervasiveness will go through the roof, and you've just lowered its effectiveness half-life to that of hydrogen-4. The very act of requiring more organizations to use it will kill off its utility. (And this is even worse than Josh Corman's denigration of PCI-DSS as equivalent to the "No Child Left Behind" act.)
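
To make the half-life framing concrete, here's a toy model -- emphatically not the Mortman/Hutton model, just an illustration of the claim. Assume attacker attention, and therefore the decay rate of a control's effectiveness, scales with how widely the control is deployed:

    import math

    K = 0.5  # arbitrary scaling constant, for illustration only

    def effectiveness(years, adoption):
        # Toy assumption: decay rate is proportional to adoption (0..1).
        return math.exp(-K * adoption * years)

    for adoption in (0.05, 0.50, 0.95):
        half_life = math.log(2) / (K * adoption)
        print(f"adoption {adoption:.0%}: half-life ~{half_life:.1f} years, "
              f"effectiveness after 3 years ~{effectiveness(3, adoption):.0%}")

The numbers are made up, but the shape is the point: mandate a control for everyone, and you shorten the window in which it's worth anything.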

Does this sound an awful lot like "security through obscurity" to you? Pete Lindstrom put it nicely when he said that this means the best security control is the one that is least used.

Now, we all know that security is an arms race between the adversary and the defender, and we also know that obscurity makes up a good portion of the defense on both sides. The adversary doesn't want you to know that he's figured out how to get through your control, and you don't want him to know that you know he's figured it out, so that you can keep on tracking and blocking him.

If so much of security relies on a contest of knowledge, then it's no wonder that so much of what we build turns into wet Kleenex at the drop of a (black) hat.

This means that we need more security controls that can't be subverted or deactivated through knowledge. In this case, "knowledge" often means the discovery of vulnerabilities. And the more complex a system you have, the more chances there are for vulnerabilities to exist. Getting fancier with the security technology, and layering more of it on, both make the system more complex.

So if we're trying to create better security, we could be going in the wrong direction.

The whole problem with passwords is a microcosm of this dilemma. A user gains entry by virtue of some knowledge that is completely driven by what he thinks he'll be able to remember. This knowledge can be stolen or guessed in a number of ways. We know this is stupid. But to turn this model on its head will require some innovation of extraordinary magnitude.

Can we design security controls that are completely independent of obscurity?

If you want to talk this over, you can find me in the bar.



Saturday, March 22, 2014

The power of change.

[Yeah, I know, it's been a long time since I updated this blog. When you write for a living, you tend to write the things you get paid for first, and often you don't have any time or ideas left over after that.]

So much of security is about doing it, not just having it. The best products can be useless in the wrong hands: you have something that's supposed to be blocking, but you have it only in logging mode. Or you have it in blocking mode, but you only enabled a few of the available rules. You have it installed in the wrong location on your network because that's the only place you could put it. It tells you what you need to know, but you never have time to look at it. And so on.

I believe that most of security relies on detecting and controlling change. And there are so many aspects to change that have to be considered.

  1. Recognizing change. Do you know how your systems, operations and usage patterns are supposed to behave? This is probably the biggest challenge, believe it or not. Just knowing your baseline -- across everything that happens in your environment -- is impossible for any one group or team. This baseline knowledge is institutional, and will be spread across staff at every level. The network team may know what normal traffic looks like, but they may not know what makes it normal.
  2. Detecting change. This might sound like it's the same as the item above, but it's not. Detection implies both recognition and timeliness. It requires finding what has changed, when, in what way, and by how much (a bare-bones sketch follows this list).
  3. Understanding change. This is the next link in the chain: if you know what has changed, and the details, then you should be able to understand the root cause of the change (someone was trying to restore a database) and the implications of the change (it clobbered the production version and now it has to be rebuilt, before the stock markets open in the morning). Or you realize that the change happened for a particular reason, and it will likely be followed by other changes (a connection is made to an unknown external system, and your sensitive data is about to head in that direction).
  4. Initiating change. Making a change happen is often interwoven with politics. Why do you want to initiate the change? Are you allowed to request it? Who will execute the change? These are all big questions when, for example, you are trying to get a security hole plugged ("Turn off that SMTP relay! Now!!").
  5. Designing change. If you understand the effects of a change, you may need to make sure it happens with certain timing, in a certain sequence, on certain systems, done by certain people. You have to design the change, especially if it's a complex one -- say, a migration off of Windows XP.
  6. Controlling change. Making a change happen the way you designed it to happen, within the time frame that you need, without political or organizational fallout, is harder than you think. It may take longer than you hoped for that application's vulnerabilities to be patched. If changes need to be approved by other entities, you may need to persuade them to give that approval. The person assigned to execute the change doesn't actually know how to do it, and is going to do it wrong. Reverting a change could be more complicated than making the original change. 
  7. Preventing change. Most people think that security is all about this part, but it's not. Any business requires change, and it's up to the security team to help ensure that change happens in the right ways. In some cases it's still important to stop some changes from occurring (such as malware being deposited), but the prevention has to be fine-grained enough to address only those change cases. For example, this setting can be changed, but it should never be done on Sundays. This rule can be changed, but only by people in this role. Or something can be changed, but it has to create a notification for someone else who needs to know about it. Or these types of changes should not be made at all unless they are logged.
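
For item 2, the bare-bones version of detecting change is just a baseline and a diff. Here's a minimal sketch for configuration files -- the watched paths and the baseline location are hypothetical, and real file-integrity monitoring tools do this far more thoroughly:

    import hashlib
    import json
    import os

    WATCHED = ["/etc/passwd", "/etc/ssh/sshd_config"]   # hypothetical watch list
    BASELINE_FILE = "baseline.json"

    def fingerprint(path):
        with open(path, "rb") as f:
            return hashlib.sha256(f.read()).hexdigest()

    current = {p: fingerprint(p) for p in WATCHED if os.path.exists(p)}

    if os.path.exists(BASELINE_FILE):
        with open(BASELINE_FILE) as f:
            baseline = json.load(f)
        for path, digest in current.items():
            if baseline.get(path) != digest:
                print(f"changed since baseline: {path}")
    else:
        print("no baseline yet; recording one now")

    with open(BASELINE_FILE, "w") as f:
        json.dump(current, f, indent=2)

Knowing that the file changed is the easy part; items 1 and 3 -- knowing what it was supposed to look like and what the change means -- are where the institutional knowledge comes in.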


Business rules, as well as risk management decisions*, will dictate how changes are initiated, designed, approved, executed, recorded and evaluated. And if you don't have a good handle on these business and risk requirements, you'll be severely hindered in detecting unauthorized changes and responding to them.

You can't use malware detection if it takes too much work to figure out what a false positive is. Rather than being able to understand those changes, you'll ignore alerts about them until a bigger change happens as a result. (Or until you get a call from the Secret Service.)

There's no point in running a vulnerability scanner if you can't cause fixes to be made based on the findings. Moreover, you shouldn't use a vulnerability scanner as a "compliance detection" tool: it may tell you that your systems are configured exactly the way your policies specified and they haven't changed, but you may have stupid policies.

If you can't figure out the effects of a change, then you will either try to prevent it out of fear, or you will execute it wrongly. If you don't know how a change is made, you can't design a process that will limit collateral damage.

Luckily, you can start mastering all these aspects of changes without spending a lot of money. Just knowing what your systems, applications and users are supposed to do is a huge start towards effective security -- and it takes time, but it's cheap. After that, the road will be different for every organization, because the changes, effects and control points will be different. There will be things you can't change, changes you can't detect, or business processes that constrain (or mandate) change. Look at what power you have over change, to figure out how secure you can be.


*Notice I didn't say "security best practices."