Because I'm all about the "good enough."

Friday, April 24, 2015

Achievement unlocked?

This week was Hell Week for analysts, otherwise known as Meet All The People, Inspect All The Things, otherwise known as the RSA Conference. Everything was going as expected: I made it through all the speaking engagements (at least one a day this time), spent a little time on the expo floor making a video with the awesome @j4vv4d, did the press interviews, and kissed all the hands and shook all the babies in 30-minute meeting slots.

I was heading over to the Security Bloggers' Meetup, wearing some really spectacular (if you'll pardon the pun) blinking-LED sunglasses that Javvad had given me, and I decided to leave them on for the short walk across the street to Jillian's; I figured they would look good in the dark bar.

All of a sudden, some male conference-goer walks by me, and in passing, he tells me, "There's a switch on the earpiece of the glasses, probably on the right, and you can turn them off that way so they won't run down the battery."

WTaF. Is this guy really mansplaining to me HOW TO OPERATE MY OWN SUNGLASSES?

Yes. Yes, he was.

Now, this is only the most harmless of micro-aggressions compared to what other women go through ("I want to talk to an engineer, not a booth lady"), but what most people don't understand is why we don't take people's heads off at the time. It's simple: you're so stunned, you don't think of the right words until much later. Imagine someone comes up to you out of the blue and says, "Hey buddy, you're wearing socks, we're going to have to ask you to leave." Completely on automatic, you might say, "Oh, okay, sorry about that," and start moving before the rest of your brain finishes processing the "What?" And many of us are trained to be polite first and foremost, so it's a reflex that has to be overcome.

So I said to the guy, "THANK YOU FOR EXPLAINING THAT TO ME. I WOULD NEVER HAVE FIGURED IT OUT BY MYSELF."  (Blogger doesn't have a sarcasm font, but imagine my saying it in one.) And now I'm sure that this Derpasaurus Rex took that completely seriously and thought I was really thanking him. So I should have done better, but it did take a few more minutes for the incredulity to drain away, and then it was too late.

What causes this level of pea-brained sexism to happen? I don't normally encounter it, or at least not so that I'd notice. I'm neither young nor pretty, but I was wearing a skirt at the time, which I don't normally do. What thought process goes on to make someone decide that a middle-aged mother of two, minding her own business, urgently needs sunglasses instructions?

The best I can come up with is this: the guy was truly bothered by the sight of someone wearing blinking sunglasses (on top of the head) in daylight.

"That's wasteful. Oh, it's a woman. She must not know how to turn them off."

And it would never have occurred to him to go through the same thought process if it had been a man. He would have assumed the man had a good reason for leaving them turned on, and it might still have bothered him in some Derpy Engineer Syndrome fashion, but he would have let it go.

Anyway, that was the one surreal moment from the conference this week. I think I'll put away the skirt for next year.


Tuesday, January 27, 2015

Looking logically at legislation.

There's a lot of fuss around the recent White House proposal to amend the Computer Fraud and Abuse Act, and some level-headed analysis of it. There's also a lot of defensive and emotional reaction to it ("ZOMG we're going to be illegal!").

First of all, everyone take a deep breath. The reason why proposed changes are made public is to invite comment. This is a really good time to step up and give constructive feedback, not just say how much it sucks (although a large enough uproar will be taken into account anyway). Try assuming that nobody is "out to get you" -- assume that they're just trying to do the right thing, as you would want them to do for you. Put yourself in their shoes: if you had to figure out how to protect citizens and infrastructure against criminal "cyber" activity, and do it legally, how would you do it?

There's another really important point here, beyond the one that if you don't like it, suggest something more reasonable. Jen Ellis talks about the challenge of doing just that in her great post. And I agree with Jen that an intent-based approach may be the most likely avenue to pursue, although proving intent can be difficult. I'm looking forward to seeing concrete suggestions from others. As I've pointed out before, writing robust legislation or administrative rules is a lot like writing secure code: you have to check for all the use and abuse cases, plan for future additions, and make it all stand on top of legacy code that has been around for decades and isn't likely to change. We have plenty of security people who should be able to do this.

If they can't -- if there's no way to distinguish between security researchers and criminals in a way that allows us to prosecute the latter without hurting the former -- then maybe that's a sign that some people should rethink their vocations. (It also explains why society at large can't tell the difference, and doesn't like security researchers.) After a certain point, it's irrational to insist on your right to take actions just like a criminal, force other people to figure out the difference, and not suffer any consequences. If you want to continue to do what you're doing, step up and help solve the real problem.

Wednesday, December 10, 2014

Depends.

I've always had a problem with compliance, for a very simple reason: compliance is generally a binary state, whereas the real world is not. Nobody wants to hear that you're a "little bit compliant," and yet that's what most of us are.

Compliance surveys generally contain questions like this:

Q. Do you use full disk encryption?

A. Well, that depends. Some of our people are using full disk encryption on their laptops. They probably have that password synched to their Windows password, so I'm not sure how much good encryption would do if the laptops were stolen. We talked about doing full disk encryption on our servers. I think some of the newest ones have it. The rest will be replaced during the next hardware refresh, which I think is scheduled for 2016.

Q. So is that a yes, or a no?

A. Fine, I'll just say yes.

Or they might ask:

Q. Do you have a process for disabling user access?

A. It depends. We have a process written down in this here filing cabinet, but we don't know how many of our admins are using it. Then again, it could be a pretty lame process, but if you're an auditor asking whether we have one, the answer is yes.

Or even:

Q. Do you have a web application firewall?

A. No, I don't think so. ... Oh, we do? That's news to me. Okay, somewhere we apparently have a WAF. Wait, it's Palo Alto? Okay, whatever.

Q. Do you test all your applications for vulnerabilities?

A. That depends on what your definitions are of "test," "applications," and "vulnerabilities." Do we test the applications? Yes, using different methods. Does Nessus count? Do we test for all the vulnerabilities? Probably not. How often do we test them? Well, the ones in development get tested before release, unless it's an emergency fix, in which case we don't test it. The ones not in development -- that we know about -- might get tested once every three years. So I'd give that a definite yes.

The state of compliance is both murky and dynamic: anything you say you're doing right now might change next week. Can you get away with percentages of compliance? Yes, if you have something to count: "83% of our servers are compliant with the patch level requirements." But for all the rest, you have to decide what the definition of "is" is.
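
To make the counting concrete, here is a minimal sketch of reporting a control as a percentage rather than a yes/no; the host names and patch results are invented for illustration:

    # Hypothetical inventory: hostname -> whether it meets the patch-level requirement.
    patch_status = {
        "web-01": True,
        "web-02": True,
        "db-01": False,
        "db-02": True,
        "legacy-erp": False,
        "build-server": True,
    }

    compliant = sum(patch_status.values())
    pct = 100 * compliant / len(patch_status)

    # Prints: "67% of our servers are compliant with the patch level requirements."
    print(f"{pct:.0f}% of our servers are compliant with the patch level requirements.")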

Compliance assessments are really only as good as the assessor and the staff they're working with, along with the ability to measure objectively, not just answer questions. And I wouldn't put too much faith in surveys, because whoever is answering them will be motivated to put the best possible spin on the binary answer. It's easier to say "Yes" with your fingers crossed behind your back, or with a secret caveat, than to have the word "No" out where someone can see it.

In fact, your compliance question could be "Bro, do you even?" and it would probably be as useful.

Thursday, September 25, 2014

Shock treatment.

Another day, another bug ... although this one is pretty juicy. One of the most accessible primers on the Bash Bug is on Troy Hunt's blog.

As many are explaining, one of the biggest problems with this #shellshock vulnerability is that it's in bash, a shell that ships with most Unix and Linux operating systems -- which means it's everywhere, particularly in things that were built decades ago and in things that were never meant to be updated. There will be a lot of hand-wringing over this one.
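
For context, the widely circulated local check for this bug (CVE-2014-6271) just exports a crafted function-style environment variable and asks bash to run something harmless; a vulnerable bash also executes the injected command. A minimal sketch in Python, assuming bash is on the PATH (the variable name and the injected echo are arbitrary):

    import os
    import subprocess

    # Build an environment containing a crafted function-style variable.
    # On a vulnerable bash, the code after the function body also runs.
    env = dict(os.environ)
    env["testvar"] = "() { :;}; echo INJECTED"

    result = subprocess.run(
        ["bash", "-c", "echo probe"],
        env=env,
        capture_output=True,
        text=True,
    )

    if "INJECTED" in result.stdout:
        print("this bash appears vulnerable to Shellshock")
    else:
        print("no injected command ran; this bash looks patched")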

But I think I have a way to address it.

It's a worn-out analogy, but bear with me here. Windows in buildings. Now, we know glass is fragile to different extents, depending on how it's made. Imagine that we had hundreds or thousands of "glass researchers" who published things like this:
"Say, did you know that a 5-pound rock can break this kind of glass?" 
Whereupon business owners and homeowners say: 
"Oh jeez, okay, I guess we'd better upgrade the glass in our windows." 
Researchers:
"Say, did you know that a 10-pound rock can break this kind of glass?" 
Business- and homeowners:
"Sigh ... all right ... it's going to be expensive, but we'll upgrade." 
Researchers:
"Say, did you know that if you tap on a corner of the glass right over here that it'll break?" 
Business- and homeowners:
" ... " 
Researchers:
"Say, did you --" 
Business- and homeowners:
"WILL YOU FOR CHRISSESAKE GET A LIFE??"

Yes, glass is fragile. So is IT. We all know that. And we don't expect everyone in the world to have the same level of physical security that, say, bank vaults do.

If there's a rash of burglaries in a neighborhood, we don't blame the residents for not having upgraded to the Latest and Greatest Glass.* No, we go after the perps.

Without falling too much into the tactical-vest camp, I think we ought to invest more money and time into defending the Internet as a whole, by improving our ability to tag, find and neutralize -- er, prosecute -- attackers. Right now, the offerings in the security industry are heavily on the enterprise side -- because after all, especially in the case of the finservs, that's where the money is. There are some vendors who are trying to address critical infrastructure, automotive and health care, which are three areas where people can and eventually will die as a result of software breaches. But we shouldn't wait until that happens to go on the offensive. We need a lot more investment in Internet law enforcement.

This is a case where expecting the world at large to defend itself against an infinite number of attacks just doesn't make sense.

*If you think it's cheap to patch, you haven't worked in a real enterprise.

Thursday, September 18, 2014

A tenuous grasp on reality.

"Don't blog while angry," they say. Well, it's too late now.

One thing that has bothered me for years is the tendency for security recommendations to lean towards the hypothetical or the ideal. Yes, many of them are absolutely correct, and they make a lot of sense. However, they assume that you're starting with a blank slate. And how many people ever run into a blank IT slate in the real world?

Here are some examples.

"Don't have a flat network." Well, that's very nice. And it's too late; we already have one. Any idea how much time and effort and money it will cost to segment it out? Start with buying more/new network equipment; think about the chaos that IP address changes bring to multiple layers of the stack. Firewall rule changes (assuming you have them), OS-level changes, application changes (and you can bet that IP addresses are hard-coded all over the place). Think about the timing that a migration needs -- maybe Saturday after midnight until Sunday 6am in the local time zone? And figuring out the policies around which hosts really need to talk to other hosts, on which ports, because while you're playing 52-node pickup, you might as well put in some least privilege.

"Build security into applications early in the SDLC." Yes, absolutely, great idea. What are we going to do with the applications that are already there? Remember those calculations on how much it will cost to fix something that's already in production as opposed to fixing it in development? There's no way around it: you're going to need a bigger checkbook.

"Stay up-to-date on commercial software." Well, what if you're two years behind? (It really does happen.) Again, you're looking at a Project to implement this seemingly simple idea. From the ERP implementation that isn't supported by the vendor yet on the newest operating system, to the dozens of different JVM versions you've got running in production, this is a much more expensive recommendation than you realize.

And yes, these recommendations all hold true for what enterprises should do going forward. But in most cases, they need help making the changes from what they have and what they're doing today.

It would help so much more if, instead of couching recommendations and standards in terms of where you should already be, you talked about them in terms of how to get there. After all, security is a process, not an end state; a journey, not a destination. Everyone starts out from a different point, and needs different directions to know where to go. An organization with 200 security staff and thousands of applications can pivot in ways that an enterprise with 2 security staff and everything hosted by a third party cannot.

So recommendations might look like this: "From now on, you should not automatically grant administrative rights to each desktop user. And here's how you go about taking those rights away from the ones who already have it." I believe incorporating more flexible and realistic security principles will make them easier to swallow by the people who have to implement them.

Sunday, August 24, 2014

How to help.

There are a few movements afoot to help improve security, and the intentions are good. However, to my mind some are just more organized versions of what we already have too much of: pointing out what's wrong, instead of rolling up your sleeves and fixing it.

Here are examples of Pointing Out What's Wrong:

  • Scanning for vulnerabilities.
  • Creating exploits.
  • Building tools to find vulnerabilities.
  • Telling everyone how bad security is.
  • Creating detailed descriptions of how to address vulnerabilities (for someone else to do).
  • Creating petitions to ask someone else to fix security.
  • "Notifying" vendors to fix their security.
  • Proving how easy it is to break into something.
  • Issuing reports on the latest attack campaigns.
  • Issuing reports on all the breaches that happened last year.
  • Issuing reports on the malware you found.
  • Issuing reports on how many flaws there are in software you scanned.
  • Giving out a free tool that most orgs don't have the time or expertise to use.
  • Performing "incident response," telling the victim exactly who hacked them and how, and then leaving them with a long "to-do" list.

None of this is actually fixing anything. It's simply pointing out to someone else, who bears the brunt of the responsibility, "Hey, there's something bad there, you really should do something about it. Good luck. Oh yeah, here, I got you a shovel."

Now, if you would like to take actual steps to help make things more secure, here are some examples of what you could do:

  • Adopt an organization near you. Put in hours of time to make the fixes for them, on their actual systems, that they don't know how to do. Offer to read all their logs for them, on a daily basis, because they don't have anyone who has the time or expertise for that.
  • Fix or rewrite vulnerable software. Offer secure, validated components to replace insecure ones.
  • Help an organization migrate off their vulnerable OSes and software. 
  • Do an inventory of an organization's accounts -- user, system, and privileged accounts -- and lead the project to retire all unneeded accounts. Deal with the crabby sysadmins who don't want to give up their rlogin scripts. Field the calls from unhappy users who don't like the new strong password guidelines. Install and do the training and support on two-factor authentication.
  • Invent a secure operating system. Better yet, go work for the maker of an existing OS and help make it more secure out of the box.
  • Raise money for budget-less security teams to get that firewall you keep telling them they need. Find and hire a good analyst to run it and monitor it for them.
  • Help your local school district move its websites off of WordPress.
  • Host and run backups for organizations that don't have any.

And if you're just about to say, "But that takes time and effort, and it's not my problem," then at least stop pretending that you really want to help. Because actually fixing security is hard, tedious, thankless work, and it doesn't get you a speaker slot at a conference, because you probably won't be allowed to talk about it. Yes, I know you don't have time to help those organizations secure themselves. Neither do they. Naming, shaming and blaming are the easy parts of security -- and they're more about self-indulgence than altruism. Go do something that really fixes something.


Friday, May 30, 2014

Want some more bad news?

I didn't think so, but I had to share this anyway.

I was listening today to a presentation by the CTO of Dell SecureWorks, Jon Ramsey (who for some reason has not yet implored me to stop calling him "J-RAM"). He's always full of insights, but this one was both unsurprising and earth-shattering at the same time.

He pointed out that the half-life of a given security control is dependent upon how pervasive it is. In other words, the more widely it's used, the more attention it will be given by attackers. This is related to the Mortman/Hutton model for expectation of exploit use:


And yes, so far the water is still pretty wet. But he also pointed out that the pervasiveness of a control is driven by its requirement for compliance. 

In other words, once a particular security technology is required for compliance, its pervasiveness will go through the roof, and you've just lowered its effectiveness half-life to that of hydrogen-4. The very act of requiring more organizations to use it will kill off its utility. (And this is even worse than Josh Corman's denigration of PCI-DSS as equivalent to the "No Child Left Behind" act.)

Does this sound an awful lot like "security through obscurity" to you? Pete Lindstrom put it nicely when he said that this means the best security control is one that is the least used.

Now, we all know that security is an arms race between the adversary and the defender, and we also know that obscurity makes up a good portion of the defense on both sides. The adversary doesn't want you to know that he's figured out how to get through your control, and you don't want him to know that you know he's figured it out, so that you can keep on tracking and blocking him.

If so much of security relies on a contest of knowledge, then it's no wonder that so much of what we build turns into wet Kleenex at the drop of a (black) hat.

This means that we need more security controls that can't be subverted or deactivated through knowledge. In this case, "knowledge" often means the discovery of vulnerabilities. And the more complex a system you have, the more chances there are for vulnerabilities to exist. Getting fancier with the security technology and layering it both make it more complex.

So if we're trying to create better security, we could be going in the wrong direction.

The whole problem with passwords is a microcosm of this dilemma. A user gains entry by virtue of some knowledge that is completely driven by what he thinks he'll be able to remember. This knowledge can be stolen or guessed in a number of ways. We know this is stupid. But to turn this model on its head will require some innovation of extraordinary magnitude.

Can we design security controls that are completely independent of obscurity?

If you want to talk this over, you can find me in the bar.