Because I'm all about the "good enough."

Wednesday, November 25, 2015

Why the airplane analogy doesn't fly.

Don't get me wrong — I love Trey Ford. He is one of the most inspiring infosec pros I know. He's smart, creative, full of mind-blowing ideas, and has energy to spare. And I love his talk at SecTor about what we can learn about information sharing from the aviation industry.

There's just one problem: aviation isn't all that comparable to cybersecurity.

Imagine that instead of flying the plane herself, a pilot had to convince all the passengers on the flight, EVERY flight, to do the flying together. And many of them aren't good at it, and don't care; they just want to sleep or watch videos or whatever.

The passengers change all the time, so you can't keep them educated on what to do. Depending on the size of the plane, there may be tens or hundreds of thousands of passengers helping with the flying. Instead of a finite maintenance crew that's under the direct control of the airline, there are dozens or thousands of different crews from third-party companies, all doing their bits (or not).

The aircraft types range into the thousands, dating back to Kitty Hawk and up to the newest models, and most of them have at least some custom alterations that can be changed between flights, so the various manufacturers won't take responsibility for anything they didn't add. Remember, too, that each of those alterations was probably made for a good reason — or at least, a reason that was good at the time. (That's a huge part of what we don't know about breaches today: we sometimes know the chain of events and contributing vulnerabilities, but we make rash judgments about why they happened without knowing the full story.)

The airlines all have different ideas on how they should equip their planes, so some pilots have one of everything new and shiny, and others have to make do with duct tape and bags of pretzels. (And some airlines are just now thinking that maybe having a dedicated pilot is a good idea.)

Oh, and did I mention? The weather is actively trying to disrupt your flight, usually in a way that you won't notice until it's too late. (Although you still have to worry about hacktivist storm cells that want you to look bad.)

All of these differences highlight our challenge in security: because everything is so complicated, so flexible, and so NOT under our individual control, we can easily blame someone else for their breaches because they did things so differently. And the pilot is nominally in charge, so that's where we concentrate the attention, but even the pilot can't get the toddler in 32B to stop screaming and fly straight. I'm not even going to mention the armchair aviation enthusiasts who sit near the runway with binoculars and lasers and provide "helpful" critique. (Oops, I guess that slipped out.)

So how can we still make use of what we've learned from information sharing in aviation? As Trey says, we can at least collect data now in a way that we may be able and willing to share later. If only we had a black box that collected vital information about a breach in a way that didn't expose the inner workings of the business, or those custom-built additions. If only we could sanitize the data in a way that communicated the important lessons ("don't combine these tray tables with that boarding process, and especially don't add a pilot over 6 feet tall without upgrading the landing gear") but defanged our industry's reflexive attempts at a certain kind of blame ("how stupid was that? We'd never do that!").
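
To make that "black box" idea a little more concrete, here's a minimal sketch of what the sanitizing step might look like. Everything in it -- the field names, the incident record, the helper function -- is invented for illustration; it isn't drawn from any real sharing format.

    # Hypothetical sketch: keep only the generalizable lesson from an incident
    # record and drop anything that identifies the business or its custom
    # internals. All field names here are made up for illustration.
    SHAREABLE_FIELDS = {"attack_vector", "contributing_weaknesses", "lesson"}

    def sanitize_incident(record: dict) -> dict:
        """Return only the fields that are safe to share outside the company."""
        return {key: value for key, value in record.items() if key in SHAREABLE_FIELDS}

    incident = {
        "company": "Example Airways",              # identifying -- dropped
        "custom_modification": "in-house SSO",     # proprietary detail -- dropped
        "attack_vector": "reused phished credentials",
        "contributing_weaknesses": ["no MFA on the VPN"],
        "lesson": "don't expose single-factor VPN logins for shared admin accounts",
    }
    print(sanitize_incident(incident))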

When I consider all this, sometimes I despair that we'll ever figure it out. But with positive thinkers like Trey, we may just have a chance.

Monday, September 7, 2015

When your risk profile is different.

Ready for some (more) unfounded speculation?

Both people and organizations tend to want to keep their data within a circle of trust; it's why there has been (and continues to be) resistance to putting sensitive data in the cloud. It's a function of human nature to keep things close -- which is why people still keep files on their desktops or laptops, use USB drives, and run servers at home. You keep your treasures in an environment that you know best, and where you feel you have the most control over them.

According to the Washington Post, President Bill Clinton already had a personal email server at home; Hillary Clinton had a server that had been in use during her first presidential campaign in 2008, and that same server was then set up for her at home when she took the Secretary of State post.

Besides this controversy with her home email server (and yes, I commented on that on CNN, but they must not have liked most of what I had to say), I noticed the other day that Caroline Kennedy had apparently also been using personal email for State Department business. This suggests to me that they may have had a reason in common for doing this, one that hasn't been highlighted so far:

They both have a very different risk profile from most public officials.

When you're a celebrity -- independent of the position you currently hold -- your threat modeling has to include just about everyone. Any friends you have, any staff members you hire, could turn on you at any time for some perceived advantage. Now, Hillary might have known that the State Department was bad at securing its own systems, but I don't think that was it. I think she just couldn't trust staffers who worked for the agency and not for her personally. Any of them might try to access her email for political or personal reasons -- and let's face it: she's spent many, many years being embattled. The same would go for Caroline Kennedy, as well as anyone else who was famous before they took office.

In other words, their threat model holds colleagues to be a higher risk than hackers.

If you think this is surprising, you haven't been inside the minds of most non-security people. They have seen and experienced many more threats on a personal level than they have The Notorious A.P.T, so they will defend against the threat they believe in more.

None of us really knows how secure the server ended up being (although it looks like Hurricane Sandy caused natural disasters to become a more prominent part of the threat model, which is why they finally moved it to a provider with an actual data center), so I can't comment on that. Nor am I in any position to comment on the legal or classification issues, since those seem to be changing depending on who's got the microphone at any given time. But from a threat modeling perspective, I can absolutely understand why people want to hold their staff close and their data closer.

Oh, and by the way: if you can't view things from other people's perspectives, you're not going to be very good at threat modeling.

Saturday, May 16, 2015

Lessons in grown-up security.

Okay, so for the sake of those who can't say anything, I feel I have to say something.

Remember how much you hate people talking about things they don't understand? So do I. And let's face it: if you're not on the inside of an organization, you don't know 100% of what's going on there. Oftentimes it's less than 50%. And if it has to do with security, the percentage can drop as low as 10%.

The hysteria around Chris Roberts supposedly hacking a plane and "making it go sideways" has reached an all-time high. Which isn't to say it couldn't go higher, because media. But let's go through the versions here:

There's what he told people he did.
There's what they interpreted from what he said.
There's what he thought he did.
There's what he actually did.

Then there's the usual Telephone game of people misinterpreting, mis-reporting, and deliberately twisting all those things when they hear them second- and third-hand.

But one fact remains: there are people who actually know what's possible to do, and they ain't talking. Nor will they. Even if Roberts was talking complete bullshit, nobody on the inside is going to step forward and say it publicly. So in this case, silence does not equal assent.

We don't know whether the aircraft manufacturer already has experts doing pentesting, and they don't need any more, thankyouverymuch. Just because they're ignoring your reports doesn't mean they don't already know about what you think you're trying to say. They don't actually owe you an answer: "No, you didn't really get through, but if you had done THIS instead ..." Just because you decide to walk onto the court, it doesn't mean you get to be a player.

We don't know why United decided to come out with a bug bounty program, although it's mighty responsible of them NOT to encourage randoms to try hacking the avionics. Those who are complaining that avionics are excluded from the bug bounty program are completely clueless in that regard, and have probably never been personally responsible for anything more consequential than a runaway shopping cart.

There may be no truth at all to what the FBI claims Roberts did, and they're just prosecuting him because letting him go free would send the wrong message to other juvenile delinquents out there.

The bottom line is, if you're not actively working WITH the company whose technology you're researching, then you're an adversary. So don't be surprised if they treat you like one. United has every right to say to Roberts, "You didn't actually do anything harmful, but you're a dick, so stay off our airplanes."

You can be a security researcher, but in the immortal, wise words of @wilw: Don't be a dick.

Friday, April 24, 2015

Achievement unlocked?

This week was Hell Week for analysts, otherwise known as Meet All The People, Inspect All The Things, otherwise known as the RSA Conference. Everything was going as expected: I made it through all the speaking engagements (at least one a day this time), spent a little time on the expo floor making a video with the awesome @j4vv4d, did the press interviews, and kissed all the hands and shook all the babies in 30-minute meeting slots.

I was heading over to the Security Bloggers' Meetup, wearing some really spectacular (if you'll pardon the pun) blinking-LED sunglasses that Javvad had given me, and I decided to leave them on for the short walk across the street to Jillian's; I figured they would look good in the dark bar.

All of a sudden, some male conference-goer walks by me, and in passing, he tells me, "There's a switch on the earpiece of the glasses, probably on the right, and you can turn them off that way so they won't run down the battery."

WTaF. Is this guy really mansplaining to me HOW TO OPERATE MY OWN SUNGLASSES?

Yes. Yes, he was.

Now, this is only the most harmless of micro-aggressions compared to what other women go through ("I want to talk to an engineer, not a booth lady"), but what most people don't understand is why we don't take people's heads off at the time. It's simple: you're so stunned, you don't think of the right words until much later. Imagine someone comes up to you out of the blue and says, "Hey buddy, you're wearing socks, we're going to have to ask you to leave." Completely on automatic, you might say, "Oh, okay, sorry about that," and start moving before the rest of your brain finishes processing the "What?" And many of us are trained to be polite first and foremost, so it's a reflex that has to be overcome.

So I said to the guy, "THANK YOU FOR EXPLAINING THAT TO ME. I WOULD NEVER HAVE FIGURED IT OUT BY MYSELF."  (Blogger doesn't have a sarcasm font, but imagine my saying it in one.) And now I'm sure that this Derpasaurus Rex took that completely seriously and thought I was really thanking him. So I should have done better, but it did take a few more minutes for the incredulity to drain away, and then it was too late.

What causes this level of pea-brained sexism to happen? I don't normally encounter it, or at least not so that I'd notice. I'm neither young nor pretty, but I was wearing a skirt at the time, which I don't normally do. What thought process goes on to make someone decide that a middle-aged mother of two, minding her own business, urgently needs sunglasses instructions?

The best I can come up with is this: the guy was truly bothered by the sight of someone wearing blinking sunglasses (on top of the head) in daylight.

"That's wasteful. Oh, it's a woman. She must not know how to turn them off."

And it would never have occurred to him to go through the same thought process if it had been a man. He would have assumed the man had a good reason for leaving them turned on, and it might still have bothered him in some Derpy Engineer Syndrome fashion, but he would have let it go.

Anyway, that was the one surreal moment from the conference this week. I think I'll put away the skirt for next year.

Tuesday, January 27, 2015

Looking logically at legislation.

There's a lot of fuss around the recent White House proposal to amend the Computer Fraud and Abuse Act, and some level-headed analysis of it. There's also a lot of defensive and emotional reaction to it ("ZOMG we're going to be illegal!").

First of all, everyone take a deep breath. The reason why proposed changes are made public is to invite comment. This is a really good time to step up and give constructive feedback, not just say how much it sucks (although a large enough uproar will be taken into account anyway). Try assuming that nobody is "out to get you" -- assume that they're just trying to do the right thing, as you would want them to do for you. Put yourself in their shoes: if you had to figure out how to protect citizens and infrastructure against criminal "cyber" activity, and do it legally, how would you do it?

There's another really important point here beyond "if you don't like it, suggest something more reasonable." Jen Ellis talks about the challenge of doing just that in her great post. And I agree with Jen that an intent-based approach may be the most likely avenue to pursue, although proving intent can be difficult. I'm looking forward to seeing concrete suggestions from others. As I've pointed out before, writing robust legislation or administrative rules is a lot like writing secure code: you have to check for all the use and abuse cases, plan for future additions, and make it all stand on top of legacy code that has been around for decades and isn't likely to change. We have plenty of security people who should be able to do this.

If they can't -- if there's no way to distinguish between security researchers and criminals in a way that allows us to prosecute the latter without hurting the former -- then maybe that's a sign that some people should rethink their vocations. (It also explains why society at large can't tell the difference, and doesn't like security researchers.) After a certain point, it's irrational to insist on your right to take actions just like a criminal, force other people to figure out the difference, and not suffer any consequences. If you want to continue to do what you're doing, step up and help solve the real problem.

Wednesday, December 10, 2014


I've always had a problem with compliance, for a very simple reason: compliance is generally a binary state, whereas the real world is not. Nobody wants to hear that you're a "little bit compliant," and yet that's what most of us are.

Compliance surveys generally contain questions like this:

Q. Do you use full disk encryption?

A. Well, that depends. Some of our people are using full disk encryption on their laptops. They probably have that password synched to their Windows password, so I'm not sure how much good encryption would do if the laptops were stolen. We talked about doing full disk encryption on our servers. I think some of the newest ones have it. The rest will be replaced during the next hardware refresh, which I think is scheduled for 2016.

Q. So is that a yes, or a no?

A. Fine, I'll just say yes.

Or they might ask:

Q. Do you have a process for disabling user access?

A. It depends. We have a process written down in this here filing cabinet, but we don't know how many of our admins are using it. Then again, it could be a pretty lame process, but if you're an auditor asking whether we have one, the answer is yes.

Or even:

Q. Do you have a web application firewall?

A. No, I don't think so. ... Oh, we do? That's news to me. Okay, somewhere we apparently have a WAF. Wait, it's Palo Alto? Okay, whatever.

Q. Do you test all your applications for vulnerabilities?

A. That depends on what your definitions are of "test," "applications," and "vulnerabilities." Do we test the applications? Yes, using different methods. Does Nessus count? Do we test for all the vulnerabilities? Probably not. How often do we test them? Well, the ones in development get tested before release, unless it's an emergency fix, in which case we don't test it. The ones not in development -- that we know about -- might get tested once every three years. So I'd give that a definite yes.

The state of compliance is both murky and dynamic: anything you say you're doing right now might change next week. Can you get away with percentages of compliance? Yes, if you have something to count: "83% of our servers are compliant with the patch level requirements." But for all the rest, you have to decide what the definition of "is" is.
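
If you do have something to count, the arithmetic is trivial. Here's a minimal sketch, assuming you have an inventory you can enumerate and a per-asset check; the server names and the "patched" flag are invented for illustration:

    # Hypothetical sketch: report compliance as a percentage over an asset
    # inventory instead of a single yes/no answer. Names and checks are made up.
    servers = {
        "web-01": {"patched": True},
        "web-02": {"patched": True},
        "db-01":  {"patched": False},
    }

    compliant = sum(1 for attrs in servers.values() if attrs["patched"])
    percent = 100 * compliant / len(servers)
    print(f"{percent:.0f}% of servers meet the patch-level requirement "
          f"({compliant} of {len(servers)})")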

Compliance assessments are really only as good as the assessor and the staff they're working with, along with the ability to measure objectively, not just answer questions. And I wouldn't put too much faith in surveys, because whoever is answering them will be motivated to put the best possible spin on the binary answer. It's easier to say "Yes" with your fingers crossed behind your back, or with a secret caveat, than to have the word "No" out where someone can see it.

In fact, your compliance question could be "Bro, do you even?" and it would probably be as useful.

Thursday, September 25, 2014

Shock treatment.

Another day, another bug ... although this one is pretty juicy. One of the most accessible primers on the Bash Bug is on Troy Hunt's blog.

As many are explaining, one of the biggest problems with this #shellshock vulnerability is that it's in Bash, a shell that ships with most Unix and Linux systems -- which means it's everywhere, particularly in things that were built decades ago and in things that were never meant to be updated. There will be a lot of hand-wringing over this one.
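
For the curious, the widely circulated local check for the underlying bug (CVE-2014-6271) boils down to setting a crafted environment variable and seeing whether bash executes the command tacked onto the end of it. Here's a minimal sketch of that check in Python; it assumes bash is on the PATH and only tests the original variant, not the follow-on CVEs:

    import subprocess

    # The classic public test string for CVE-2014-6271: a vulnerable bash
    # parses the environment variable as a function definition and then
    # executes the trailing command when it starts up.
    env = {"PATH": "/usr/bin:/bin", "probe": "() { :; }; echo INJECTED"}

    result = subprocess.run(
        ["bash", "-c", "echo shellshock check"],
        env=env, capture_output=True, text=True,
    )
    if "INJECTED" in result.stdout:
        print("this bash still executes commands smuggled in via the environment")
    else:
        print("this bash ignored the injected command")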

But I think I have a way to address it.

It's a worn-out analogy, but bear with me here. Windows in buildings. Now, we know glass is fragile to different extents, depending on how it's made. Imagine that we had hundreds or thousands of "glass researchers" who published things like this:
"Say, did you know that a 5-pound rock can break this kind of glass?" 
Whereupon business owners and homeowners say: 
"Oh jeez, okay, I guess we'd better upgrade the glass in our windows." 
"Say, did you know that a 10-pound rock can break this kind of glass?" 
Business- and homeowners:
"Sigh ... all right ... it's going to be expensive, but we'll upgrade." 
"Say, did you know that if you tap on a corner of the glass right over here that it'll break?" 
Business- and homeowners:
" ... " 
"Say, did you --" 
Business- and homeowners:

Yes, glass is fragile. So is IT. We all know that. And we don't expect everyone in the world to have the same level of physical security that, say, bank vaults do.

If there's a rash of burglaries in a neighborhood, we don't blame the residents for not having upgraded to the Latest and Greatest Glass.* No, we go after the perps.

Without falling too much into the tactical-vest camp, I think we ought to invest more money and time into defending the Internet as a whole, by improving our ability to tag, find, and prosecute attackers. Right now, the offerings in the security industry are heavily on the enterprise side -- because after all, especially in the case of the finservs, that's where the money is. There are some vendors who are trying to address critical infrastructure, automotive, and health care, which are three areas where people can and eventually will die as a result of software breaches. But we shouldn't wait until that happens to go on the offensive. We need a lot more investment in Internet law enforcement.

This is a case where expecting the world at large to defend itself against an infinite number of attacks just doesn't make sense.

*If you think it's cheap to patch, you haven't worked in a real enterprise.