Because I'm all about the "good enough."

Friday, December 23, 2011

The Security Poverty Line, and junk food.

I've given talks on this before, and published a report (available for free here), but I haven't really put everything in one place until now.  I coined the term "security poverty line" to describe those organizations that, for one reason or another (usually a lack of IT funds), can't afford to reach an effective level of security, much less compliance with security regulations.

When you don't have a lot of IT money, you can't afford your own IT staff (or you go with whatever you can borrow or rent).  This means you don't have in-house expertise to maintain a decent level of security controls and monitoring, even assuming you get systems and networks configured right to begin with.  As we all know, security is an ongoing process, and if you have Jane the IT Girl as your sole resource, she's going to be too busy troubleshooting problems and installing new systems to be able to maintain the existing ones in a proactive fashion.

Organizations below the SPL tend to be inordinately dependent on third parties for this reason, and since they're so dependent, they have less direct control over the security of the systems they use.  They also end up ceding risk decisions to third parties that they ideally should be making themselves.

And they don't have resources for luxuries, such as separate systems for different tasks, or different personnel to achieve segregation of duties.  They'll tend to throw everything on the existing old hardware until it breaks, or until the performance is so unacceptable that they're forced into paying for more.  (This is why nobody should be surprised that the public sector has to struggle with crowded, antiquated systems.  How many taxpayers are going to pay for upgrades just to keep everything new and shiny, when the old systems were working just fine?)  They'll share data and networks with partners. They'll use the cheapest software they can find regardless of its quality or security.  And they'll have all sorts of kludges and back doors to make administration easier for whoever they can convince to do it.

So although some people see the failure to achieve compliance or effective security as simply a matter of attitude ("if you really cared about auto safety, you'd buy a Mercedes!"), it's not that simple.  Even upgrading and untangling a set of legacy systems can double the cost of migration to a new platform, due to system inertia and missing institutional knowledge.  Any consultant who has had to step in to one of these environments to fix something knows what it's like to pull on one thread that appears to have an obvious solution, and discover that it's attached to too many other things that can't be easily changed.

Not only that, but certain types of security technology are more expensive than others.  In a talk I gave at the UNITED Security Summit this year, I showed some figures from some back-of-the-envelope surveying I did on what $2,000 can buy you; slides are available here. (Even $2k is a lot of money to justify spending on security for a lot of these organizations.)  As it turns out, most of the affordable security technology is the oldest kind, the least effective, and mostly preventive in nature -- firewalls, antivirus, and a scanner that will tell you what's wrong with your systems that you can't afford to fix.  The newer stuff, especially anything that involves proactive work and monitoring, is out of reach.  Enterprises below the SPL are not only stuck with the equivalent of burgers and fries, they can't afford any vegetables (thanks to Alan Shimel for making this more explicit).

Open source, you say?  Tell me if your dentist's office uses open source software, and who there knows how to install and maintain it.  Open source software is expensive when you include the expertise needed for support.  (I chatted with Alan about this one time when a recording was running.)

What this means is that many organizations that slip into security poverty tend to get trapped there.  Unless they can afford to do a greenfield transfer to a provider with a squeaky-clean new network and managed security services, they will just keep patching what they have, and only do the minimum that is going to fend off their biggest, most visible risk: the auditor.  Rather than continuing to beat them with the compliance stick until morale improves, we need to make security services more affordable (and there are some providers who are working on just that).  We need to build security into products and deliver them already secured, so that security isn't an add-on luxury. We also need to create more hands-on resources -- perhaps as a community service -- that poorer organizations can draw on, not just to give them guidelines, but to adapt them to what they can afford to do.

And finally, we need to be able to state clearly what effective security looks like.  The great thing about compliance (yes, I really did just write that) is that you know when you're done.  When the last box has been checked, you have that sense of accomplishment, and it's straightforward to know whether you pass or not.  I challenge anyone in the security community to tell me what, say, a 50-person company needs to buy -- even assuming they have a blank check -- to make sure they are doing everything necessary to manage their risk.  (Hell, I challenge anyone to tell me what their risk is without using colors.)

At least there's a food pyramid (or plate, or whatever -- they keep changing it) to describe the minimum daily requirements for nutrition.  What should be on the security plate for a healthy organization?

Update: I talked with Tracy Kitten about the topic here.

Guest Post: The Angry Angry CISO.

I'm happy to publish a guest post from someone I'll call the Angry Angry CISO.  Obviously they speak only for themselves, but boy howdy, do they go well with popcorn.  Enjoy.


It’s a tough time to be a CISO in an enterprise of any size these days.  I don’t want to be a whiner, but when you look at the challenges being faced by the folks who play permanent defense, things are looking pretty bleak.  APTs to the right of us, auditors to the left of us, onward – onward – into the Valley of Compliance…

But that’s not what I’m here to talk to you about today.

I’m here because so many in the “security researcher” community have become -- well, hypocrites.

Lemme s’plain.  No, it’s too much – I sum up.

When the CarrierIQ story broke, what happened in our community?  People went berserk.  “How could they do this?”  “It’s EEEEEEVVVVIIILLLLL!!!”  “They should be prosecuted to the fullest extent of the law!” and on and on and on.

And for what?

For something that the vast majority of them would have been cackling in glee about had someone in a black t-shirt and questionable personal hygiene been presenting it in a meeting room of a hotel in Vegas.  Had the CarrierIQ tech been revealed by a “researcher,” it would have been seen as further evidence of the total incompetence of the carriers, phone manufacturers, and phone OS providers.  Had a “researcher” presented CarrierIQ, anyone who said, “Gee, this tool could be used for underhanded and devious things” would have been scolded into submission on the Twitterz because, after all, The Community Needs These Things.

Yeah, right.

What gets this CISO angry (amongst diverse other things) with the community is that we have developed a serious case of situational ethics.  We readily explain away the things we do that could negatively impact the security and privacy of millions of people as “projects”, “proofs of concept”, and “just plain old hacking”, but throw a complete conniption fit when a corporation does the same.  Are we that special?  Or does being a hacker make one impervious to irony?

Look – I expect hackers to be hackers.  I know that any piece of technology I own or gets deployed in the factory is going to get hacked at some point.  I accept that.  I also expect companies to be companies.  I know that anything I buy for myself or the factory probably is gathering information for the vendor to use in marketing, etc.  I accept that.

So should you.

The Angry Angry CISO, when not writing as part of anger management treatment, is the head of information security for a medium-sized enterprise somewhere in North America.  The Angry Angry CISO speaks only for the Angry Angry CISO.

Tuesday, December 20, 2011

Remember, predictions make a ...

Oh, no, I almost went there.  Pull up!  PULL UP!

'Tis the season for half of the security world to make predictions, and the other half to make fun of them.  Why do we even bother to make predictions, anyway?  In the analyst world, it's another chance not only to show you've been thinking hard about these topics, but also to talk about what you'd like to see happen.  Predictions can be a great way of starting conversations, if you look at them the right way.  (If you look at them the wrong way, they're great for raising a huge chorus of "Nuh-UH!" or even "You're kidding, right?  Call the coroner?")

But let's have some fun with unofficial "predictions" that are intended, as the horoscopes say, for entertainment purposes only:

  1. Big Data, having shed its sizeist origins and become Total Data, will go on to become Totally Leaked Data.
  2. Security teams will finally get invited to the table -- that is, the table at the pub where they can drink and commiserate with the legal, HR and audit departments.
  3. PCI will become the most widely used de facto security standard for cloud services.*
  4. Personal feuds will break out among security researchers and they'll start hax0ring each other, leaving the rest of us to breathe a little easier as we polish our Generation Z Firewalls.
  5. Patent wars will escalate among security vendors, causing a new crop of IT lawyers to go shopping for Maseratis and stimulate the economy.**
  6. Some enterprise somewhere will try to ban all email attachments in an effort to stop phishing, and text-only messaging on retro CRTs will become hipsta.
  7. Someone will try, and fail, to rename The Cloud into something more ambiguous.
  8. Security conferences will become Big Business, and some people will leave their hands-on security jobs to run them full-time.
  9. An analyst will issue a prediction with an actual number in it.  However, this number will be an attempt to quantify a qualitative metric, so it will be useless.  "GRC dashboards will be 15% greener!"
  10. Nobody will make risk management any more understandable than it is today.
*Okay, I slipped in something a little too close to the truth.
**You're probably wondering how I came up with such a far-fetched idea.

Now that I've gotten these published, feel free to refer back to them at this same time next year, and if any of them are proven wrong, you'll get your money back.  Guaranteed.

Tuesday, December 13, 2011

That's not a bug, it's a creature.

Adam Shostack posted a great expansion on the very short Twitter conversation we had regarding threat modeling.  I think we agree on most things, but I sense a little semantic disconnect in some things that he says:
The only two real outputs I’ve ever seen from threat modeling are bugs and threat model documents. I’ve seen bugs work far better than documents in almost every case.
I consider the word "bug" to refer to an error or unintended functionality in the existing code, not a potential vulnerability in what is (hopefully) still a theoretical design.  So if you're doing whiteboard threat modeling, the output should be "things not to do going forward."

Or not.  You see, there are two reasons why I think estimating probability is crucial to threat modeling.  One is simply that motivation is the difference between targeted and opportunistic attacks.  And there's a lot of difference between managing an opportunistic risk (make sure your virtual pants aren't down) and a targeted one (call in the brute squad and batten down the hatches).

But the other reason for considering probability in threat modeling, even in the design phase, is that you may already have constraints that you need to work within, and those constraints may carry their own risk.  For example, a mandated connection to a third party: "We could be vulnerable to anyone who breaks into their network."  The business will say "Too bad, do it anyway."  As a result, you're stuck with something to mitigate, probably by putting in extra security controls that you otherwise wouldn't have needed.  I consider this a to-do list, not a bug list.

Now, if you're working with an existing application when you do threat modeling -- and I've used Adam's most excellent Elevation of Privilege card game to do this -- then yes, the vulnerabilities you're identifying are most likely bugs, and they need to be triaged using probability as one input.  (And the sad part is that the "winner" of an EoP card game is also the loser, with the largest number of bugs to go fix.)

Either way, though, the conversation with the project manager, business executives, and developers is always, always going to be about probability, even as a subtext.  Even if they don't come out and say, "But who would want to do that?" or "Come on, we're not a bank or anything," they'll be thinking it when they estimate the cost of fixing the bug or putting in the mitigations.  It's a lot better to get the probability assumptions out in the open, find out what they're based on, and have an honest conversation about them.  (My favorite tool for doing that is a very simple, high-level diagram from FAIR.)
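Getting those probability assumptions out in the open can be as simple as writing them down as numbers and multiplying. Here's a minimal, hypothetical sketch of that FAIR-style reasoning -- the function name and all the figures are invented for illustration, and real FAIR works with ranges and distributions rather than point estimates:

```python
# A minimal, hypothetical sketch of the FAIR-style conversation: state
# the probability (frequency) assumption explicitly and multiply it by
# an estimated loss magnitude. Real FAIR analysis uses ranges and
# distributions; these single point estimates are purely illustrative.

def annualized_loss(loss_event_frequency, loss_magnitude):
    """Expected annual loss: how often we expect the loss event per
    year, times what each occurrence would cost us."""
    return loss_event_frequency * loss_magnitude

# Two made-up scenarios: an opportunistic attack that is frequent but
# cheap per event, and a targeted one that is rare but expensive.
opportunistic = annualized_loss(loss_event_frequency=4.0, loss_magnitude=5_000)
targeted = annualized_loss(loss_event_frequency=0.1, loss_magnitude=500_000)

print(opportunistic)  # 20000.0
print(targeted)       # 50000.0
```

The point isn't the arithmetic; it's that once the frequency assumption is a number on the table, "but who would want to do that?" becomes a discussion about evidence instead of a conversation-stopper.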

More than that, though, I always enjoy a conversation with Adam, whether it's over tapas or over the Intertubes.  Same goes for Alan Shimel, who just added his two cents* about how blogging should be a conversation.  It's a shame we can't always do it on Twitter, but that's a good place to start the fire.

* Adjusted for inflation and intrinsic value, that's now about $83,000.

UPDATE:  Adam came right back with another volley here.  I'm too tired to think of another clever blog post title, so I'll just add it at this juncture ...
I simply think the more you focus threat modeling on the “what will go wrong” question, the better. Of course, there’s an element of balance: you don’t usually want to be movie plotting or worrying about Chinese spies replacing the hard drive before you worry about the lack of authentication in your network connections.
Absolutely.  And you'll have to keep track of all the things that could go wrong (with varying levels of probability and mitigation), including the ones that you just can't fully address for one reason or another, like the aforementioned third party connectivity.  Or, to take Adam's example, the lack of authentication in your network connections may be a known problem that is going to be fixed Real Soon Now (unless the budget goes away), or can't be fixed (you don't run the infrastructure and have to convince someone else to fix it -- hello, cloud!).  Known exceptions, mitigations, and problems that need to be solved at layer 8 and above all go into the list, especially for when the auditor comes around, or even the next pentester.

I also find that the design phase is a really good time to talk about ensuring availability and performance -- in short, making the application Rugged.  (Yeah, I'm not a manifesto type myself, but the principles are still worth incorporating.)  Helping the developers solve for those kinds of issues -- ones that probably stay longer on their radar -- also helps them be more open to the security vulnerabilities you're looking for.

(I'll write more on Rugged Software in another post.)

Thanks, Adam -- I'm getting hungry now for almond-stuffed, bacon-wrapped dates with goat cheese crumbles and a red wine reduction ...

Wednesday, December 7, 2011

What your analyst wishes you knew.

Not naming names here, but these are a few things that some industry analysts would like you to know:

  1. If you claim that your product is the "world's only" or the "first," I will be tempted to prove you wrong, and nine times out of ten, I can.
  2. Please don't assume that I'm not technical. Make your presentation detailed, and I will ask you if there's something I don't understand, but starting out by explaining what a firewall is does not win any points.
  3. You know how some vendors send out a marketing email using the latest headlines as soon as they come out?  That's ambulance chasing.  Don't be that guy.
  4. One-way webinar-style briefings are a waste of your time and mine. We have a chance for a good, personal dialogue; don't throw it away on a cattle call.
  5. I'm happy to hear about any factual errors I've made in a report about you, but if you object to an adjective I used ... sorry.  Not changing it to fit your marketing better.
  6. Yes, I talk to your competition.  Don't worry, I love you all equally.
  7. I try hard to find something positive to say in all my reports.  If I can't, then I don't write one. If I haven't written about your latest ... you might try asking me why before complaining.
  8. Yes, it's very nice that you're in the Magic Quadrant.  It doesn't have anything to do with my analysis, though.
  9. If you want to meet up at a conference, that's great, but please book EARLY.  Especially for RSA.
  10. At the end of the day, this is only my opinion.  As an analyst, I reserve the right to be wrong.

Baby, it's Veracode outside ...

Just read Veracode's chilling new State of Software Security Report, Volume 4 (I'm just waiting for the Greatest Hits to come out), and it's pretty depressing.  Among those organizations that use Veracode in any capacity -- testing their own applications or someone else's -- things haven't gotten all that much better.

As I've said once or twice in my talks, I like to learn about how security goes wrong; how the best-laid plans of CISOs gang aft agley.  One of my favorite German words is Sollbruchstelle:  the place where something is supposed to break or tear, such as a perforation.  And as my lovely and talented spouse points out, "Sollbruchstelle heißt nicht unbedingt Wollbruchstelle" -- just because something is supposed to break at a particular place doesn't mean it will.  For this reason, I'm interested in other data from reports like Veracode's and those of other vendors.  Why are we not seeing more progress in securing software?  Is it really just a matter of awareness and education, or is it something more?

Reading between the lines of the Veracode report, we see the statistics on the number of applications that never get resubmitted for testing, but not much explanation as to why they didn't.  The authors seem a little puzzled by this, but it makes a lot of sense to me: the applications probably never got resubmitted because they haven't been fixed yet.  I'd love to see data over a longer period of time for those enterprises to see which fixes got made quickly, which ones took longer, and which ones were pretty much abandoned.

I tried doing a meta-analysis at one point among the various "state of the state" reports for application security to see whether there was a large difference in findings between dynamic and static testing.  The effort sort of fell apart for a number of reasons: one, because not all reports described the data in the same way, and two, because vendors are increasingly melding static with dynamic both in their tools and in their revenue streams.  In the few places where I was able to do a one-to-one comparison (as best as I understood the language in the report), the statistics from static analysis tools were all very similar to one another, and the dynamic ones were also very similar for a common set of vulnerabilities; there were marked gaps between the two types of testing results.

Of course we know that static analysis and dynamic testing are two different beasts; that's no surprise.  But I'll be very interested in seeing how the two are bridged going forward.  I think there needs to be some sort of translation between the two before a significant amount of correlation can be done; and Dinis Cruz thinks it ought to be an abstraction layer. This is a good idea, but before the results can be abstracted, they need to be normalized.  We can have all the visibility we want into how the application is functioning in real time, but unless we can describe all the testing with the same words, we won't achieve enough understanding of the problem space.  (Yeah, I'm getting my Wittgenstein on.)
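To make the normalization point concrete, here's a hypothetical sketch: translate each tool's own finding labels into one shared vocabulary (CWE IDs are one plausible choice) before trying to correlate the two result sets. The tool labels and the mapping tables are entirely invented:

```python
# Hypothetical sketch of "normalize before you abstract": map static
# and dynamic findings into one shared vocabulary (here, CWE IDs) so
# the two result sets can actually be correlated. The finding labels
# and mapping tables are made up for illustration.

STATIC_TO_CWE = {"tainted_sql_string": "CWE-89", "format_string": "CWE-134"}
DYNAMIC_TO_CWE = {"sql_injection_response": "CWE-89", "reflected_xss": "CWE-79"}

def normalize(findings, mapping):
    """Translate tool-specific finding labels into CWE IDs, dropping
    anything the mapping doesn't cover."""
    return {mapping[f] for f in findings if f in mapping}

static_hits = normalize(["tainted_sql_string", "format_string"], STATIC_TO_CWE)
dynamic_hits = normalize(["sql_injection_response", "reflected_xss"], DYNAMIC_TO_CWE)

# Correlation only becomes meaningful once both sides speak the same language:
confirmed = static_hits & dynamic_hits    # flagged by both tools
static_only = static_hits - dynamic_hits  # e.g. perhaps not reachable at runtime
```

Until something like that shared vocabulary exists, "static found 40 flaws and dynamic found 25" tells you almost nothing about whether they're the same 25.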

So kudos to Veracode for adding some more to the shared knowledge out there.  It's clear that we have more work to do, and only outcome data will really point us in the right direction.  We need to understand why and where things break before we can make sustainable progress.

My opinions, let me show you them.

Well, this is really Tripwire's fault.  I realized that not only am I one of just two ch1XX0rs on the list (along with the incredibly smart Allison Miller), but that we two were the only ones without a blog.

So, um, hi.  It should go without saying (so I'll say it here anyway) that anything I write here isn't on behalf of my daytime employer.  I may link to reports I've written, and they're behind a paywall -- sorry about that.  This blog will contain the thoughts and opinions I don't get paid for, which means they're probably worth what you're paying for them.

Please keep your arms and legs inside the drama at all times, and enjoy your flight.