Because I'm all about the "good enough."

Saturday, December 29, 2012

Levelling up in the real world.

Here's a great post from Victor Wong on What They Don't Tell You About Promotions.

All of his points are so, so true -- and I thought I'd add some more from my own experiences and perspective. There are a lot of misconceptions out there about what entitles you to a promotion, so let me get those out of the way first:

What does not get you promoted:
  • Being the oldest person on your team. (Really, some people seem to believe this.) It's not about how old you are; management or senior positions are not about babysitting other people.
  • Being in your position the longest. Your position does not expire after a certain date, and you don't level up just by doing your job.
  • Doing your job the best of everyone else on the team. It's not about how well you do what's expected of you; it's about what you do above and beyond your job description.
  • Needing the money. Sorry, but that is not a sufficient reason for your boss to actually give you more money, much less move you to another position. You have to prove that you're worth it.
  • Working the hardest on your team. Again, it's not about fulfilling your current responsibilities. If you are working much harder than others, your boss might be looking at you and thinking, "This person doesn't know when to stop." Or your boss might decide, "We really need this person to keep the group afloat, so we're not going to change her job." It might even be, "This person has to work harder to do the same job as everyone else -- he's not as competent."
  • Taking a course or two to prepare for your next level. Courses are nice, but there's no guarantee that you can actually execute on what you've learned. You need to prove that you can do the next higher job by actually doing it. Think of a promotion as an acknowledgment of what you've already been doing rather than a change into a brand-new set of responsibilities that you haven't done yet.

Here are some other things that will keep you from getting promoted:
  • Not playing well with others. If you upset people inside or outside the team, it creates extra work for your boss, who has to smooth things over. When you create extra work for your boss, you are totally not getting rewarded for it. 
  • Taking a negative view of things. If you complain about other people or your workload, or talk about customers as if they're idiots, you're not going to level up. Nobody likes a pill.
  • Having no helpful ideas of your own. Your boss wants a problem-solver who can be trusted to do it the right way without creating other problems (see above). Just reporting on problems isn't enough.
  • Not seeing the big picture. If you are thinking only about your current job or your current team, you're not thinking big enough. You need to prove that you can approach things from your boss's perspective (or that of your boss's boss). Even better, you should be coming up with ideas that they haven't (but that they like).
  • Not doing the job your boss wants you to do. You may be the most brilliant person in the world who is going to change the whole industry; you may think you have the right answers (and in some cases that might even be true). But people who haven't managed teams have no idea how annoying it is to have an employee who won't just do his fricking job because he thinks he knows better. If you don't agree with your boss on how to do things, go find another boss. You'll be doing everyone a favor.

And finally, here's one that not a lot of people think about:

Being irreplaceable. Yes, being irreplaceable will keep you from being promoted. If you are so key to operations that you can't take a vacation or sick day without things falling apart, you are not going to get moved to a different position so that things can fall apart full-time. If you are already a manager, your job is to make sure your team has the skills and empowerment to take care of anything that comes up in your absence. A succession plan is vitally important in every organization. Your own boss will feel much better knowing that you have a stable and successful team, and knowing that you're not endangering operations by indulging your ego's need to feel special.

When you are looking out for the welfare of your organization instead of focusing on what you can get for yourself, that's when you'll be given the chance to do more and own more. 


Tuesday, November 20, 2012

Sure, I'll be your unicorn.

I was fascinated to read about the cancellation of the British Ruby conference amid arguments that the speaker lineup lacked diversity.  Other people have their own opinions on why we have this problem and what we should do about it.  I've spent a lot of my career as a hiring manager, trying to walk the fine line between encouraging diversity and slipping into tokenism; I've also had to be as impartial as possible in selecting conference talks (full disclosure: I'm on the RSA 2013 committee).

As someone who has a chromosome allocation that has traditionally been in short supply in IT, I'm used to being the only one of my kind in the room. If that makes me a unicorn from time to time, there's not much I can do about it, short of leaving the room, and that kind of defeats the purpose of my being there in the first place, which is simply to learn from and contribute to whatever is going on. If I've ever been exceptionalised, it wasn't in any way that I could detect, but then I don't know what discussions are held behind the scenes.

But here's the thing, the most important thing: What we see every day is what we expect.

Our brains are hardwired that way, so that we spend less processing time trying to re-analyze and make predictions about stuff that we've experienced before. It goes on without our realizing it, and you can sometimes tell it's happening when you find yourself paying more attention to something than you usually do; it means that something is different from what your brain was used to. We expect humans to be walking on two legs, and so we notice anyone we see on crutches or in wheelchairs. We are used to women with hair on their heads, so a woman with visible hair loss receives a lot of attention (as I did in the middle of chemo when I visited my kids' school playground). This is natural.

So if we want to hack our brains, we have to be conscious about it, and put some effort into changing our subconscious expectations of the way things ought to be. This is why I applaud conference organizers such as Chuck Hardy who work on soliciting paper submissions (and more than one B-Sides conference does this too; I've volunteered to be a speaker mentor for London 2013). Reaching out to anyone who is different from yourself -- and helping them along if necessary so that nobody can complain about quality -- is what we need to do to change our experience, and therefore our expectations, of who is seen onstage.

I don't know whether I've ever been invited to speak simply on account of being female, but if it happens, I'm okay with it. Haters gonna hate, and they probably won't change their minds on why I was selected just because I gave a pretty good talk. But at least I have a chance with the rest of the audience: to change their experience regardless of whether I'm a quality presenter (we're all used to seeing bad male speakers, aren't we? Why shouldn't I be allowed the same opportunity to fail?).

So if you want me as your unicorn, I'll take one for the team. If it means opening up the door a little wider in the future for other people who look different, then I still think it's worth doing.



Saturday, August 18, 2012

Pre-rejected CFP submissions.

Here are some of my planned conference submissions that I thankfully abandoned early in the process:

"Increasing Security Awareness Using Wall-to-Wall Counseling"
Most security awareness training is less effective than it could be.  Introducing a physical reminder component boosted our compliance levels to 450% (but did necessitate a new carpet from time to time).

"Zero-Day Exploits For CP/M"
There are critical risks to data integrity for every enterprise using WordStar.  Help us get the word out about these frightening vulnerabilities that have been around for DECADES.

"A Meta-Discussion on Meta-Talks at Security Conferences"
A disturbing trend in security conferences is meta-talks that have nothing to do with, like, pwning stuff.  Presentations on burnout, sexism, career advice, economics, recruiting, food, exercise and the like, usually about what's wrong with the security industry, are replacing actual knowledge transfer involving shell scripts, cookie abuse and lockpicking. Our whole community is in danger of extreme navel-gazing.  This presentation aims to point out the meta-risks of meta-talks.

"On a New Certification For Security Professionals"
We can't possibly take ourselves, or each other, seriously in the security industry without certifications.  The current ones are not fine-grained enough to depict the exquisite subtleties of arcane knowledge that make us so proud to be in this business.  In this presentation, we will propose a new certification model with 25 levels and over 18,000 separate certifications to remedy this granularity problem.  (And all of them start with the letter C!)

"Musical Ports"
After many years of research, we have discovered a new weapon in the battle against intruders:  musical ports, in which services migrate every few seconds to new port numbers so that they can't be found and exploited.  This is done to the system administrator's choice of music (or you can leave it on the default setting, which uses streaming dancehall reggae).  Every so often, when the music stops, one service that can't find an open port is arbitrarily terminated.  The end effect is a much more secure infrastructure.

"The Original Internet Privacy Threat: Your Mom"
You think you can still fight for your privacy?  Privacy is deader than you know.  Your mom built the Internet, punk, and not only has she been monitoring all your activity, she's got Google alerts on you and has a network of other moms planted where you least expect them. She thinks it's really cute how you change pseudonyms every so often, by the way. And since you're reading this, she'd like to remind you to take out the garbage and brush your teeth.

"It's Probably Okay, Don't Worry About It"
Security isn't the problem that people think it is.  Chill, folks.  It's just ones and zeroes.  You're just getting everyone upset with all this bogeyman talk about APTs and insider threats and whatnot.  Relax, open up the firewall to let it breathe, and embrace the Internet.


Thursday, August 16, 2012

Actually, you're both right.

I normally don't like to write about gender issues.  It's not that I don't have opinions on them; it's just that it would be like taking a public stand on other controversial topics that may (or should) not have anything to do with my profession.

But it seems that the pot has come to a rolling boil these days over sexism and other kinds of harassment, and since I think I understand both sides of the arguments, I thought I'd just come out and say that everyone is (mostly) right.

I think the fundamental problem is that there is a continuum of acceptable conduct and/or speech that at some point crosses over into unacceptable.  The dividing line is very blurry, and people who are most in danger of crossing it resent attempts to define it too closely or to move the goalposts without notice.  In fact, it's pretty hard to define it completely without writing a huge book on it.

Harassment is bad, no matter who does it or to whom.  Harassment should be defined as well as is possible and should not be tolerated. 

I can understand how someone can write in his usual style -- blunt, verbose, with a touch of condescension -- and not mean it to be any different just because the current target is a woman as opposed to a man.  I can also see how a woman can take it as an inappropriate attack.  They're both right.  In a case where someone is treating a woman exactly the way he would treat a man, it's not sexism on his part.  At the same time, if that treatment happens to match sexist acts that the woman has experienced, to her it's certainly more of the same.  There is no getting around the mismatch, and it can't always be remedied.  

So when harassment or sexism is contextual -- something is a normal behavior when doing it to a man, but not to a woman, for example -- then I can see how it can be very confusing to someone who doesn't innately experience the difference.  People can wind up perplexed rather than informed.  It can look like one team has a secret rulebook and there might always be a rule or two that could be violated without warning.

The key here is "without warning."  Feedback, like salad, is best when it's fresh.  (I don't know where that analogy came from.  Work with me here.)  Feedback needs to be immediate and unambiguous, which means that it can't always be subtle or polite.  When it comes to unwanted actions of any kind, people have to speak up right then and there.  Women need to be able to yell, push, or punch someone in the nose if all other tactics fail.

A long time ago, in a club in a country far, far away, some drunken guy grabbed me around the waist in what presumably was an attempt to dance with me.  I shoved him away.  The international language of "no" was clear, and I didn't have to do it twice.  Were his feelings hurt?  Probably.  Did I overreact?  We could sit here debating that for hours.  But the fact is, it worked without any need for escalation.  He could have had harmless intentions, other women could have found it charming, and at the same time I still felt it was an unwanted and obnoxious act.  Short of putting the decision to Schrödinger's cat, we're always going to have two states here.  And if we're all going to get along, we have to recognize that and build bridges to deal with both of them.

There are some forms of harassment that we can all agree on:  using threatening language, launching attacks that do damage, calling someone names.  And most of the time, those types of harassment are clearly intentional.  As a community, we can and should work together to fight that kind, because it's a shared standard.  Where a reasonable person could claim that something is not intentional, however, we need to recognize that and respond in a way that gives feedback, not accusations or punishment.  We can also recognize that this feedback may not be well received, but we can work to make sure it's understood.  And anyone who agrees with the feedback can and should speak up to support it -- not to make it worse, not to escalate it, but to strengthen it.

What we don't want is to wind up in extremes:  where both sides feel attacked, albeit for different reasons.  We don't want women, men, ethnic minorities, people of size, people of age*, or anyone else to be wary of attending a conference for fear of intentional harassment.  We also don't want people attending conferences to be scared of unintentionally offending someone through the mismatch described above.  We want people to be able to write what they think is normal language, and get a second chance if they mess up once.  In all of these cases, though, if you keep getting the same feedback for the same actions or language, maybe you'd better take the lesson to heart, whether you understand/agree with it or not.



* Hello.




Wednesday, August 15, 2012

The OTHER problem with passwords.

There are some sites that I use very rarely, and I can never remember what I used for a password there.  But it doesn't matter, because honestly, the reset procedure is less onerous than trying a few passwords and risking getting locked out.  So I just don't bother: I put in a crazy strong password, forget about it, and when I come back to the site I just ask for a reset.  In many cases there aren't even security questions to answer; I just get a new password mailed to my address of record.  In the case of one site where they wanted me to change the password every 90 days, I did this dance every 90 days.

Yes, yes, I know, password manager.  But most of the public doesn't use one. And site designers know it.  Any site feature that makes it harder for a non-technical user to do a password reset causes that user to email or call the support desk, and every use of the support desk (as in actual humans) costs money.  So organizations are motivated to prioritize ease of use over security, if they feel their target audience won't be able to use more advanced features without support.  The end result is that the password reset process to an address of record is the easiest way to get into an account.

And of course attackers know this too: this is why so many of the publicized breaches of late have started with a password reset.  If it's a simple enough process, getting into an account is no longer "something you know;" it's "something you have," as in control of the email address.  If you've broken into the address of record, you can collect password resets from as many sites as you can find without having to do any more homework.
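
For the morbidly curious, here's roughly what that reset dance looks like from the site's side -- a minimal Python sketch, with the URL, the in-memory token store and the mail/password helpers all made up for illustration:

    import hashlib
    import secrets
    import time

    # Toy stand-ins; a real site would use a database and a real mailer.
    RESET_TOKENS = {}  # email -> (sha256 of token, expiry time)

    def send_mail(addr, body):
        print(f"[mail to {addr}] {body}")

    def set_password(email, new_password):
        print(f"[password changed for {email}]")

    def start_reset(email):
        # Mail a single-use reset link to the address of record.
        token = secrets.token_urlsafe(32)  # unguessable and single-use
        RESET_TOKENS[email] = (hashlib.sha256(token.encode()).hexdigest(),
                               time.time() + 3600)  # good for an hour
        send_mail(email, f"https://example.com/reset?token={token}")
        return token  # returned only so the demo below can use it

    def finish_reset(email, token, new_password):
        # Whoever presents the token -- whoever reads the mailbox -- wins.
        stored, expiry = RESET_TOKENS.pop(email, (None, 0))
        if (stored == hashlib.sha256(token.encode()).hexdigest()
                and time.time() < expiry):
            set_password(email, new_password)
            return True
        return False

    # "Something you have": control of the inbox is the entire credential.
    t = start_reset("user@example.com")
    assert finish_reset("user@example.com", t, "hunter2!")

Notice there's nothing wrong with the crypto here; the weak link is that the token rides over email, so the mailbox is the master key.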

The next level up in attacking an account is to add a new email address of record that you control, which often requires social engineering of the support desk.  But support people are incentivized to help the helpless, which tends to make the process easier.  And as Mat Honan found out, the types of security verification data that support desks use can often be found out with a little Google action (and in his case, the clever use of an Apple process loophole).  This is why I've never liked the use of the last four SSN digits as an identifier; they're even more widely used than the whole SSN these days, and they're used for everything, including utility and phone service accounts. It's arguably less secure than a site-specific PIN.

Make no mistake: designing identity and access management while balancing cost and security is hard.  You can't control the biggest factor, which is the level of expertise for your users (particularly if they're all external to your organization).  With each of these breaches, we're learning more about what works and what doesn't in these designs.  But there's still a lot of risk out there.


 

Monday, August 13, 2012

CFP Karaoke.

I have to thank Wim Remes for coming up with the idea of CFP Karaoke:  you come up with a talk title, and someone else has to do the rest of the work.  Here are some of the gems he came up with on Twitter; feel free to take one and run with it.

@wimremes: "Two and a half clouds : how to keep winning on tiger blood as a service"
@wimremes: "Infosec and the God complex : we're better than we are and worse than we realize."
@wimremes:  "Exploit sales for the masses : do you want a patch with that?"
@wimremes: "Cutting through the infosec BS : is there an evangelist in the house?"
@wimremes: "Eeny, meeny, miny, mo, your QSA says it's secure but I say no."

Tuesday, August 7, 2012

When the mothers talk ...

Seeing as how I've been on the bench for roughly the past nine months, I'm looking forward to getting back to some conferences.  Here's what's planned, at least for now:

9/12-9/14/12 - Looking forward to speaking again at the UNITED Security Summit in San Francisco.  This year I'm talking about "Why Doing Application Security Remediation Is Like Building a Rube Goldberg Machine."  (If this sounds familiar, it's because I'm going green and have recycled it from SOURCE Boston 2011.)

9/18-9/20/12 - My employer's gala event, the North American Hosting and Cloud Transformation Summit in Las Vegas.  This isn't a pure security event, so I enjoy talking with a wider variety of attendees.  This year's panel is called "Security and DevOps: Table Stakes of Doing Business?"  And I've got some heavy hitters who will be contributing to the discussion.

10/9-10/11/12 - I'll be hanging around at RSA Europe, because I really love London.

10/25-10/26/12 - Of course I can't possibly miss the OWASP AppSec USA conference, especially as it's in the 512 this year.

12/3-12/7/12 - In what I'm sure will be the blowout of the year, Security Zone 2012 will be gathering a whole lot of security experts in Cali, Colombia.  Oh, and I'll be there too, talking about the Security Poverty Line.

It'll be great to see a lot of cool people again, and catch up on the latest research.


Friday, June 22, 2012

There is no spoon.

I just read this article from TechTarget, quoting Gartner research vice president Ramon Krikken as saying:
[...] it may be time to ask whether it’s faster, cheaper and ultimately just as effective to use a device like a WAF to shield an application from a security flaw than face the unending cycle of developing, testing and implementing software patches. [...] I have an increasing number of customers starting to question whether putting a Web application firewall in front of an application to fix something is all that much worse than fixing the code.
And I agree that some apps can't be remediated in a short enough time span, others can't ever be fixed, and so on -- for those exigencies where there's no other choice, a WAF is better than nothing (assuming it's actually in blocking mode and properly configured).

However, I would strongly caution anyone against deciding that the wave of the future is simply to rely on the WAF or any other network-based security device for application security, because THERE IS NO FRONT.

You can't put a WAF in your production DMZ and declare it done.  An attacker who bypasses that DMZ and gets into your internal network will have a field day inside the soft and chewy interior.  Dev and test applications, often with production data in them, will be vulnerable.  They may also have back doors -- excuse me, I mean "test harnesses" -- that don't get stripped out until the production build. 

So maybe you'll put a WAF on board every web application server on your network.  Are you ready to manage all those rules, as new security vulnerabilities are found?  (You may well have different versions of the same application throughout your enterprise.) And wouldn't it be tempting just to start loading any other functional fixes that you can onto the WAF instead of having to fix, test and release the code?

There's another problem, though:  portability of code.  You may not ever plan to make it available outside of your organization, but can you imagine giving it to a business partner, or selling it, and saying, "Oh yeah, you'll need to get a WAF and about 80 rules because we never fixed the code"?  That's just not an option for anyone who is shipping, releasing or sharing their apps.
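
And in many cases the code fix is smaller than the WAF rule it replaces.  Here's the canonical SQL injection example as a minimal Python sketch -- not from any particular app, just the general shape of the trade-off:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
    conn.execute("INSERT INTO users VALUES (1, 'alice')")

    def find_user_unsafe(name):
        # What the WAF rule has to screen for: attacker-controlled input
        # concatenated straight into the query.
        return conn.execute(
            "SELECT id FROM users WHERE name = '" + name + "'").fetchall()

    def find_user_fixed(name):
        # The in-code fix: a parameterized query.  It needs no WAF rule,
        # and it travels with the app wherever the app goes.
        return conn.execute(
            "SELECT id FROM users WHERE name = ?", (name,)).fetchall()

    print(find_user_unsafe("x' OR '1'='1"))  # returns every row
    print(find_user_fixed("x' OR '1'='1"))   # returns nothing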

And finally, don't forget that pesky cloud.  As applications become distributed among different sites and providers (not to mention mobile devices), there isn't going to be a choke point for all application traffic to go through in order to enforce policies and detect attacks.  As hard as it sounds, trust boundaries need to be built into the applications themselves so that they correctly handle and protect their own architecture components and processes.  Otherwise your app is going to be wandering around the Internet with its hair creepily following two steps behind.

A WAF is a Band-aid, not a cure.  And even though it can be very useful for defense, additional validation or data transformation, it still doesn't provide 360-degree protection for an app; only the app can do that.  Don't slip into becoming part of a popular blog.



Saturday, June 9, 2012

Slide rules.

In my time as a CISO, and now as an analyst, I've seen more vendor presentations than I can possibly count.  Over time, I evolved a set of rules that you may want to know about if you're going to share a slide deck with me.

Here are the required elements:
  1. First off, there must be a slide talking about The Problem We All Face, and indicate that it’s a scary, scary world out there.  Otherwise I would forget why we’re all here. 
  2. Next, there must be a conceptual slide that includes icons of people, the cloudernet, and either monitors or CPUs.  Extra points for locks, or creatively drawn bad guys.
  3. Add a chart of your company’s growth with the arrow pointing skywards on the right-hand side.  Don't include any numbers or units on the axes; those details are irrelevant.
  4. There must be at least one circle-shaped process flow, indicating that the customer will never be finished using your product.
  5. Don't forget the obligatory page full of customer logos (whether they approved the use or not).
  6. And tiny screenshots of your product, which I cannot possibly read.
  7. Compliance.  The word compliance has to be on there; otherwise I’m not reading it.  APT is not a one-for-one substitute, although it’s close.
  8. You must show your boxes replacing your competitor’s boxes in an abstracted network diagram.  If your product is only software, you should still use boxes.  Virtualized appliances should be depicted by cloudy boxes.
  9. Please include some fancy transitions or build sequences so that I can watch them break, or miss them altogether, during an online presentation.
  10. And finally:  I cannot take your presentation seriously without military references, a fortress metaphor, or an onion metaphor (depicting defense in depth).
Now, if you're feeling especially ambitious and would like bonus points, I would love to see:
  1. The classic "risk = vulnerability x impact" equation.  I just can't get enough of that one.
  2. Carefully chosen quotes from a couple of bank customers saying how wonderful your product is.  Because I hadn't been planning to buy until I saw those. Banks always know what they're doing.
  3. A description of your bad-ass threat researchers, whose continuous stream of published vulnerabilities and exploits makes my job as CISO so much easier.
  4. Add a percentage figure to your "low false positive" rate.  Better yet, make it zero; that saves us all time.
  5. A reference to Kevin Mitnick is just the cherry on top.
Thanks for tuning in, and I look forward to the next 24-megabyte PowerPoint file in my inbox.

Thursday, May 24, 2012

Conferring about conferences.

There's a great discussion going on right now on Twitter about what's wrong with security conferences:  do we have too many?  Are they focusing on the wrong things? 

Josh Corman threw out the figure that more than 60% of conference paper submissions these days were on Android security issues.  This sounds pretty excessive when you consider all the other security topics out there.  However, let's not forget that there are many different audiences for security talks, just as there are different sub-communities within the security industry.  For "breakers," Android security is a hot topic these days, and you would expect to see a lot of talks on mobile security in general at conferences "by breakers, for breakers."  And because that's a hot topic among breakers, you'll see defenders and builders eyeing it as well, because in the security ecosystem, what's getting targeted the most is what everyone will tend to focus on.

That's not to say that security conferences are homogeneous.  There is a very different culture and flavor at work at a conference for defense-related security (law enforcement and military, and to some extent critical infrastructure), as opposed to a meeting of financial services CISOs, or civilian government, or academia, or "hacker ethos" tribal gatherings.  Even if the hot topics are nominally the same, the perspectives and timbre of discussions will be very different.  And a conference that features roundtable discussions will bring out information exchanges that aren't as readily forthcoming at classic "stand up and present" functions (even if you count the hallway track).

So even though the sheer number of security conferences these days is dizzying, I think the variety is healthy.  We need the grass-roots B-Sides just as much as the vendor-oriented RSA, or the raucous Shmoocon, or the Chatham House Rule-driven CISO roundtable.  If anything needs to be changed or tweaked, I simply think that we need to make sure that the same speakers aren't touting the same perspectives at all of these different venues.  Everyone wants to hear a sexy war story about mobile every so often, but I really admire the efforts to bring in first-time and local speakers to certain events as well.  The "democratization" of security conferences is a trend that I'd like to see continue.

Monday, May 7, 2012

Too many questions.

As an analyst, I have too many things I'd love to research and can't.  I'm in a target-rich environment (then again, so was Custer).  It doesn't stop me from coming up with questions, though, and hoping someone else will want to answer them.

Take the discussion I just had on Twitter with @jeremiahg, @chriseng, @attritionorg, @dakami, @rybolov and others.  I objected to the claim that everyone in the Fortune 500 is hacked, in the absence of two things:
  1. A clear definition of "hacked," and
  2. Some data supporting the assertion that everyone in the F500 fit that definition.
So we got to talking about what data would constitute proof, and I suggested that having one host in your IP range detected as being a member of a botnet could qualify as "hacked."  This could theoretically be straightforward to determine, if you had access to enough threat intelligence feeds and/or had enough sensors to compile a list yourself.  Now, there are some open source feeds, but for the most part companies that create their own feeds want to monetize them. (One laudable exception is Microsoft, which has been testing a feed that it would offer free of charge to law enforcement, CERTs, foreign governments and private corporations.)  If you have one machine on a botnet at some point in time, that could designate you as hacked, at least until you scrubbed it. 
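
If you want to see the mechanics, here's a minimal sketch of checking addresses against one class of open feed: a DNS blocklist such as Spamhaus ZEN, which answers an A query for the reversed IPv4 address under its zone and returns NXDOMAIN when the address isn't listed.  (The addresses below are from the documentation range; mind the feed's terms of use before sweeping anything real.)

    import socket

    def listed_on_dnsbl(ip, zone="zen.spamhaus.org"):
        # DNSBLs answer an A query for the reversed IPv4 address under
        # their zone; any 127.0.0.x answer means "listed."
        query = ".".join(reversed(ip.split("."))) + "." + zone
        try:
            socket.gethostbyname(query)
            return True
        except socket.gaierror:   # NXDOMAIN: not on this list
            return False

    # Sweep a company's published range against one open feed.
    hits = [ip for ip in ("203.0.113.7", "203.0.113.8")
            if listed_on_dnsbl(ip)]
    print(hits)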

But is it the tip of the iceberg?  Does having a bot automatically mean that more nefarious things are going on besides just selling V1agr4 or perhaps DDoSing the Anonymous target of the week?  This is the risk calculation that we need more data to perform, and it's one that the C-suite would really appreciate.

So I'd love for someone to comb through their incident response data and present statistics on what, if anything, followed after an initial malware infection.  If you could say that (for example) 70% of the time it was simply used to grab CPU without necessarily trying to grab passwords or data, 20% of the time it led to password compromise for financial theft, and 10% of the time it led directly to IP theft, those figures would let us infer probabilities.  It would depict in a more concrete way just why being part of a botnet is a symptom of something more dangerous.

By association, any company that found itself with membership in a botnet could reasonably suspect that it was even more compromised than that.  It might take the time to look further.  (There are plenty of enterprises that just wipe the affected machine, re-image it, and go back to work.)

The other question is whether membership in a botnet should be considered public data.  If anyone on the Internet can discover it, you could argue that it's the kind of compromise that anyone can report.  The fact of an enterprise's system interacting with another host on the Internet isn't confidential; it (like a public posting) is just assumed to go unnoticed.  Would a company have grounds to complain if its membership in a botnet were revealed, based entirely on publicly available information outside of its private network?  I am not a lawyer, but sometimes I want to ask lawyerly questions like this.

Following this chain of thought, anyone could set up sensors, collect data on botnet membership, and publish it widely.  Someone could collect statistics on just how many of a company's systems were in a botnet at any given time.  In the absence of any other data, could this be used as a poor man's Compromise Index?  It would be like someone noting how many broken windows you could see in a building: one indication of a breach, but without any way to know what, if anything, happened or was taken after the windows were broken.

And armed with that data, someone could actually make a substantiated claim that the whole Fortune 500 is hacked, without hearing the clackety-clack sound of thousands of eyes rolling.

After that comes the question, "So what?"  Would this kind of naming and shaming prompt any additional diligence on the part of these organizations?  Would it make regulators sit up and take notice?  Call me a skeptic, but I suspect that botnet membership is so widespread that people would assume it happens to everybody -- just like ant invasions -- and it wouldn't be condemned except within the security echo chamber.  I could be wrong.  Either way, I'd love to find out.

[DISCLAIMER: I am not encouraging anyone to compromise any systems themselves without the permission of the affected organizations.  I am not suggesting that anyone collect data that can only be gathered directly from those systems.  I am certainly not recommending that anyone leak confidential data, even if it's with the best of intentions.  Do not try this at home.  Ask your parents before calling.  And so on.]



Tuesday, March 27, 2012

For great justice. I mean security.

The Verizon Data Breach Investigations Report (available here) was basically another year of "all your POS are belong to us."  Which is depressing, but not at all surprising.  As you know, I talk a lot about what I call the Security Poverty Line, and how smaller organizations that are IT-poor tend also to be security-poor.  Moreover, because security and IT are so often separate, security becomes optional, a luxury and an omission for the small business that doesn't know it has something to lose -- or that, even if it does, hasn't got the faintest idea of how to go about addressing it.

Enter the DBIR, and what I think is one of the most helpful steps ever taken to address this security-poor population.  On page 62, the redoubtable Verizon Risk Team has created a cutout sheet that you can hand out to your favorite retail, hospitality and food establishments.
Greetings. You were given this card because someone likes your establishment. They wanted to help protect your business as well as their payment and personal information.
It may be easy to think “that’ll never happen to me” when it comes to hackers stealing your information. But you might be surprised to know that most attacks are directed against small companies and most can be prevented with a few small and relatively easy steps.
And the cutout doesn't get too fancy or preachy; it basically recommends two main things: change your default passwords, and make sure you have a firewall.  And if you're not the one who is in charge of these things, make sure your vendor does them.

The beautiful simplicity of this is hard to overstate.  The cutout doesn't invoke FUD; it just says, "Hey, we've seen a lot of this and you might want to be careful."  The language makes it accessible to someone who is busy running a business, and who doesn't have time to delve into arcane IT concepts.  It tells them the most important things they need to do, and puts it in a digestible format.

I hope people will go to the trouble of making copies of this cutout and giving them to as many franchises and local businesses as possible.  It would also help to have a simple and cheap answer to the question, "How do I find out more about this?" if the business owner should ask.  I know of at least one security professional who makes a point of going to speak about security at chamber of commerce meetings, and we need more of this kind of outreach.

For the security-poor organizations, the best thing we can start with is to arm them with information -- the kind of information that is useful to them.  If we made a concerted effort to reach out to this underserved population, I'm hoping the DBIR numbers would get smaller over time.

Saturday, March 3, 2012

Going back to the stack.

In the spirit of trying to suggest solutions, here are a couple of thoughts about what an enterprise can do first off to make security a little better.

It's bothered me that infrastructure is being administered more horizontally than vertically these days.  Everyone specializes in a different layer: network, OS, utilities (such as Exchange), middleware, applications, etc.  And this gets worse when you outsource one or two layers to "the cloud" (think IaaS, PaaS and SaaS), so that you have to coordinate with a third party to troubleshoot something.

Back in the Pleistocene era, system administrators knew their systems like they were their babies.  They knew everything that was running on them, how they were configured, who they talked to, and they knew when something was "off."  I know sysadmins who would regularly help a developer debug code, and they were often better at it than the developer, because they also understood the underlying environment better.  They could troubleshoot all the way up and down the stack, and you went to one source to do it instead of having to get a conference call together with 3rd level engineers from four different companies.  (Seriously, I know of a data center that had five different networks owned by five separate entities.  Think you could figure out what happened to a packet?  Think again.)

So one thing that enterprises can do is simply to get control of their layers as much as possible.  Know what you have, know where it is, and be able to cause changes to it when you want to.  That sounds so obvious as to be not worth saying, but I don't know of any admins who know more than about 500 hostnames by heart, and many times the environment is so dynamic that boxes come and go without any centralized tracking keeping up with it.  (And I'm not even talking about VMs.)
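
Even something as dumb as diffing a live scan against the inventory you think you have is a start.  A toy sketch, with the file names and formats entirely made up:

    # "Know what you have": compare what's answering on the network
    # against what the inventory says you own.
    def load_hosts(path):
        with open(path) as f:
            return {line.strip() for line in f if line.strip()}

    inventory = load_hosts("inventory.txt")   # what we think we own
    observed = load_hosts("live_scan.txt")    # what a scan actually found

    print("On the wire but not in inventory:", observed - inventory)
    print("In inventory but not responding:", inventory - observed)

Run it weekly and the second line alone will teach you humility.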

If you already have parts of your infrastructure outsourced, go over your contracts and strengthen your relationships with your providers.  You want them to be able to give you logs, for example, within a few minutes of the request.  You also need to have the right technical level support people on call without having to fight your way through first-level script-readers.

And finally, go back to designating "stack admins," who are generalists rather than specialists in one particular technology.  It should be their job to know as much as possible about any given system.  You can fit this into DevOps if the developers truly know the lower layers.  A stack admin is your best hope for knowing what normal operation is, and for alerting you when something doesn't smell right; they're also the best at understanding the implications of any given planned change (such as changing the ports an application uses without creating the corresponding firewall rules).

Start with knowledge, and then work your way to control.  Notice we haven't really touched on security yet; that'll come later.  But knowledge and control are basic building blocks of security.

Saturday, February 11, 2012

In 50 gigabytes, turn left: data-driven security.

I love Scott Crawford's research into data-driven security.  I agree with him that IT operations and development can both benefit from the right security data -- where "right" means at the appropriate level and relevant to what they're doing.  It also has to be in the right mode:  an alert should be based on a conclusion drawn from the analysis of data (20 failed logins per second = someone is using automation to try to break in), based on an event or confluence of certain events.  Once someone in IT needs to perform an investigation, the need changes to looking at more atomic data (exactly which logins are being targeted, whether they're active or disabled, etc.).  In other words, the details need to be available on demand, but they shouldn't be shoved at the IT staff in lieu of useful alerts.
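
That failed-login example is simple enough to sketch.  A minimal version in Python, assuming you've already parsed (timestamp, username) pairs out of your authentication logs:

    from collections import deque

    WINDOW_SECS = 1   # sliding window
    THRESHOLD = 20    # 20 failures/sec = someone is using automation

    def alert_on_bursts(failed_logins):
        # failed_logins: (unix_timestamp, username) tuples in time order.
        window = deque()
        for ts, user in failed_logins:
            window.append((ts, user))
            while window and window[0][0] <= ts - WINDOW_SECS:
                window.popleft()
            if len(window) >= THRESHOLD:
                # The alert carries the conclusion; the atomic detail
                # (which logins are targeted) stays available on demand.
                yield ts, sorted({u for _, u in window})

The alert is the digested conclusion; the window contents are the drill-down data for whoever has to investigate.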

Another kind of data that is useful is situational data:  how things are configured and what is happening during "normal" operation.  Viewing all the responses from a database is too much to ask of a developer -- but the developer would benefit a lot by knowing that some queries are taking 25 minutes to return (do you suppose that would have some effect on application performance?).  This is the sort of data that is incredibly useful, but setting up every possible abnormal situation to trigger an alert is way beyond the scope of an overworked operations team.  Sometimes you just have to sit down and do some exploring every so often, to find out these sorts of operational problems.  Packet captures can teach you things you can't learn any other way -- if you have the time and skills to read them.

Because detection is expensive.  It requires the luxury of having staff who are knowledgeable both in the technology and in the context of those particular systems, and having them devote a lot of their time just to sitting and looking at things, sorting out what's normal from what's not.  Those are the kind of costly eyeballs that have been transferred so frequently to managed security service providers.  It's the kind of thing you pay consultants to do, because if your staff weren't completely occupied with keeping the infrastructure running, you wouldn't be allowed to keep them.  Data analysis today is expensive, and it's a one-off deal unless you can find economies of scale somewhere.

Yes, automation is getting better, but it's not there yet.  There are still too many alerts taking up too much time to sort through (particularly in the tuning phase).  IT staff get hundreds of emails a day; they can't handle more than two or three alerts that require real investigation.  (By the way, this is why operations often can't respond to something until it's down -- it's the most severe and least frequent kind of alert that they receive all day, and they don't have time to chase down anything lower-level, like a warning message that hasn't resulted in badness yet.)

If you break security events down, you're generally looking for two kinds of things:  normal activities that are being done by the wrong people (as in, from a Tor exit node through your administration console), or abnormal activities that are being done by the "right" people (internal fraud, or someone has taken over an authorized account).  And by "people," of course, I also mean "systems," but at first glance it's sometimes hard to tell the difference. 

This determination of "wrong" and "right" is a security activity, and for the reasons I listed above, operations people may not care that much until it makes something happen that they have to fix.  If someone wipes a database, they'll care a whole lot, but if there's some unusual encrypted traffic leaving the enterprise on port 80, not so much.  A fully leveraged (i.e. overworked) ops team doesn't have time to analyze alerts at that level.

"Wrong" and "right" to the business is on a completely different stratum, and it's one that's hard for automation to reach today.  Executives care when it gets to the level where they have to do something about it, like fire someone for looking at patient data, or talk to the press about a breach.  They care when an event starts to present the risk of legal liability or increased cost.  But you can't bring them alerts like that until you have digested everything at a lower level and put together enough evidence to reveal a business issue.

And finally, historical data can be extremely useful in determining what works in security and operations and what doesn't.  But that kind of data has to be analyzed in a different way from real-time operational data or situational data.  It requires a different model that caters to the requirements of risk analysis -- and that, too, is expensive, even assuming you know how to do it today.  (Hi, Chris.)

My point here is to say that data-driven security is where we need to go, absolutely.  But there is no single path to take with the data we have; there are a number of divergent paths that are all needed in the enterprise.  We also need to be able to drive the data in the right delivery directions -- which means that we need a really good data navigation system.

Thursday, February 9, 2012

Security: ur doin it rong.

As I mentioned before, a lot of security work consists of telling people they're doing something wrong.  There are all the "thou shalt nots" in security policies, there's the "scanning and scolding" of vulnerability assessment, and there's the "Ha! Got you!" inherent in penetration testing and exploit development.

In other words, it takes a lot of moxie (pun intended) to stand up to a security professional.

Rob Lewis, aka @Infosec_Tourist, made the comment yesterday:
You're right. Nobody says "we're screwed!" with as a sincere and calm demeanour as @451wendy.
Which I appreciate, but it's been bothering me lately that that's almost always how we discuss security.

In his preso at Security B-Sides London last year, David Rook (aka @securityninja) made a great point about application security:  if we taught driving the same way we taught secure development, we'd make a whole big list of different ways you could crash the car, but never actually tell the student how to drive safely.

A good number of talks at security conferences focus on what we (or other people) are Doing Wrong.  Very, very few are about how to do something right.  Part of the reason for this, of course, is that practitioners are afraid to stand up in front of an audience and talk about how they're defending themselves, for fear that someone in the audience will take it as a challenge and de-cyber-pants them before they've even gotten to the Q&A session.  (I've heard tell of presenters' laptops being hijacked in the middle of a presentation.)  I know a lot of practitioners are doing very cool things that their management would never let them say publicly.

But when we focus too much on what people are doing wrong, there's a danger of our talks turning into pompous lectures.  "We need to do something different from what we're doing today."  Okay, but what, exactly?*  This is why I admire those who are proposing alternative solutions, such as Moxie Marlinspike's Convergence.  These folks might be right, or they might be wrong, but at least they're trying to make things better.

So, lest this turn too Gödel, Escher, Bach on us, I'll stop lecturing too, and talk about what I plan to do about it.  I'm going to do more talks about what I think works in security.  I've done a few before on topics such as how to bootstrap an infosec program, what multi-contextual identity and access management looks like, and how to dicker on the contract with third party providers.   I won't aspire to #sexydefense; I'll leave that to the ones who show up all the time on the Top Ten Infosecsiest lists.  But I'll encourage people to turn that frown upside down, and try not to bring up a problem without also proposing a solution.  


Maybe this way, we can get invited to a few more non-security parties instead of having to throw them all ourselves.


*No, the answer is NOT "use our product."  Thanks for playing, though.

Wednesday, February 8, 2012

Insecure at any speed.

With the release of breach data reports, such as the one from Trustwave SpiderLabs that came out recently and the highly anticipated one from Verizon Business, inevitably comes a wave of data dissection and then disbelief.  Security pundits moan at the statistics, such as the one this year that 78% of organizations that Trustwave investigated had no firewalls at all.  The report itself takes an incredulous tone as it describes the pervasive use of unencrypted legacy protocols (one highlighted case study described a breach involving an X.25 network), insecure utilities such as telnet and rsh, and more.

Security pros who specialize in this sort of thing may be surprised at how big the problem is, particularly among smaller enterprises, but anyone who has actually tried to implement security in these organizations isn't surprised at all.  You can tell by the faces in the audience when one of these talks goes on:  it's the difference between "ZOMG!" and "Yup, *sigh*."

It's not that these organizations don't care about security.  You'd have to know about security first in order to care about it.  The next time you go to a sandwich shop or a gas station, ask the manager about the security in the POS system they're using.  It should be an interesting, but very brief, exchange.

Should everyone be expected to manage their own security?  It's very much out of reach for those below the security poverty line; when you think about it, the level of security management needed for technology today reaches the equivalent of having to rebuild and restock grocery shelves on a weekly basis, or requiring an accountant to know construction, electricity and plumbing for the office.  Just reading through the Trustwave report, and all the myriad ways that systems are breached, I can't help but imagine the look on a manager's face if I made it into a checklist and handed it out.  Who outside of the clannish IT industry knows how to spell ftp, much less knows that it's insecure?  Who would know the better options and be able to implement them? Who has the time to examine and reconfigure computers on a regular basis?

What this indicates to me is that our IT infrastructure -- from the networks to mobile -- is inherently, badly insecure.  And we're so far down the road in its widespread implementation that it will be decades before the problem is substantially fixed, even assuming we started today with all software developers and manufacturers.  Nobody is going to pay to replace what's running just fine today -- until someone loses a figurative eye.

As technology advances, organizations have to deal with an ever-widening range of technology that they have to try to get secured.  Yes, there are still X.25, COBOL, VMS, DOS, NT, SunOS, Sybase, and token ring out there. At the same time, iOS and Android are coming into play, along with "the cloud" and Hadoop and NoSQL and everything else that's new.  A CIO needs to know about all these; a CISO has to know how to secure them all -- especially when older systems aren't compatible with updated software.  The complexity grows year by year, and the inertia of the legacy environment weighs more heavily on it.

And make no mistake: security is disruptive.  It's enormously disruptive.  Getting the network architected correctly, every version of software patched and every configuration right, especially after the system has been in use for a while, is as disruptive to the business as migrating to a completely new system or platform.  Ask anyone who has tried to manage a security initiative in an enterprise.  Even assuming the enterprise wants to do it, it's a major undertaking.  All this shows how badly security is designed today; you shouldn't have to keep reconfiguring your systems on a weekly or monthly basis in-flight just to keep the security entropy at bay.

It's an intractable problem, and frankly, it's one that the enterprise shouldn't have to solve.  People are trying to work with the equivalent of a pencil, and it's not their fault that their pencils are fragile, complicated, and prone to exploding at inopportune moments.  They shouldn't have to know or care why the pencil isn't working; they want a new one without any delay, and without hearing long stories about how the graphite in this type of pencil isn't backwards-compatible with all the erasers in the firm.

So when we read about how bad security is getting, we shouldn't be pointing the finger at the compromised enterprises.  We should be pointing it at their IT providers, who really ought to know better; but more fundamentally, we should be pointing it at ourselves.  We should stop demanding that the user be responsible for security; those of us who are building this stuff to begin with should fix it ourselves, and build it in to all future technology.  Today security is an afterthought, and a bad one at that.  As long as it remains separate from the systems it's supposed to protect, instead of being simply an attribute, and as long as it requires users to maintain an abnormal height of awareness as they go about their daily jobs, security is going to continue to be as bad as it is today.

Tuesday, February 7, 2012

Analyst geometries.

Quadrants and cycles and waves, oh my! 

We're all familiar with the best-known graphics, in which there are #WINNING parts of the page and #LUSING parts.  In fact, I like anything that lays out concepts and relationships so that I can pick them up at a glance, like this lovely "subway map" from The Real Group.  I've argued that my employer needs a "magic dartboard" so that we could write reports like this:
"Vendor X is in its third year right next to the bullseye.  On the other hand, Vendor Y took a wrong turn recently and is now firmly wedged in the fake wooden paneling on the wall."
I myself have presented a Punnett Square of Doom before; we have Christofer Hoff's Hamster Sine Wave of Pain; and we have the one that started it all, Andrew Jaquith's Hamster Wheel of Pain.  Someone even proposed a magic quadrant for analysts, with one axis being "ego" and the other being "clue." (I'm not drawing that one up; someone else will have to do that.)

However, the issue in drawing something out, especially as a chart or graph, is that people want to see numbers (mostly so they can argue with them: "We should be at least 3.5 to the right!").  And where there are numbers, there is a danger of misleading math holding it all together: quantitative depictions of what are really qualitative properties.  I don't think anyone means "20/300" when describing a company's vision.*  There's also a tendency by decision-makers to turn the positioning into a binary sort of proposition: "Upper right or not?  Okay, I'll sign the purchase order."  I've never had a discussion in which I successfully argued for one vendor over another based on one being eighteen pixels down but twenty degrees north-northwest of the equator.

So what kinds of graphics are useful without turning the exercise into a rating system?  I started a mind map of vendors in one particular sector, in which I simply tried to categorize them by offerings, show who was reselling whom, and who was partnering with whom.  It turned into a confusing mass of spaghetti faster than you could say "al dente."  It certainly wouldn't help anyone who was trying to evaluate products.

The problem is, sectors within security are blurring and merging, companies are building out portfolios, and everyone's adding discrete functionality from different categories.  Static and dynamic security analysis, for example, aren't separate revenue streams for some vendors who do both, and it'll just get more muddled when you add "glass box" or "hybrid" testing to the mix.  To make matters worse, some vendors invent a new sector for themselves: "We're not Category X!  We're next-generation big data hybrid security snorkeling!"  There just aren't enough drinks at RSA to make up for that kind of headache.

So any kind of graphic that I can come up with to depict market placement is going to look more like Jackson Pollock than a fixed geometry, maybe with contrails behind some of the vendors going in different directions from their current paintdrop.  Especially with the startups, the best I could do would be to create a magic pinball machine.  I'll mull it over some more and let you know what I come up with for the next report.



*Although it would be really fun to get into business astigmatism or technology presbyopia.  Hey!  Magic Spectacles!

Monday, February 6, 2012

Of Egyptian rivers &c.

Just for fun, I've compiled some of the top security excuses I've heard in my career.

  1. It's okay, it's behind the firewall.
  2. Won't antivirus catch that?
  3. No, we don't have confidential data on our system, just these Social Security numbers of our employees.
  4. But nobody would do that [exploit of a vulnerability].
  5. I can't remember all these passwords.
  6. My application won't work with a firewall in the way.
  7. They won't be able to see that; it's hidden.
  8. It's safe because you have to log in first.
  9. No, we don't have credit cards on our system, just on this one PC here.
  10. We didn't HAVE any security issues until YOU came to work here.*

*True story.

Friday, January 13, 2012

Eating the security dog food.

I kept meaning to get back to Rafal Los's post on "The God Complex" -- and answer his question.

Are you an exception to your own security policies?
To which my answer is (was): no.  In fact, as a CISO I tried hard to follow every policy.

Why?  Because if it was too annoying for me, if it kept me from getting something important done, then it was probably obstructing other people too, and I should change the policy.

Admin rights?  There should be policies governing their access too -- arguably even more of them, because the more access you have, the higher the standard you should be held to.  For their own protection as well as that of the users, admins should be able to demonstrate that there are checks on their powers and activities, and that they can be open about what they're doing.  It's harder to be accused of nefarious activities if you are completely above-board, show that you're willing to be subject to appropriate limits, and make a point of relinquishing any sole powers you might have.  Call it CYA, call it leading by example, whatever.  It's ethically important.

Not only is it the right thing to do, but it also helps in user relations.  A lot of security is about telling people that they're Doing Something Wrong.  And if you're going to be telling them that, then you'd better be doing things Right yourself. 

Now, constructing things so that everyone has accountability checks all the way up to the top can be harder than you think.  It can end up being "turtles all the way up," so to speak.  In every organization there's going to be an Ultimate Decider, and the Ultimate Decider is always someone who is too busy to do that deciding.  He or she will want to delegate parts of that responsibility back down the chain, leading to conflicts.  For example, someone can end up being deputized both to submit and to approve requests rather than having those broken up into separate duties, or be empowered to monitor the activities of their own bosses.  Sure, there will always be exceptions to policy, but the point is to design them so that they still have checks and balances on them -- not to ignore them and let them become gaping holes in your controls.  They need to be documented six ways from Sunday, approved by as many people as you can hunt down, and changed back to normal as soon as they're no longer necessary.
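To make that concrete, here's a minimal sketch in Python of what a documented exception with checks and balances might look like.  The names are hypothetical and it isn't drawn from any real GRC product; the point is just that an exception should carry its own separation of duties, independent approval, and expiry date:

    from dataclasses import dataclass, field
    from datetime import datetime, timedelta

    @dataclass
    class PolicyException:
        """A time-boxed, documented exception to a security policy (sketch)."""
        policy: str
        justification: str
        requested_by: str
        approved_by: list = field(default_factory=list)
        # Exceptions expire by default: "changed back to normal" is built in.
        expires: datetime = field(
            default_factory=lambda: datetime.now() + timedelta(days=30))

        def approve(self, approver: str):
            # Separation of duties: you can't approve your own request.
            if approver == self.requested_by:
                raise ValueError("requester cannot approve their own exception")
            self.approved_by.append(approver)

        def is_active(self) -> bool:
            # Needs at least one independent approval, and it must not
            # have outlived its justification.
            return bool(self.approved_by) and datetime.now() < self.expires

    # Usage: the requester files it, somebody else signs off.
    exc = PolicyException(policy="local admin rights",
                          justification="on-call debugging",
                          requested_by="alice")
    exc.approve("bob")
    assert exc.is_active()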

I'm sure everyone agrees that those with power need to be held accountable for that power, whether it's a government executive, law enforcement officers, the military, or any other person in a leadership position.  In security, you don't need to be a leader to have power, but you still need to be conscious of what you can do, how someone could abuse it, and how you can make sure you're not the one who will do the abusing.  You've got to protect the enterprise from external and internal threats, but one of those threats is you.  Go look in the mirror and start threat modeling.

Why we still need firewalls and AV.

It's become trendy to talk about how ineffective some commoditized security products are, classic firewalls and AV being the poster children for this.  One of Josh Corman's favorite points is that "we never retire any security controls."  But as fond as I am of Josh, I think he's wrong in his implication that we should.

Let's take my firewall.  (Please.)  It's still blocking what it's supposed to block; it's just that the ports I need to leave open (such as 80 and 443) now carry all the traffic as a result, and those protocols are being used to tunnel attacks these days.  The firewall is doing its job; it's just that the job doesn't cover as much as it did back in the '90s.
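If that sounds abstract, here's a toy illustration in Python -- not any vendor's rule engine, just the concept.  A default-deny filter with a short allowlist does exactly what it was designed to do, and ports 80 and 443 sail right through, carrying whatever happens to be tunneled inside them:

    # Toy port-based filter: default deny, short allowlist.
    ALLOWED_PORTS = {80, 443}  # where all the traffic lives these days

    def permit(dst_port: int) -> bool:
        """Default-deny: drop everything except allowlisted ports."""
        return dst_port in ALLOWED_PORTS

    assert not permit(23)   # telnet: dutifully blocked, as always
    assert permit(443)      # TLS: dutifully passed -- attacks and all

The rules haven't failed; the world just moved onto the open ports.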

In the same vein, we still have umbrellas, even though they're not terribly useful in a hurricane.  Nobody would tell you to throw away your umbrella because it's "ineffective" -- nobody, that is, except the maker of a Next-Generation Umbrella.  (And while we're on the subject of umbrellas: I really hate it when firewalls are described as stopping "millions of attacks per day."  An umbrella isn't rated by how many raindrops it blocks, or by how wet you didn't get today.  A probe shouldn't count as an attack; it's just a raindrop to a properly configured firewall.)

Now, it's important for a consumer to understand the limits of the umbrella and not to believe that it will stop someone from getting wet in a hurricane.  It's also important for consumers to know that even if the chance of a hurricane in their area is small, there are still tornados, sideways winds and Advanced Persistent Puddles to contend with, and they should plan accordingly.  They shouldn't pay a whole lot for an umbrella that is not going to protect them in all use cases.  But it's still useful for what it does well.

The functions that classic firewalls perform are so commoditized that they're tucked into just about everything right now; I could wear them as earrings if I felt like it and someone made the right form factor.  In the future, that functionality should be a given, and therefore not worth marketing.  But we will always need it for as long as we have network traffic that doesn't automagically inspect and block itself.

Same thing goes for anti-virus.  It's necessary but not sufficient, and it ought to come in every cereal box rather than as a standalone product that claims to solve the whole problem.  Classic viruses are still out there, and they still need to be stopped; meanwhile, advances in anti-malware, anti-phishing and other forms of automated defense continue to pick up where classic AV leaves off.  More sophisticated inspection and detection methods need to be developed, but that's a universal problem in security.

My belief is that users need education, not exhortation to throw out perfectly good controls that just aren't covering as much of the attack space as they used to.  They need to know what each security product will and won't protect, and they need to understand this in a non-technical way, just as people have learned over time that air bags plus seat belts are better than seat belts alone, without needing to know the mechanics of how they work, and without having to do threat modeling when they buy a car.

So if you don't agree with me, and you've really stopped using these products, I'd love to hear about how you're addressing those classic threats, and what controls you replaced them with.  (You don't get any points if the threats don't apply to what you're using; of course your toaster doesn't need AV.  But your smart meter just might.)

Friday, January 6, 2012

Well, that was unexpected.

I have to thank whichever sneaky judge it was (and I have my suspicions) who nominated this blog for a Social Security Blogger Award.  Honestly, I only started the blog when I did because I figured it would be disqualified on account of my being a judge as well; obviously I didn't read the fine print.

But there are a lot of great nominees out there (I should know; I picked some), and although I won't be at RSA myself, I'll be watching the bitwaves to see who ends up buying the drinks later that night at the Irish Bank.