Because I'm all about the "good enough."

Sunday, August 24, 2014

How to help.

There are a few movements afoot to help improve security, and the intentions are good. However, to my mind some are just more organized versions of what we already have too much of: pointing out what's wrong, instead of rolling up your sleeves and fixing it.

Here are examples of Pointing Out What's Wrong:

  • Scanning for vulnerabilities.
  • Creating exploits.
  • Building tools to find vulnerabilities.
  • Telling everyone how bad security is.
  • Creating detailed descriptions of how to address vulnerabilities (for someone else to do).
  • Creating petitions to ask someone else to fix security.
  • "Notifying" vendors to fix their security.
  • Proving how easy it is to break into something.
  • Issuing reports on the latest attack campaigns.
  • Issuing reports on all the breaches that happened last year.
  • Issuing reports on the malware you found.
  • Issuing reports on how many flaws there are in software you scanned.
  • Giving out a free tool that requires time and expertise most orgs don't have.
  • Performing "incident response," telling the victim exactly who hacked them and how, and then leaving them with a long "to-do" list.

None of this is actually fixing anything. It's simply pointing out to someone else, who bears the brunt of the responsibility, "Hey, there's something bad there, you really should do something about it. Good luck. Oh yeah, here, I got you a shovel."

Now, if you would like to take actual steps to help make things more secure, here are some examples of what you could do:

  • Adopt an organization near you. Put in hours of time to make the fixes for them, on their actual systems, that they don't know how to do. Offer to read all their logs for them, on a daily basis, because they don't have anyone who has the time or expertise for that.
  • Fix or rewrite vulnerable software. Offer secure, validated components to replace insecure ones.
  • Help an organization migrate off their vulnerable OSes and software. 
  • Do an inventory of an organization's accounts -- user, system, and privileged accounts -- and lead the project to retire all unneeded accounts. Deal with the crabby sysadmins who don't want to give up their rlogin scripts. Field the calls from unhappy users who don't like the new strong password guidelines. Install and do the training and support on two-factor authentication.
  • Invent a secure operating system. Better yet, go work for the maker of an existing OS and help make it more secure out of the box.
  • Raise money for budget-less security teams to get that firewall you keep telling them they need. Find and hire a good analyst to run it and monitor it for them.
  • Help your local school district move its websites off of WordPress.
  • Host and run backups for organizations that don't have any.

And if you're just about to say, "But that takes time and effort, and it's not my problem," then at least stop pretending that you really want to help. Because actually fixing security is hard, tedious, thankless work, and it doesn't get you a speaker slot at a conference, because you probably won't be allowed to talk about it. Yes, I know you don't have time to help those organizations secure themselves. Neither do they. Naming, shaming and blaming are the easy parts of security -- and they're more about self-indulgence than altruism. Go do something that really fixes something.


Friday, May 30, 2014

Want some more bad news?

I didn't think so, but I had to share this anyway.

I was listening today to a presentation by the CTO of Dell SecureWorks, Jon Ramsey (who for some reason has not yet tried to implore me to stop calling him "J-RAM"). He's always full of insights, but this one was both unsurprising and earth-shattering at the same time.

He pointed out that the half-life of a given security control depends on how pervasive it is. In other words, the more widely it's used, the more attention it will get from attackers. This is related to the Mortman/Hutton model for expectation of exploit use:

[chart: the Mortman/Hutton model of expected exploit use]

And yes, so far the water is still pretty wet. But he also pointed out that the pervasiveness of a control is driven by whether it's required for compliance.

In other words, once a particular security technology is required for compliance, its pervasiveness will go through the roof, and you've just lowered its effectiveness half-life to that of hydrogen-4. The very act of requiring more organizations to use it will kill off its utility. (And this is even worse than Josh Corman's denigration of PCI-DSS as equivalent to the "No Child Left Behind" act.)
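
Just to make the shape of the argument concrete, here's a back-of-the-envelope sketch (mine, not J-RAM's): assume a control's effectiveness decays exponentially, and that its half-life shrinks as adoption grows. The numbers and the inverse relationship are made up purely for illustration.

```python
# Toy model of the control half-life idea above. This is my own
# back-of-the-envelope illustration, not anything from the talk: I'm
# assuming effectiveness decays exponentially and that the half-life
# shrinks as adoption (pervasiveness) grows.

def effectiveness(years, adoption_fraction, base_half_life_years=6.0):
    """Fraction of attacks a control still stops after `years`.

    adoption_fraction: 0.0 (nobody uses it) to 1.0 (mandated everywhere).
    base_half_life_years: assumed half-life when almost nobody uses it.
    """
    # More adoption -> more attacker attention -> shorter half-life.
    half_life = base_half_life_years * (1.0 - 0.9 * adoption_fraction)
    return 0.5 ** (years / half_life)

# A niche control vs. the same control right after it becomes a compliance requirement:
print(round(effectiveness(3, adoption_fraction=0.05), 2))  # ~0.7: still doing its job
print(round(effectiveness(3, adoption_fraction=0.95), 2))  # ~0.09: mostly decorative
```

Mandate the control for compliance, and that adoption fraction jumps toward 1.0; in this toy model, the control burns out almost immediately.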

Does this sound an awful lot like "security through obscurity" to you? Pete Lindstrom put it nicely when he said that this means the best security control is one that is the least used.

Now, we all know that security is an arms race between the adversary and the defender, and we also know that obscurity makes up a good portion of the defense on both sides. The adversary doesn't want you to know that he's figured out how to get through your control, and you don't want him to know that you know he's figured it out, so that you can keep on tracking and blocking him.

If so much of security relies on a contest of knowledge, then it's no wonder that so much of what we build turns into wet Kleenex at the drop of a (black) hat.

This means that we need more security controls that can't be subverted or deactivated through knowledge. In this case, "knowledge" often means the discovery of vulnerabilities. And the more complex a system you have, the more chances there are for vulnerabilities to exist. Getting fancier with the security technology, and layering it, both make the system more complex.

So if we're trying to create better security, we could be going in the wrong direction.

The whole problem with passwords is a microcosm of this dilemma. A user gains entry by virtue of some knowledge that is completely driven by what he thinks he'll be able to remember. This knowledge can be stolen or guessed in a number of ways. We know this is stupid. But to turn this model on its head will require some innovation of extraordinary magnitude.

Can we design security controls that are completely independent of obscurity?

If you want to talk this over, you can find me in the bar.



Saturday, March 22, 2014

The power of change.

[Yeah, I know, it's been a long time since I updated this blog. When you write for a living, you tend to write the things you get paid for first, and often you don't have any time or ideas left over after that.]

So much of security is about doing it, not just having it. The best products can be useless in the wrong hands: you have something that's supposed to be blocking, but you have it only in logging mode. Or you have it in blocking mode, but you only enabled a few of the available rules. You have it installed in the wrong location on your network because that's the only place you could put it. It tells you what you need to know, but you never have time to look at it. And so on.

I believe that most of security relies on detecting and controlling change. And there are so many aspects to change that have to be considered.

  1. Recognizing change. Do you know how your systems, operations and usage patterns are supposed to be? This is probably the biggest challenge, believe it or not. Just knowing your baseline, in everything that happens in your environment, will be impossible for any one group or team. This baseline knowledge is institutional, and will be spread across staff at every level. The network team may know what normal traffic looks like, but they may not know what makes it normal.
  2. Detecting change. This might sound like it's the same as the item above, but it's not. Detection implies both recognition and timeliness. It requires finding what has changed, when, in what way, and by how much (see the sketch after this list).
  3. Understanding change. This is the next step in the line: if you know what has changed, and the details, then you should be able to understand the root cause of the change (someone was trying to restore a database) and the implications of the change (it clobbered the production version and now it has to be rebuilt, before the stock markets open in the morning). Or you realize that the change happened for a particular reason, and it will likely be followed by other changes (a connection is made to an unknown external system, and your sensitive data is about to head in that direction).
  4. Initiating change. Making a change happen is often interwoven with politics. Why do you want to initiate the change? Are you allowed to request it? Who will execute the change? These are all big questions when, for example, you are trying to get a security hole plugged ("Turn off that SMTP relay! Now!!").
  5. Designing change. If you understand the effects of a change, you may need to make sure it happens with certain timing, in a certain sequence, on certain systems, done by certain people. You have to design the change, especially if it's a complex one -- say, a migration off of Windows XP.
  6. Controlling change. Making a change happen the way you designed it to happen, within the time frame that you need, without political or organizational fallout, is harder than you think. It may take longer than you hoped for that application's vulnerabilities to be patched. If changes need to be approved by other entities, you may need to persuade them to give that approval. The person assigned to execute the change doesn't actually know how to do it, and is going to do it wrong. Reverting a change could be more complicated than making the original change. 
  7. Preventing change. Most people think that security is all about this part, but it's not. Any business requires change, and it's up to the security team to help ensure that change happens in the right ways. In some cases it's still important to stop some changes from occurring (such as malware being deposited), but the prevention has to be fine-grained enough to address only those change cases. For example, this setting can be changed, but it should never be done on Sundays. This rule can be changed, but only by people in this role. Or something can be changed, but it has to create a notification for someone else who needs to know about it. These types of changes should not be made unless they are logged. 
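
To make item 2 a little more concrete, here's a minimal sketch of change detection at its most basic: snapshot a baseline of file hashes, then diff a later snapshot against it. The directory and the baseline file name are made up for illustration, and real change detection covers far more than files (configs, accounts, traffic patterns, and so on).

```python
# Minimal sketch of detecting change: baseline a set of file hashes,
# then report anything added, removed, or modified since the baseline.
import hashlib
import json
import os

def snapshot(root):
    """Map each readable file under `root` to the SHA-256 of its contents."""
    hashes = {}
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                with open(path, "rb") as f:
                    hashes[path] = hashlib.sha256(f.read()).hexdigest()
            except OSError:
                continue  # skip files we can't read
    return hashes

def diff(baseline, current):
    """Report what changed: new files, removed files, and modified files."""
    return {
        "added":    sorted(set(current) - set(baseline)),
        "removed":  sorted(set(baseline) - set(current)),
        "modified": sorted(p for p in baseline
                           if p in current and baseline[p] != current[p]),
    }

if __name__ == "__main__":
    root = "/etc"                    # hypothetical directory to watch
    baseline_file = "baseline.json"  # hypothetical place to keep the baseline
    if not os.path.exists(baseline_file):
        with open(baseline_file, "w") as f:
            json.dump(snapshot(root), f)
    else:
        with open(baseline_file) as f:
            baseline = json.load(f)
        print(diff(baseline, snapshot(root)))
```

Notice that even this trivial example already depends on items 1 and 3: you have to know what the baseline is supposed to look like, and you still have to figure out what each reported change means.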


Business rules, as well as risk management decisions*, will dictate how changes are initiated, designed, approved, executed, recorded and evaluated. And if you don't have a good handle on these business and risk requirements, you'll be severely hindered in detecting unauthorized changes and responding to them.

You can't use malware detection if it takes too much work to figure out what a false positive is. Rather than being able to understand those changes, you'll ignore alerts about them until a bigger change happens as a result. (Or until you get a call from the Secret Service.)

There's no point in running a vulnerability scanner if you can't cause fixes to be made based on the findings. Moreover, you shouldn't use a vulnerability scanner as a "compliance detection" tool: it may tell you that your systems are configured exactly the way your policies specified and they haven't changed, but you may have stupid policies.

If you can't figure out the effects of a change, then you will either try to prevent it out of fear, or you will execute it wrongly. If you don't know how a change is made, you can't design a process that will limit collateral damage.

Luckily, you can start mastering all these aspects of changes without spending a lot of money. Just knowing what your systems, applications and users are supposed to do is a huge start towards effective security -- and it takes time, but it's cheap. After that, the road will be different for every organization, because the changes, effects and control points will be different. There will be things you can't change, changes you can't detect, or business processes that constrain (or mandate) change. Look at what power you have over change, to figure out how secure you can be.


*Notice I didn't say "security best practices."

Thursday, November 21, 2013

What's my name? No, really, what is it?

[Warning: rant ahead. Slow to impulse power, Mr. Sulu.]

Ever since I've been responsible for user-facing applications -- which is probably since the early Jurassic period -- and ever since I've been using pentesters on those apps, which was probably two seconds after the Jurassic was over -- I've run into the same problem, over and over again.

It's the ridiculous security trope that "username and password feedback is bad."

It's one of the first things that a pentester points out: you can find out valid email addresses or usernames by putting in a bad one and looking at the response. Yes, I know this to be the case.

IT'S ON PURPOSE.

Anyone who has had to provide user support on an application knows how much of that burden is due to users forgetting their usernames, or forgetting which email address they used. Remember: you have one application. The user may have dozens or hundreds of accounts in applications across the Internet, some of which they may use only once in a few years. It's unrealistic to expect them to have been writing down which usernames, email addresses and passwords they've been using since the '90s (especially if they were assigned those usernames - remember when that was a fad?). Unless you think it's okay for them to have saved everything in their browser ... no? I didn't think so.

It's bad enough when they get a clear message saying "We've never heard of you," and they're sure that they do have an account on that system.

Can you imagine how much worse the support load gets, and how much more frustrating it is for the user, if the application refuses to tell them what's wrong?

We may or may not have sent a password reset email to the address you typed in. Even if we sent one, you may not be registered on our system, so the password won't do you any good. Ha ha ha. Take THAT, HaXX0rz!!!1

If you want to prevent user enumeration attacks, you had better have a good alternative in mind. You CANNOT forget the real reason the system is there, and what that user understands and needs in order to use it. If you can't suggest anything that helps, then you are myopic in the extreme, and you're probably just playing reindeer games with other hackers, not contributing usefully to the business that hired you.

Treating username feedback as evil is just playing into the "security by obscurity" mindset. If your system can't withstand attacks by someone who knows a valid username or email address, then you have MUCH bigger problems to solve. Throwing your users under the bus because it's easier for YOU is not the way to solve them.
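
For what it's worth, here's the shape of the alternative I'd rather hear proposed: keep the honest answer for the user, and make enumeration and guessing expensive some other way, with throttling, lockouts and monitoring. This is just an illustrative sketch; the thresholds and the in-memory "user store" are made up.

```python
# A sketch of giving users a straight answer while still making bulk
# enumeration expensive. Thresholds and the user store are illustrative only.
import time
from collections import defaultdict

ATTEMPT_WINDOW = 300   # seconds
MAX_ATTEMPTS = 10      # lookups allowed per source in that window

attempts = defaultdict(list)                             # source IP -> timestamps
known_users = {"alice@example.com", "bob@example.com"}   # stand-in user store

def forgot_account(source_ip, email):
    """Tell the user the truth, but throttle noisy sources."""
    now = time.time()
    recent = [t for t in attempts[source_ip] if now - t < ATTEMPT_WINDOW]
    attempts[source_ip] = recent + [now]
    if len(recent) >= MAX_ATTEMPTS:
        # An attacker enumerating addresses hits this wall quickly; a real
        # user who mistyped their email a few times never will.
        return "Too many attempts from your location. Please try again later."
    if email in known_users:
        return "We found your account. A reset link is on its way."
    return "We've never heard of that address. Did you register with a different one?"
```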

Thank you for listening.


Friday, July 26, 2013

Things I've learned about CFPs.

This article has some great tips on how to submit to calls for papers (CFPs). I'd like to add a few more, based on my own experience:

  • Don't be afraid to reveal the plot. Some people think that they need to submit something very high-level and save the good stuff for the actual talk. As it turns out, this isn't true. You have to show a little leg (or a lot) so that the reviewers know enough about what the conference will be getting. 
  • Here are some things that I personally find less enticing as topics: presos about reports (why can't I just read the original report, not sit for an hour listening to someone talk about it?); talks about surveys (unless they promise really surprising or controversial results); talks about historical topics (the only person I've known to do this well is Schuyler Towne); and meta-talks that aren't about security themselves, but are about 'X in security.' (Analysts in security, women in security, beards in security, ear-candling in security, whatever.)
  • If you're submitting a talk on how bad things are in security, join the club of about 1,000,000 members. If you're submitting a talk on how you fixed something in security, you have a much better chance of standing out from the crowd.
  • They say you can't judge a book by its cover, but the cover IS part of the book, and that's all the reviewers are going to see. Make sure the title and the dust jacket blurb are really compelling. Really - go look at some best-selling books for examples. When it comes time for the conference, people are going to choose to attend your talk or not in about half a second when they're reading the schedule. You'll need to grab them right then and there.
  • If you're not accepted, ask for feedback. Most conference committees will give you feedback if you ask nicely. This will help you fix your submission for next time. Sometimes it's a matter of "We got 20 submissions on this topic, and sorry, yours wasn't the strongest." Sometimes it's "Are you kidding? Why did you ever think this was an appropriate submission?" I've seen a case where someone submitted the same (appalling) talk abstract over and over again, year after year. Maybe they thought the conference review was a lottery, and they'd win it some day. But if only they'd asked for feedback, it could have saved them the trouble of submitting every year, because it was NEVER going to be accepted.

Good luck to all, and may the odds be ever in your favor.


Thursday, June 20, 2013

Sauce for the gander.

I attended the Dell annual analyst conference a couple of weeks ago, and was privileged to witness something that made me extremely happy.

As is typical with these analyst events, the vendor features a few customer companies who talk about what they've done with the vendor products, how it helped their business, etc. Also, as is typical with analyst events, we all get little souvenir bags handed out to us with vendor-branded schwag.

Well, all the stars aligned this time, because one of the featured customers was Revlon.

And the goodie bags all contained Revlon products. You know -- lipstick, mascara, nail files, that sort of thing. As a sop to the men, there was a masculine sort of deodorant stick included.

I couldn't help but grin when I saw the reactions around the room. Most of the men were looking into the bag with an expression of, "What the ... I don't even ... what IS this? This isn't meant for me. WTF am I supposed to do with this?"

Guess what, guys in technology? This is EXACTLY the reaction that we (straight) women have when we're confronted by a booth babe.

Le boom.




Monday, June 10, 2013

If at first you don't succeed, FAIL, FAIL again.

Here's an example of security FAIL at its finest.

I have an account for a service online, for which I have to manage things for the rest of my family as well. This service recently switched to another company, and I logged into the new website to find that their policy is that my oldest child is considered an "adult dependent," and I have to get permission to manage the service for her. This "permission" comes in the form of an "invitation" that she needs to send me, which sends me a magic code that I have to input from my account, and then my access is enabled, and everything is supposed to be hunky-dory.

The only thing is, my child is not set up with her own account, because up until now she was just set up as a dependent. So I asked Customer Service what to do, and they said, "Have her register an account and then send you an invitation."

To hell with that. I registered her account myself, which was linked to my own member ID anyway. I figured they would bounce a registration with a duplicate email address, so I used a second email address of my own. They didn't even send a confirmation link to that address; as soon as I registered with all the demographic information (which of course I know quite well), I was logged in to "her" account. And I just took care of business.

So here's where the security design fails, big time. I don't know whether someone bothered checking for a duplicate email address on registration, but it didn't matter, because they didn't even use it to confirm before finishing the account setup. And there is absolutely nothing to stop me, as a parent, from setting up the account myself. I can have more than one email address. I know all the demographic info. I can set up the challenge questions with answers that I know. So what is the freaking point of this whole "dependent" exercise?
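
For the record, here's a sketch of the bare-minimum step they skipped: don't activate a new account until someone proves they control the email address. The function names and the mail-sending stub are hypothetical; this is just the shape of the flow, not their system.

```python
# A sketch of email confirmation before account activation. Names, URLs and
# the mail stub are hypothetical; illustration only.
import secrets

pending = {}   # one-time token -> (email, profile), awaiting confirmation
accounts = {}  # email -> activated account record

def send_email(to, body):
    print(f"(would send to {to}): {body}")  # stand-in for a real mailer

def register(email, profile):
    """Create the account in a pending state and email a one-time link."""
    token = secrets.token_urlsafe(32)
    pending[token] = (email, profile)
    send_email(email, f"Confirm your account: https://example.com/confirm/{token}")
    return "Check your email to finish setting up your account."

def confirm(token):
    """Only now does the account become usable."""
    if token not in pending:
        return "Invalid or expired confirmation link."
    email, profile = pending.pop(token)
    accounts[email] = {"profile": profile, "active": True}
    return "Account confirmed. You can now log in."
```

Of course, as I said, even this wouldn't have stopped me, because the second email address was mine too.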

The fact of the matter is, they have nothing in place to stop an impersonator. Short of reviewing the email address and guessing that it's not hers, there is no way to enforce this ridiculous policy. Drop a cookie to make sure the registering browser is unique? I can delete it. Same IP address? Of course; we live in the same house and she's using my computer. Send her some other individual magic ID number to the house? I get her mail.

This is one of those "paper tiger" security policies that simply annoy me for a span of 15 minutes.