Idoneous Security<br />
i·do·ne·ous [ahy-doh-nee-uhs]<br />
adjective: appropriate; fit; suitable; apt.<br />
Wendy Nather (http://www.blogger.com/profile/01481433737997124919)<br />
<br />
How Google turned me into my mother. (2016-02-20)<br />
We are facing a big problem, one that's hidden behind the more prominent issues of cybercrime, encryption wars, and vulnerability disclosure. It's endemic to our digital infrastructure, and it's going to get worse over time. And it's so complex that I'm not sure I can do it justice in a blog post. I've been talking about it here:<br />
<br />
<a href="https://www.youtube.com/watch?v=lU8_S0V_zOQ">https://www.youtube.com/watch?v=lU8_S0V_zOQ</a> (B-Sides London)<br />
<br />
<a href="https://www.youtube.com/watch?v=mKnKQv-0cwE">https://www.youtube.com/watch?v=mKnKQv-0cwE</a> (HouSecCon)<br />
<br />
In a nutshell, it has to do with digital delegation.<br />
<br />
What do I mean by that? I mean any situation where an online user needs to be able to delegate all or part of their access or capabilities to someone else -- whether temporarily, intermittently, or permanently. Most identity and access management models only deal with delegation in an enterprise context: Alice needs to go on PTO, and Bob needs to cover for her during that time, without anyone confusing the two people for the purpose of accountability.<br />
<br />
But real life is more complicated than that, and it involves legal protections as well. Take the reasonably simple example of a minor child. A parent or legal guardian has the authority to administer many things for a child, but the design of online accounts is often muddled. Which signups does the parent have to do, and which ones does the parent simply approve at some part of the workflow? If a registration is asking for a date of birth, whose date of birth are we talking about? And what happens when the child reaches the age of legal majority? Does the parent suddenly have to turn over access to a login, or does the parent drop out of the approval workflow?<br />
<br />
At the other end of the spectrum, we have the problem of what happens to online accounts after the owner dies. We still haven't worked that out too well yet -- there are good talks out there by people who have had to deal with it personally -- but death is a pretty permanent condition, as well as a binary one. What about temporary or intermittent delegation?<br />
<br />
If you were incapacitated today for a month -- let's say, due to the proverbial bus accident -- who would be able to pay your online bills? A friend can't just go to your bank and say, "Yeah, I just need to be set up as a secondary on this account so I can get into billpay." No, if you were conscious, you would probably just give your friend your password. And if you're using 2FA with that account on your phone (as everyone should do, right?), you'd have to hand over your phone -- oh yes, and the passcode for the phone too. Or would you let your friend change over the 2FA registration to their phone for a while, to make it easier?<br />
<br />
That's just one scenario. The harder one, which I've had to live through twice now, is the declining parent who has good days and bad days, and doesn't want to give up control of their accounts. They may be so impaired that they make mistakes with them, or forget how to use the sites, but they won't simply sign everything over to their child (and in some cases, they may already be so disabled that they can't take the legal steps to sign things over anyway).<br />
<br />
Dealing with an incapacitated loved one is heartrending. You want to allow them as much autonomy as possible, while protecting them from themselves. Above all, you don't want to have to get them declared legally incapacitated; that will ruin your relationship forever. You simply want to be able to help them out. "Hey Dad, do you just want me to log in and take care of this for you? I know you're tired today, but this bill is due." And in the future, they may have good days where they can go back to doing it themselves; you don't want to have taken over their logins, changed passwords, set up your own phone as the recovery number, and so on. There has to be a better middle ground, between impersonation (which can trigger fraud alerts) and the permanent, legal takeover.<br />
<br />
So here's the story behind the title of my talks, and this post. I had to take over my father's Gmail account when he had a stroke, so that I could get into his accounts and reset the passwords, so that I could pay my parents' bills (as well as watch those accounts for fraud). Later, I had to take over my mother's Gmail account, and I set up my personal (non-Gmail) email address as a secondary on her Gmail.<br />
<br />
What I found out was that Google helpfully associates the two addresses when you do that, so whenever someone using Gmail tried to mail me at my personal email, Google would say brightly, "Oh, you mean [my mother]!" So whenever anyone mailed me using Gmail -- business associates, friends, merchants, etc. -- the messages would be sent to me, but under my mother's name. That's pretty creepy when it happens.<br />
<br />
I went in and removed my email address as the secondary, but it didn't fix the problem. I am now permanently associated with my (now deceased) mother as far as Google is concerned. I reported this to them, but they did not consider this to be a security or privacy issue, so there you are. (I don't want to delete my parents' Gmail accounts, because I don't want an impostor popping up in the future, and there may still be alerts coming in from other accounts I don't know about.)<br />
<br />
The bottom line here is that we need a massive overhaul in the design of consumer-facing systems that can take into account different delegation cases. They need to handle authentication, re-certification, and legal proxies, and they need to understand non-binary conditions, while at the same time continuing to protect against account takeovers and fraud.<br />
<br />
Right now, this is not a crisis, since the majority of people who are becoming incapacitated did not set up much in the way of online accounts. But as the tech-savvier baby boomers age, it is going to get much worse; I have hundreds of accounts out there dating back decades, some of which I'm sure I've forgotten about and never entered into a password manager. If my children had to take over my business affairs, there would be no way for them to do it other than online (they don't know how to write a check, and all my statements arrive electronically anyway).<br />
<br />
Luckily, a few companies out there are starting to become aware of the issue and offer emergency access functionality. It's a start. But we need global, consistent mechanisms for doing this, and they need to be set up at the point of initial registration, not months after someone has managed to get a legal power of attorney signed and notarized, and has had to fax it to fifteen different entities.<br />
<br />
I don't have a ready answer for this, except that a bunch of us need to get to work on it. Our digital future as a society depends on supporting our real-world life cycles.<br />
<br />
<br />
A matter of taste. (2015-12-08)<br />
I've figured it out: The word "cyber" is like garlic.<br />
<br />
For most palates, just a bit of cyber in anything is enough. It makes it all a bit more interesting.<br />
<br />
Some people love cyber so much that they put it in everything, in massive amounts (chicken with 40 cloves of cyber, for example). Others are so sensitive to cyber that they can't stand the faintest whiff of it.<br />
<br />
If you've been raised in a culture that uses cyber a lot, you won't realize how it comes across to those who haven't grown up with it. People will pull away from you with horrified or disgusted looks on their faces and you won't know why. When you've been steeping in cyber, you don't notice the smell any more.<br />
<br />
There's even a certain part of the United States that just loves its cyber. It puts on a regular cyber festival, where you can get cyber flavor in everything. I've never been to it myself, but I can tell you right now that I will never accept cyber-ice cream.<br />
<br />
Some cultures love cyber, and some don't, but if you're part of a couple and only one of you has ingested cyber that day, you're going to have compatibility problems later on that evening.<br />
<br />
And one final thought: if you feed your toddler too many spinach pierogi with cyber, she's going to be exhaling that stench for days until it clears out of her little body. Trust me on this one.<br />
<br />
<br />
<br />
<br />
Why the airplane analogy doesn't fly. (2015-11-25)<br />
Don't get me wrong — I love Trey Ford. He is one of the most inspiring infosec pros I know. He's smart, creative, full of mind-blowing ideas, and has energy to spare. And I love <a href="http://2015.video.sector.ca/video/144659750" target="_blank">his talk at SecTor</a> about what we can learn about information sharing from the aviation industry.<br />
<br />
There's just one problem: aviation isn't all that comparable to cybersecurity.<br />
<br />
Imagine that instead of flying the plane herself, a pilot had to convince all the passengers on the flight, EVERY flight, to do the flying together. And many of them aren't good at it, and don't care; they just want to sleep or watch videos or whatever.<br />
<br />
The passengers change all the time, so you can't keep them educated on what to do. Depending on the size of the plane, there may be tens or hundreds of thousands of passengers helping with the flying. Instead of a finite maintenance crew that's under the direct control of the airline, there are dozens or thousands of different crews from third-party companies, all doing their bits (or not).<br />
<br />
The aircraft types range into the thousands, dating back to Kitty Hawk and up to the newest models, and most of them have at least some custom alterations that can be changed between flights, so the various manufacturers won't take responsibility for anything they didn't add. Remember, too, that each of those alterations was probably made for a good reason — or at least, a reason that was good at the time. (That's a huge part of what we don't know about breaches today: we sometimes know the chain of events and contributing vulnerabilities, but we make rash judgments about why they happened without knowing the full story.)<br />
<br />
The airlines all have different ideas on how they should equip their planes, so some pilots have one of everything new and shiny, and others have to make do with duct tape and bags of pretzels. (And some airlines are just now thinking that maybe having a dedicated pilot is a good idea.)<br />
<br />
Oh, and did I mention? The weather is actively trying to disrupt your flight, usually in a way that you won't notice until it's too late. (Although you still have to worry about hacktivist storm cells that want you to look bad.)<br />
<br />
All of these differences highlight our challenge in security: because everything is so complicated, so flexible, and so NOT under our individual control, we can easily blame someone else for their breaches because they did things so differently. And the pilot is nominally in charge, so that's where we concentrate the attention, but even the pilot can't get the toddler in 32B to stop screaming and fly straight. I'm not even going to mention the armchair aviation enthusiasts who sit near the runway with binoculars and lasers and provide "helpful" critique. (Oops, I guess that slipped out.)<br />
<br />
So how can we still make use of what we've learned from information sharing in aviation? As Trey says, we can at least collect data now in a way that we may be able and willing to share later. If only we had a black box that collected vital information about a breach in a way that didn't expose the inner workings of the business, or those custom-built additions. If only we could sanitize the data in a way that communicated the important lessons ("don't combine these tray tables with that boarding process, and especially don't add a pilot over 6 feet tall without upgrading the landing gear") but defanged our industry's reflexive attempts at a certain kind of blame ("how stupid was that? We'd never do that!").<br />
<br />
When I consider all this, sometimes I despair that we'll ever figure it out. But with positive thinkers like Trey, we may just have a chance.<br />
<br />
<br />
<br />
<br />
<br />
When your risk profile is different. (2015-09-07)<br />
Ready for some (more) unfounded speculation?<br />
<br />
Both people and organizations tend to want to keep their data within a circle of trust; it's why there has been (and continues to be) resistance to putting sensitive data in the cloud. It's a function of human nature to keep things close -- which is why people still keep files on their desktops or laptops, use USB drives, and run servers at home. You keep your treasures in an environment that you know best, and where you feel you have the most control over them.<br />
<br />
According to <a href="http://www.washingtonpost.com/politics/fbi-looks-into-security-of-clintons-private-e-mail-setup/2015/08/04/2bdd85ec-3aae-11e5-8e98-115a3cf7d7ae_story.html" target="_blank">the Washington Post</a>, President Bill Clinton had had a personal email server at home; Hillary Clinton had a server that had been in use during her first presidential campaign in 2008, and this same server was then set up for her at home when she took the Secretary of State post.<br />
<br />
Besides this controversy with her home email server (and yes, I commented on that on CNN, but they must not have liked most of what I had to say), I noticed the other day that apparently Caroline Kennedy <a href="http://www.nytimes.com/2015/08/26/us/politics/ambassador-caroline-kennedys-use-of-personal-email-faulted.html" target="_blank">had been using personal email as well</a> for State Department business. This suggests to me that they may have had a reason in common for doing this, one that hasn't been highlighted so far:<br />
<br />
They both have a very different risk profile from most public officials.<br />
<br />
When you're a celebrity -- independent of the position you currently hold -- your threat modeling has to include just about everyone. Any friends you have, any staff members you hire, could turn on you at any time for some perceived advantage. Now, Hillary could have had knowledge that the State Department was bad at securing its own systems, but I don't think that was it. I think she just couldn't trust staffers that worked for the agency and not for her personally. Any of them might try to access her email for political or personal reasons -- and let's face it: she's spent many, many years being embattled. The same would go for Caroline Kennedy, as well as anyone else who was famous before they took office.<br />
<br />
In other words, their threat model holds colleagues to be a higher risk than hackers.<br />
<br />
If you think this is surprising, you haven't been inside the minds of most non-security people. They have seen and experienced many more threats on a personal level than they have The Notorious A.P.T., so they will defend against the threat they believe in more.<br />
<br />
None of us really knows how secure the server ended up being (although it looks like Hurricane Sandy caused natural disasters to become a more prominent part of the threat model, which is why they finally moved it to a provider with an actual data center), so I can't comment on that. Nor am I in any position to comment on the legal or classification issues, since those seem to be changing depending on who's got the microphone at any given time. But from a threat modeling perspective, I can absolutely understand why people want to hold their staff close and their data closer.<br />
<br />
Oh, and by the way: if you can't view things from other people's perspectives, you're not going to be very good at threat modeling.<br />
<br />
<br />
Lessons in grown-up security. (2015-05-16)<br />
Okay, so for the sake of those who can't say anything, I feel I have to say something.<br />
<br />
Remember how much you hate people talking about things they don't understand? So do I. And let's face it: if you're not on the inside of an organization, you don't know 100% of what's going on there. Oftentimes it's less than 50%. And if it has to do with security, the percentage can drop as low as 10%.<br />
<br />
The hysteria around Chris Roberts supposedly hacking a plane and "making it go sideways" has reached an all-time high. Which isn't to say it couldn't go higher, because media. But let's go through the versions here:<br />
<br />
There's what he told people he did.<br />
There's what they interpreted from what he said.<br />
There's what he thought he did.<br />
There's what he actually did.<br />
<br />
Then there's the usual Telephone game of people misinterpreting, mis-reporting, and deliberately twisting all those things when they hear them second- and third-hand.<br />
<br />
But one fact remains: there are people who actually know what's possible to do, and they ain't talking. Nor will they. Even if Roberts was talking complete bullshit, nobody on the inside is going to step forward and say it publicly. So in this case, silence does not equal assent.<br />
<br />
We don't know whether the aircraft manufacturer already has experts doing pentesting, and they don't need any more, thankyouverymuch. Just because they're ignoring your reports doesn't mean they don't already know about what you think you're trying to say. They don't actually owe you an answer: "No, you didn't really get through, but if you had done THIS instead ..." Just because you decide to walk onto the court, it doesn't mean you get to be a player.<br />
<br />
We don't know why United decided to come out with a bug bounty program, although it's mighty responsible of them NOT to encourage randoms to try hacking the avionics. Those who are complaining that it's missing from the bug bounty program are completely clueless in that regard, and have probably never been personally responsible for anything more consequential than a runaway shopping cart.<br />
<br />
There may be no truth at all to what the FBI claims Roberts did, and they're just prosecuting him because letting him go free would send the wrong message to other juvenile delinquents out there.<br />
<br />
The bottom line is, if you're not actively working WITH the company whose technology you're researching, then you're an adversary. So don't be surprised if they treat you like one. United has every right to say to Roberts, "You didn't actually do anything harmful, but you're a dick, so stay off our airplanes."<br />
<br />
You can be a security researcher, but in the immortal, wise words of @wilw: Don't be a dick.<br />
<br />
Achievement unlocked? (2015-04-24)<br />
This week was Hell Week for analysts, otherwise known as Meet All The People, Inspect All The Things, otherwise known as the RSA Conference. Everything was going as expected: I made it through all the speaking engagements (at least one a day this time), spent a little time on the expo floor making a video with the awesome @j4vv4d, did the press interviews, and kissed all the hands and shook all the babies in 30-minute meeting slots.<br />
<br />
I was heading over to the Security Bloggers' Meetup, wearing some really spectacular (if you'll pardon the pun) blinking-LED sunglasses that Javvad had given me, and I decided to leave them on for the short walk across the street to Jillian's; I figured they would look good in the dark bar.<br />
<br />
All of a sudden, some male conference-goer walks by me, and in passing, he tells me, "There's a switch on the earpiece of the glasses, probably on the right, and you can turn them off that way so they won't run down the battery."<br />
<br />
WTaF. Is this guy really mansplaining to me HOW TO OPERATE MY OWN SUNGLASSES?<br />
<br />
Yes. Yes, he was.<br />
<br />
Now, this is only the most harmless of micro-aggressions compared to what other women go through ("I want to talk to an engineer, not a booth lady"), but what most people don't understand is why we don't take people's heads off at the time. It's simple: you're so stunned, you don't think of the right words until much later. Imagine someone comes up to you out of the blue and says, "Hey buddy, you're wearing socks, we're going to have to ask you to leave." Completely on automatic, you might say, "Oh, okay, sorry about that," and start moving before the rest of your brain finishes processing the "What?" And many of us are trained to be polite first and foremost, so it's a reflex that has to be overcome.<br />
<br />
So I said to the guy, "THANK YOU FOR EXPLAINING THAT TO ME. I WOULD NEVER HAVE FIGURED IT OUT BY MYSELF." (Blogger doesn't have a sarcasm font, but imagine my saying it in one.) And now I'm sure that this Derpasaurus Rex took that completely seriously and thought I was really thanking him. So I should have done better, but it did take a few more minutes for the incredulity to drain away, and then it was too late.<br />
<br />
What causes this level of pea-brained sexism to happen? I don't normally encounter it, or at least not so that I'd notice. I'm neither young nor pretty, but I was wearing a skirt at the time, which I don't normally do. What thought process goes on to make someone decide that a middle-aged mother of two, minding her own business, urgently needs sunglasses instructions?<br />
<br />
The best I can come up with is this: the guy was truly bothered by the sight of someone wearing blinking sunglasses (on top of the head) in daylight.<br />
<br />
"That's wasteful. Oh, it's a woman. She must not know how to turn them off."<br />
<br />
And it would never have occurred to him to go through the same thought process if it had been a man. He would have assumed the man had a good reason for leaving them turned on, and it might still have bothered him in some Derpy Engineer Syndrome fashion, but he would have let it go.<br />
<br />
Anyway, that was the one surreal moment from the conference this week. I think I'll put away the skirt for next year.<br />
<br />
<br />
Looking logically at legislation. (2015-01-27)<br />
There's a lot of fuss around the recent White House proposal to amend the Computer Fraud and Abuse Act, and some <a href="https://community.rapid7.com/community/infosec/blog/2015/01/23/will-the-president-s-cybersecurity-proposal-make-us-more-secure" target="_blank">level-headed analysis of it</a>. There's also a lot of defensive and emotional reaction to it ("ZOMG we're going to be illegal!").<br />
<br />
First of all, everyone take a deep breath. The reason why proposed changes are made public is to invite comment. This is a really good time to step up and give constructive feedback, not just say how much it sucks (although a large enough uproar will be taken into account anyway). Try assuming that nobody is "out to get you" -- assume that they're just trying to do the right thing, as you would want them to do for you. Put yourself in their shoes: if you had to figure out how to protect citizens and infrastructure against criminal "cyber" activity, and do it legally, how would you do it?<br />
<br />
There's another really important point here, beyond the one that if you don't like it, suggest something more reasonable. Jen Ellis <a href="https://community.rapid7.com/community/infosec/blog/2015/01/26/how-do-we-de-criminalize-security-research-aka-what-s-next-for-the-cfaa" target="_blank">talks about the challenge of doing just that</a> in her great post. And I agree with Jen that an intent-based approach may be the most likely avenue to pursue, although proving intent can be difficult. I'm looking forward to seeing concrete suggestions from others. As I've pointed out before, writing robust legislation or administrative rules is a lot like writing secure code: you have to check for all the use and abuse cases, plan for future additions, and make it all stand on top of legacy code that has been around for decades and isn't likely to change. We have plenty of security people who should be able to do this.<br />
<br />
If they can't -- if there's no way to distinguish between security researchers and criminals in a way that allows us to prosecute the latter without hurting the former -- then maybe that's a sign that some people should rethink their vocations. (It also explains why society at large can't tell the difference, and doesn't like security researchers.) After a certain point, it's irrational to insist on your right to take actions just like a criminal, force other people to figure out the difference, and not suffer any consequences. If you want to continue to do what you're doing, <a href="https://github.com/rooksecurity/CyberLaw" target="_blank">step up and help solve the real problem.</a><br />
<br />
Depends. (2014-12-10)<br />
I've always had a problem with compliance, for a very simple reason: compliance is generally a binary state, whereas the real world is not. Nobody wants to hear that you're a "little bit compliant," and yet that's what most of us are.<br />
<br />
Compliance surveys generally contain questions like this:<br />
<br />
<b>Q. Do you use full disk encryption?</b><br />
<br />
A. Well, that depends. Some of our people are using full disk encryption on their laptops. They probably have that password synched to their Windows password, so I'm not sure how much good encryption would do if the laptops were stolen. We talked about doing full disk encryption on our servers. I think some of the newest ones have it. The rest will be replaced during the next hardware refresh, which I think is scheduled for 2016.<br />
<br />
<b>Q. So is that a yes, or a no?</b><br />
<br />
A. Fine, I'll just say yes.<br />
<br />
Or they might ask:<br />
<br />
<b>Q. Do you have a process for disabling user access?</b><br />
<br />
A. It depends. We have a process written down in this here filing cabinet, but we don't know how many of our admins are using it. Then again, it could be a pretty lame process, but if you're an auditor asking whether we have one, the answer is yes.<br />
<br />
Or even:<br />
<br />
<b>Q. Do you have a web application firewall?</b><br />
<br />
A. No, I don't think so. ... Oh, we do? That's news to me. Okay, somewhere we apparently have a WAF. Wait, it's Palo Alto? Okay, whatever.<br />
<br />
<b>Q. Do you test all your applications for vulnerabilities?</b><br />
<br />
A. That depends on what your definitions are of "test," "applications," and "vulnerabilities." Do we test the applications? Yes, using different methods. Does Nessus count? Do we test for all the vulnerabilities? Probably not. How often do we test them? Well, the ones in development get tested before release, unless it's an emergency fix, in which case we don't test it. The ones not in development -- that we know about -- might get tested once every three years. So I'd give that a definite yes.<br />
<br />
The state of compliance is both murky and dynamic: anything you say you're doing right now might change next week. Can you get away with percentages of compliance? Yes, if you have something to count: "83% of our servers are compliant with the patch level requirements." But for all the rest, you have to decide what the definition of "is" is.<br />
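The way a binary survey flattens that murky reality can be sketched in a few lines. The control names and answers below are illustrative, not drawn from any real compliance framework; the point is that each honest answer is really yes / no / partial, and a survey that forces a boolean silently rounds "partial" up.

```python
from collections import Counter

# Illustrative answers: the honest state of each control, as in the Q&A above.
honest = {
    "full_disk_encryption": "partial",       # some laptops yes, servers at next refresh
    "access_revocation_process": "partial",  # written down, unevenly followed
    "web_application_firewall": "yes",       # apparently we have one somewhere
    "application_testing": "partial",        # a "definite yes," as it were
}

def survey_answer(state):
    """What a binary compliance survey records: anything short of a
    hard 'no' gets rounded up to 'yes'."""
    return "no" if state == "no" else "yes"

def compliance_report(answers):
    # Compare the honest tri-state tally with what the survey will show.
    honest_tally = Counter(answers.values())
    survey_tally = Counter(survey_answer(v) for v in answers.values())
    return honest_tally, survey_tally
```

Run against the answers above, the honest tally is one yes and three partials; the survey tally is four yeses with fingers crossed.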
<br />
Compliance assessments are really only as good as the assessor and the staff they're working with, along with the ability to measure objectively, not just answer questions. And I wouldn't put too much faith in surveys, because whoever is answering them will be motivated to put the best possible spin on the binary answer. It's easier to say "Yes" with your fingers crossed behind your back, or with a secret caveat, than to have the word "No" out where someone can see it.<br />
<br />
In fact, your compliance question could be "Bro, do you even?" and it would probably be as useful.<br />
<br />
<br />
<br />
<br />
Shock treatment. (2014-09-25)<br />
Another day, another bug ... although this one is pretty juicy. One of the most accessible primers on the Bash Bug is on <a href="http://www.troyhunt.com/2014/09/everything-you-need-to-know-about.html" target="_blank">Troy Hunt's blog</a>.<br />
<br />
As many are explaining, one of the biggest problems with this #shellshock vulnerability is that it's in part of the Unix and Linux operating systems -- which means it's everywhere, particularly in things that were built decades ago and in things that were never meant to be updated. There will be a lot of hand-wringing over this one.<br />
<br />
But I think I have a way to address it.<br />
<br />
It's a worn-out analogy, but bear with me here. Windows in buildings. Now, we know glass is fragile to different extents, depending on how it's made. Imagine that we had hundreds or thousands of "glass researchers" who published things like this:<br />
<blockquote class="tr_bq">
"Say, did you know that a 5-pound rock can break this kind of glass?" </blockquote>
<blockquote class="tr_bq">
Whereupon business owners and homeowners say: </blockquote>
<blockquote class="tr_bq">
"Oh jeez, okay, I guess we'd better upgrade the glass in our windows." </blockquote>
<blockquote class="tr_bq">
Researchers:<br />"Say, did you know that a 10-pound rock can break this kind of glass?" </blockquote>
<blockquote class="tr_bq">
Business- and homeowners:<br />"Sigh ... all right ... it's going to be expensive, but we'll upgrade." </blockquote>
<blockquote class="tr_bq">
Researchers:<br />"Say, did you know that if you tap on a corner of the glass <u>right over here</u> that it'll break?" </blockquote>
<blockquote class="tr_bq">
Business- and homeowners:<br />" ... " </blockquote>
<blockquote class="tr_bq">
Researchers:<br />"Say, did you --" </blockquote>
<blockquote class="tr_bq">
Business- and homeowners:<br />"WILL YOU FOR CHRISSESAKE GET A LIFE??"</blockquote>
<br />
Yes, glass is fragile. So is IT. We all know that. And we don't expect everyone in the world to have the same level of physical security that, say, bank vaults do.<br />
<br />
If there's a rash of burglaries in a neighborhood, we don't blame the residents for not having upgraded to the Latest and Greatest Glass.* No, we go after the perps.<br />
<br />
Without falling too much into the tactical-vest camp, I think we ought to invest more money and time into defending the Internet as a whole, by improving our ability to tag, find and <strike>neutralize</strike> prosecute attackers. Right now, the security industry's offerings are weighted heavily toward the enterprise -- because after all, especially in the case of the finservs, that's where the money is. Some vendors are trying to address critical infrastructure, automotive and health care: three areas where people can, and eventually will, die as a result of software breaches. But we shouldn't wait until that happens to go on the offensive. We need a lot more investment in Internet law enforcement.<br />
<br />
This is a case where expecting the world at large to defend itself against an infinite number of attacks just doesn't make sense.<br />
<br />
<br />
<br />
*If you think it's cheap to patch, you haven't worked in a real enterprise.<br />
<br />Wendy Nather<br />
<br />
<b>A tenuous grasp on reality.</b> (September 18, 2014)<br />
<br />
"Don't blog while angry," they say. Well, it's too late now.<br />
<br />
One thing that has bothered me for years is the tendency for security recommendations to lean towards the hypothetical or the ideal. Yes, many of them are absolutely correct, and they make a lot of sense. However, they assume that you're starting with a blank slate. And how many people ever run into a blank IT slate in the real world?<br />
<br />
Here are some examples.<br />
<br />
"Don't have a flat network." Well, that's very nice. And it's too late; we already have one. Any idea how much time, effort and money it will cost to segment it out? Start with buying more (or new) network equipment; then think about the chaos that IP address changes bring to multiple layers of the stack: firewall rule changes (assuming you have firewalls), OS-level changes, and application changes (and you can bet that IP addresses are hard-coded all over the place). Think about the timing a migration needs -- maybe Saturday after midnight until Sunday 6am in the local time zone? Then there's figuring out which hosts really need to talk to which other hosts, on which ports, because while you're playing 52-node pickup, you might as well put in some least privilege.<br />
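Even the first step -- finding out where addresses are hard-coded -- is a project in itself. As a rough sketch (the scan root and file extensions here are placeholders for your own environment, and the regex assumes GNU grep):

```shell
# Crude inventory of hard-coded IPv4 addresses ahead of a re-segmentation.
# SCAN_ROOT and the --include patterns are hypothetical; point them at
# your own config trees, source repos and firewall rule exports.
SCAN_ROOT=${SCAN_ROOT:-/etc}
grep -rhoE '\b([0-9]{1,3}\.){3}[0-9]{1,3}\b' \
    --include='*.conf' --include='*.xml' --include='*.properties' \
    "$SCAN_ROOT" 2>/dev/null | sort | uniq -c | sort -rn
```

Each address comes back with a count of how many times it appears, most frequent first -- a quick way to see which renumbering is going to hurt the most.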
<br />
"Build security into applications early in the SDLC." Yes, absolutely, great idea. What are we going to do with the applications that are already there? Remember those calculations on how much it will cost to fix something that's already in production as opposed to fixing it in development? There's no way around it: you're going to need a bigger checkbook.<br />
<br />
"Stay up-to-date on commercial software." Well, what if you're two years behind? (It really does happen.) Again, you're looking at a Project to implement this seemingly simple idea. From the ERP implementation that isn't supported by the vendor yet on the newest operating system, to the dozens of different JVM versions you've got running in production, this is a much more expensive recommendation than you realize.<br />
<br />
And yes, these recommendations all hold true for what enterprises should do <u>going forward</u>. But in most cases, they need help making the changes from what they have and what they're doing today.<br />
<br />
It would help so much more if instead of couching recommendations and standards in terms of <u>where you should already be</u>, you talked about them in terms of <u>how to get there</u>. After all, security is a process, not an end state; a journey, not a destination. Everyone starts out from different points, and needs different directions to know where to go. An organization with 200 security staff and thousands of applications can pivot in ways that an enterprise with two security staff and everything hosted by a third party simply can't.<br />
<br />
So recommendations might look like this: "From now on, you should not automatically grant administrative rights to each desktop user. And here's how you go about taking those rights away from the ones who already have it." I believe incorporating more flexible and realistic security principles will make them easier to swallow by the people who have to implement them.<br />
<br />
<br />Wendy Nather<br />
<br />
<b>How to help.</b> (August 24, 2014)<br />
<br />
There are a few movements afoot to help improve security, and the intentions are good. However, to my mind some are just more organized versions of what we already have too much of: pointing out what's wrong, instead of rolling up your sleeves and fixing it.<br />
<br />
Here are examples of Pointing Out What's Wrong:<br />
<br />
<ul>
<li>Scanning for vulnerabilities.</li>
<li>Creating exploits.</li>
<li>Building tools to find vulnerabilities.</li>
<li>Telling everyone how bad security is.</li>
<li>Creating detailed descriptions of how to address vulnerabilities (for someone else to do).</li>
<li>Creating petitions to ask someone else to fix security.</li>
<li>"Notifying" vendors to fix their security.</li>
<li>Proving how easy it is to break into something.</li>
<li>Issuing reports on the latest attack campaigns.</li>
<li>Issuing reports on all the breaches that happened last year.</li>
<li>Issuing reports on the malware you found.</li>
<li>Issuing reports on how many flaws there are in software you scanned.</li>
<li>Giving out a free tool that most orgs don't have the time or expertise to use.</li>
<li>Performing "incident response," telling the victim exactly who hacked them and how, and then leaving them with a long "to-do" list.</li>
</ul>
<br />
None of this is actually fixing anything. It's simply pointing out to someone else, who bears the brunt of the responsibility, "Hey, there's something bad there, you really should do something about it. Good luck. Oh yeah, here, I got you a shovel."<br />
<br />
Now, if you would like to take actual steps to help make things more secure, here are some examples of what you could do:<br />
<br />
<ul>
<li>Adopt an organization near you. Put in hours of time to make the fixes for them, on their actual systems, that they don't know how to do. Offer to read all their logs for them, on a daily basis, because they don't have anyone who has the time or expertise for that.</li>
<li>Fix or rewrite vulnerable software. Offer secure, validated components to replace insecure ones.</li>
<li>Help an organization migrate off their vulnerable OSes and software. </li>
<li>Do an inventory of an organization's accounts -- user, system, and privileged accounts -- and lead the project to retire all unneeded accounts. Deal with the crabby sysadmins who don't want to give up their rlogin scripts. Field the calls from unhappy users who don't like the new strong password guidelines. Install and do the training and support on two-factor authentication.</li>
<li>Invent a secure operating system. Better yet, go work for the maker of an existing OS and help make it more secure out of the box.</li>
<li>Raise money for budget-less security teams to get that firewall you keep telling them they need. Find and hire a good analyst to run it and monitor it for them.</li>
<li>Help your local school district move its websites off of WordPress.</li>
<li>Host and run backups for organizations that don't have any.</li>
</ul>
<br />
And if you're just about to say, "But that takes time and effort, and it's not my problem," then at least stop pretending that you really want to help. Because actually fixing security is hard, tedious, thankless work, and it doesn't get you a speaker slot at a conference, because you probably won't be allowed to talk about it. Yes, I know you don't have time to help those organizations secure themselves. <u>Neither do they.</u> Naming, shaming and blaming are the easy parts of security -- and they're more about self-indulgence than altruism. Go do something that really fixes something.<br />
<br />
<br />Wendy Nather<br />
<br />
<b>Want some more bad news?</b> (May 30, 2014)<br />
<br />
I didn't think so, but I had to share this anyway.<br />
<br />
I was listening today to a presentation by the CTO of Dell SecureWorks, <a href="http://www.secureworks.com/company/team/jon_ramsey" target="_blank">Jon Ramsey</a> (who for some reason has not yet tried to implore me to stop calling him "J-RAM"). He's always full of insights, but this one was both unsurprising and earth-shattering at the same time.<br />
<br />
He pointed out that the half-life of a given security control is dependent upon how pervasive it is. In other words, the more widely it's used, the more attention it will be given by attackers. This is related to the <a href="http://newschoolsecurity.com/2009/08/mortmanhutton-security-bsides-black-hat-presentation-available/" target="_blank">Mortman/Hutton model for expectation of exploit use:</a><br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://4.bp.blogspot.com/-RFF_Rf1FW8c/U4kVmGPugKI/AAAAAAAAAJg/yrMT9eOtXQ8/s1600/bh31.jpg" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" src="http://4.bp.blogspot.com/-RFF_Rf1FW8c/U4kVmGPugKI/AAAAAAAAAJg/yrMT9eOtXQ8/s1600/bh31.jpg" height="312" width="640" /></a></div>
<div class="separator" style="clear: both; text-align: center;">
<br /></div>
<div class="separator" style="clear: both; text-align: left;">
And yes, so far the water is still pretty wet. But he also pointed out that the pervasiveness of a control is driven by its requirement for compliance. </div>
<br />
In other words, once a particular security technology is required for compliance, its pervasiveness will go through the roof, and you've just lowered its effectiveness half-life to that of hydrogen-4. The very act of requiring more organizations to use it will kill off its utility. (And this is even worse than Josh Corman's <a href="http://www.csoonline.com/article/2124517/compliance/analyst--pci-security-a-devil---like-no-child-left-behind-.html" target="_blank">denigration of PCI-DSS</a> as equivalent to the "No Child Left Behind" act.)<br />
<br />
Does this sound an awful lot like "security through obscurity" to you? <a href="http://spiresecurity.com/" target="_blank">Pete Lindstrom</a> put it nicely when he said that this means the best security control is one that is the least used.<br />
<br />
Now, we all know that security is an arms race between the adversary and the defender, and we also know that obscurity makes up a good portion of the defense on both sides. The adversary doesn't want you to know that he's figured out how to get through your control, and you don't want him to know that you know he's figured it out, so that you can keep on tracking and blocking him.<br />
<br />
If so much of security relies on a contest of knowledge, then it's no wonder that so much of what we build turns into wet Kleenex at the drop of a (black) hat.<br />
<br />
This means that we need more security controls that can't be subverted or deactivated through knowledge. In this case, "knowledge" often means the discovery of vulnerabilities. And the more complex a system you have, the more chances there are for vulnerabilities to exist. Getting fancier with the security technology and layering it both make it more complex.<br />
<br />
So if we're trying to create better security, we could be going in the wrong direction.<br />
<br />
The whole problem with passwords is a microcosm of this dilemma. A user gains entry by virtue of some knowledge that is completely driven by what he thinks he'll be able to remember. This knowledge can be stolen or guessed in a number of ways. We <u>know</u> this is stupid. But to turn this model on its head will require some innovation of extraordinary magnitude.<br />
<br />
Can we design security controls that are completely independent of obscurity?<br />
<br />
If you want to talk this over, you can find me in the bar.<br />
<br />
<br />
<br />Wendy Nather<br />
<br />
<b>The power of change.</b> (March 22, 2014)<br />
<br />
<i>[Yeah, I know, it's been a long time since I updated this blog. When you write for a living, you tend to write the things you get paid for first, and often you don't have any time or ideas left over after that.]</i><br />
<br />
So much of security is about doing it, not just having it. The best products can be useless in the wrong hands: you have something that's supposed to be blocking, but you have it only in logging mode. Or you have it in blocking mode, but you only enabled a few of the available rules. You have it installed in the wrong location on your network because that's the only place you could put it. It tells you what you need to know, but you never have time to look at it. And so on.<br />
<br />
I believe that most of security relies on detecting and controlling change. And there are so many aspects to change that have to be considered.<br />
<br />
<ol>
<li><b><u>Recognizing change.</u></b> Do you know how your systems, operations and usage patterns are supposed to look? This is probably the biggest challenge, believe it or not. Knowing your baseline -- everything that normally happens in your environment -- is too much for any one group or team; the knowledge is institutional, and will be spread across staff at every level. The network team may know what normal traffic looks like, but they may not know what makes it normal.</li>
<li><b><u>Detecting change.</u></b> This might sound like it's the same as the item above, but it's not. Detection implies both recognition and timeliness. It requires finding what has changed, when, in what way, and by how much. </li>
<li><b><u>Understanding change.</u> </b>This is the next step in the line: if you know what has changed, and the details, then you should be able to understand the root cause of the change (someone was trying to restore a database) and the implications of the change (it clobbered the production version and now it has to be rebuilt, before the stock markets open in the morning). Or you realize that the change happened for a particular reason, and it will likely be followed by other changes (a connection is made to an unknown external system, and your sensitive data is about to head in that direction).</li>
<li><b><u>Initiating change.</u></b> Making a change happen is often interwoven with politics. Why do you want to initiate the change? Are you allowed to request it? Who will execute the change? These are all big questions when, for example, you are trying to get a security hole plugged ("Turn off that SMTP relay! Now!!").</li>
<li><b><u>Designing change.</u></b> If you understand the effects of a change, you may need to make sure it happens with certain timing, in a certain sequence, on certain systems, done by certain people. You have to design the change, especially if it's a complex one -- say, a migration off of Windows XP.</li>
<li><b><u>Controlling change.</u></b> Making a change happen the way you designed it to happen, within the time frame that you need, without political or organizational fallout, is harder than you think. It may take longer than you hoped for that application's vulnerabilities to be patched. If changes need to be approved by other entities, you may need to persuade them to give that approval. The person assigned to execute the change doesn't actually know how to do it, and is going to do it wrong. Reverting a change could be more complicated than making the original change. </li>
<li><b><u>Preventing change.</u></b> Most people think that security is all about this part, but it's not. Any business requires change, and it's up to the security team to help ensure that change happens in the right ways. In some cases it's still important to stop some changes from occurring (such as malware being deposited), but the prevention has to be fine-grained enough to address only those change cases. For example, this setting can be changed, but it should never be done on Sundays. This rule can be changed, but only by people in this role. Or something can be changed, but it has to create a notification for someone else who needs to know about it. These types of changes should not be made unless they are logged. </li>
</ol>
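Item 2 above, at its humblest, starts with a baseline. A minimal sketch of detecting file-level change (WATCH_DIR and the snapshot filenames here are placeholders for whatever you actually need to watch -- configs, web roots, cron directories):

```shell
# Record a baseline of file hashes, then diff a later snapshot against it.
WATCH_DIR=${WATCH_DIR:-/etc}
find "$WATCH_DIR" -type f -exec sha256sum {} + 2>/dev/null | sort -k2 > baseline.txt

# ... time passes, changes happen (or not) ...

find "$WATCH_DIR" -type f -exec sha256sum {} + 2>/dev/null | sort -k2 > current.txt
diff baseline.txt current.txt && echo "no changes detected"
```

That only tells you <u>that</u> something changed; understanding why, and what to do about it, is where the rest of the list comes in.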
<br />
<br />
Business rules, as well as risk management decisions*, will dictate how changes are initiated, designed, approved, executed, recorded and evaluated. And if you don't have a good handle on these business and risk requirements, you'll be severely hindered in detecting unauthorized changes and responding to them.<br />
<br />
You can't use malware detection if it takes too much work to figure out what a false positive is. Rather than being able to understand those changes, you'll ignore alerts about them until a bigger change happens as a result. (Or until you get a call from the Secret Service.)<br />
<br />
There's no point in running a vulnerability scanner if you can't cause fixes to be made based on the findings. Moreover, you shouldn't use a vulnerability scanner as a "compliance detection" tool: it may tell you that your systems are configured exactly the way your policies specified and they haven't changed, but you may have stupid policies.<br />
<br />
If you can't figure out the effects of a change, then you will either try to prevent it out of fear, or you will execute it wrongly. If you don't know how a change is made, you can't design a process that will limit collateral damage.<br />
<br />
Luckily, you can start mastering all these aspects of changes without spending a lot of money. Just knowing what your systems, applications and users are <i>supposed</i> to do is a huge start towards effective security -- and it takes time, but it's cheap. After that, the road will be different for every organization, because the changes, effects and control points will be different. There will be things you can't change, changes you can't detect, or business processes that constrain (or mandate) change. Look at what power you have over change, to figure out how secure you can be.<br />
<br />
<br />
*Notice I didn't say "security best practices."<br />
<br />Wendy Nather<br />
<br />
<b>What's my name? No, really, what is it?</b> (November 21, 2013)<br />
<br />
[Warning: rant ahead. Slow to impulse power, Mr. Sulu.]<br />
<br />
Ever since I've been responsible for user-facing applications -- which is probably since the early Jurassic period -- and ever since I've been using pentesters on those apps, which was probably two seconds after the Jurassic was over -- I've run into the same problem, over and over again.<br />
<br />
It's the ridiculous security trope that "username and password feedback is bad."<br />
<br />
It's one of the first things that a pentester points out: you can find out valid email addresses or usernames by putting in a bad one and looking at the response. Yes, I know this to be the case.<br />
<br />
IT'S ON PURPOSE.<br />
<br />
Anyone who has had to provide user support on an application knows how much of that burden is due to users forgetting their usernames, or forgetting which email address they used. Remember: you have one application. The user may have dozens or hundreds of accounts in applications across the Internet, some of which they may use only once in a few years. It's unrealistic to expect them to have been writing down which usernames, email addresses and passwords they've been using since the '90s (especially if they were assigned those usernames -- remember when that was a fad?). Unless you think it's okay for them to have saved everything in their browser ... no? I didn't think so.<br />
<br />
It's bad enough when they get a clear message saying "We've never heard of you," and they're sure that they do have an account on that system. <br />
<br />
Can you imagine how much worse the support load gets, and how much more frustrating it is for the user, if the application refuses to tell them what's wrong?<br />
<br />
<blockquote class="tr_bq">
<b><span style="font-family: "Courier New",Courier,monospace;">We may or may not have sent a password reset email to the address you typed in. Even if we sent one, you may not be registered on our system, so the password won't do you any good. Ha ha ha. Take THAT, HaXX0rz!!!1</span></b></blockquote>
<br />
If you want to prevent user enumeration attacks, you had better have a good alternative in mind. You CANNOT forget the real reason the system is there, and what that user understands and needs in order to use it. If you can't suggest anything that helps, then you are myopic in the extreme, and you're probably just playing reindeer games with other hackers, not contributing usefully to the business that hired you.<br />
<br />
Treating username feedback as evil is just playing into the "security by obscurity" mindset. If your system can't withstand attacks by someone who knows a valid username or email address, then you have MUCH bigger problems to solve. Throwing your users under the bus because it's easier for YOU is not the way to solve them.<br />
<br />
Thank you for listening.<br />
<br />
<br />Wendy Nather<br />
<br />
<b>Things I've learned about CFPs.</b> (July 26, 2013)<br />
<br />
Here are some great tips in this article on <a href="http://www.net-security.org/secworld.php?id=14432" target="_blank">how to submit to calls for papers</a> (CFPs). I'd like to add a few more, based on my own experience:<br />
<br />
<ul>
<li>Don't be afraid to reveal the plot. Some people think that they need to submit something very high-level and save the good stuff for the actual talk. As it turns out, this isn't true. You have to show a little leg (or a lot) so that the reviewers know enough about what the conference will be getting. </li>
<li>Here are some things that I personally find less enticing as topics: presos about reports (why can't I just read the original report, not sit for an hour listening to someone talk about it?); talks about surveys (unless they promise really surprising or controversial results); talks about historical topics (the only person I've known to do this well is <a href="https://twitter.com/shoebox" target="_blank">Schuyler Towne</a>); and meta-talks that aren't about security themselves, but are about 'X in security.' (Analysts in security, women in security, beards in security, ear-candling in security, whatever.)</li>
<li>If you're submitting a talk on how bad things are in security, join the club of about 1,000,000 members. If you're submitting a talk on how you fixed something in security, you have a much better chance of standing out from the crowd.</li>
<li>They say you can't judge a book by its cover, but the cover IS part of the book, and that's all the reviewers are going to see. Make sure the title and the dust jacket blurb are really compelling. Really - go look at some best-selling books for examples. When it comes time for the conference, people are going to choose to attend your talk or not in about half a second when they're reading the schedule. You'll need to grab them right then and there.</li>
<li>If you're not accepted, ask for feedback. Most conference committees will give you feedback if you ask nicely. This will help you fix your submission for next time. Sometimes it's a matter of "We got 20 submissions on this topic, and sorry, yours wasn't the strongest." Sometimes it's "Are you kidding? Why did you ever think this was an appropriate submission?" I've seen a case where someone submitted the same (appalling) talk abstract over and over again, year after year. Maybe they thought the conference review was a lottery, and they'd win it some day. But if only they'd asked for feedback, it could have saved them the trouble of submitting every year, because it was NEVER going to be accepted.</li>
</ul>
Good luck to all, and may the odds be ever in your favor.<br />
<br />
<br />Wendy Nather<br />
<br />
<b>Sauce for the gander.</b> (June 20, 2013)<br />
<br />
I attended the Dell annual analyst conference a couple of weeks ago, and was privileged to witness something that made me extremely happy.<br />
<br />
As is typical with these analyst events, the vendor features a few customer companies who talk about what they've done with the vendor products, how it helped their business, etc. Also, as is typical with analyst events, we all get little souvenir bags handed out to us with vendor-branded schwag.<br />
<br />
Well, all the stars aligned this time, because one of the featured customers was Revlon.<br />
<br />
And the goodie bags all contained Revlon products. You know -- lipstick, mascara, nail files, that sort of thing. As a sop to the men, there was a masculine sort of deodorant stick included.<br />
<br />
I couldn't help but grin when I saw the reactions around the room. Most of the men were looking into the bag with an expression of, <b>"What the ... I don't even ... what IS this? This isn't meant for me. WTF am I supposed to do with this?"</b><br />
<br />
Guess what, guys in technology? This is EXACTLY the reaction that we (straight) women have when we're confronted by a booth babe.<br />
<br />
Le boom.<br />
<br />
<br />
<br />
<br />Wendy Nather<br />
<br />
<b>If at first you don't succeed, FAIL, FAIL again.</b> (June 10, 2013)<br />
<br />
Here's an example of security FAIL at its finest.<br />
<br />
I have an account for a service online, for which I have to manage things for the rest of my family as well. This service recently switched to another company, and I logged into the new website to find that their policy is that my oldest child is considered an "adult dependent," and I have to get permission to manage the service for her. This "permission" comes in the form of an "invitation" that she needs to send me, which sends me a magic code that I have to input from my account, and then my access is enabled, and everything is supposed to be hunky-dory.<br />
<br />
The only thing is, my child is not set up with her own account, because up until now she was just set up as a dependent. So I asked Customer Service what to do, and they said, "Have her register an account and then send you an invitation."<br />
<br />
To hell with that. I registered her account myself, which was linked to my own member ID anyway. I figured they would bounce a registration with a duplicate email address, so I used a second email address of my own. They didn't even send a confirmation link to that address; as soon as I registered with all the demographic information (which of course I know quite well), I was logged in to "her" account. And I just took care of business.<br />
<br />
So here's where the security design fails, bigtime. I don't know whether someone bothered checking for a duplicate email address on registration, but it didn't matter, because they didn't even use it to confirm before finishing the account setup. And there is absolutely nothing to stop me, as a parent, from setting up the account myself. I can have more than one email address. I know all the demographic info. I can set up the challenge questions with answers that I know. So what is the freaking point of this whole "dependent" exercise?<br />
<br />
The fact of the matter is, they have nothing in place to stop an impersonator. Short of reviewing the email address and guessing that it's not hers, there is no way to enforce this ridiculous policy. Drop a cookie to make sure the registering browser is unique? I can delete it. Same IP address? Of course; we live in the same house and she's using my computer. Send her some other individual magic ID number to the house? I get her mail.<br />
<br />
This is one of these "paper tiger" security policies that simply annoys me for a span of 15 minutes.<br />
<br />
<br />
<br />
<br />Wendy Nather<br />
<br />
<b>The view from the other side.</b> (May 21, 2013)<br />
<br />
Many thanks to Wim Remes, (ISC)^2 board member, for sending me his view of the cert issue and for letting me post it here. <u><b>DISCLAIMER</b></u>: this is Wim's own view and does not represent the rest of the board or the organization as a whole.<br />
<br />
<blockquote class="tr_bq">
Hey there,<br /><br />I do disagree on the CISSP being an entry level cert. It's a pity it has been used by HR drones as a bar for selection because I honestly believe that was never the goal of the cert. With prevalence came expectations too high for a piece of paper to fulfill, or for an organisation to prove the contrary. In my opinion the cert, first and foremost, establishes a common vocabulary among professionals that allows us -even though from different backgrounds and with different focus areas- to talk the same language and understand eachother. The second part I believe the organisation does -not the cert itself- is support an ecosystem of professionals. This has been established through a revised strategy (member-focused instead of product-focused) last year and the establishment of our chapter program. This ecosystem relies on an influx of 'new' people and the support of 'elders'.<br /><br />While I respect your decision, I don't think it's the right one. In what we are trying to accomplish, people like you are elementary. And frankly, there is no other org that is positioned to even try this.<br /><br />If anything, we need to work on communication. I do know that more than 50% of what I've written here is not commonly known among the membership and that is a very sad state of affairs.<br /><br />We have done nothing but focus on preparing the org to go full force on the member-focused strategy, because it is the right thing to do.<br /><br />Again, I fully respect your decision. Nothing I can do about that.<br /><br />Cheers,<br />Wim</blockquote>
Wendy Nather<br />
<br />
<b>Going paperless.</b> (May 21, 2013)<br />
<br />
<b>UPDATE</b>: Boy, this generated a lot more response than I had anticipated.<br />
<br />
Let me make it clear: I really respect and admire what members of the (ISC)^2 board are trying to do, and they have a big job ahead of them. I don't think the CISSP is completely useless; there are areas where it's quite useful (I could write a whole post just on the challenges of hiring security pros in government). It's just not something I personally want to put time and money into maintaining.<br />
<br />
If anyone can make me change my mind, it'll be Wim Remes and Dave Lewis; they can do just about anything they put their minds to. They're the vanguard of people who are trying to improve the industry, and the world is a better place already because of them.<br />
<br />
===============<br />
<br />
Much as I love some of the (ISC)^2 board members and heavily involved volunteers, I've decided to let my CISSP certification lapse.<br />
<br />
I never actually planned to get it to begin with; I only signed up for the exam because there was a job I thought I might apply for, and the CISSP was required. By the time I decided to go in a different career direction, it was too late for me to get my exam fees back (and for that amount of money, I could have bought a laptop or some wicked designer shoes). So I crammed for about a day and a half, went to the exam, came out two hours later, and was done. Relatively painless, except for the extortion of certain former colleagues to get the recommendation forms filled out.<br />
<br />
Since then, having that certification has done nothing for me, except to make me have to look up my number every so often when registering for a conference. As an analyst, I earn CPEs at least once a week, and I suppose if I could just send (ISC)^2 <a href="https://451research.com/search?author=Wendy+Nather" target="_blank">a link like this</a> to be done with the submission, it might be less annoying. But filing them individually? And possibly being audited on them? Ain't nobody got time for that.<br />
<br />
Besides, it still chafes me to think of paying good money every year to be allowed to do something I don't want to do anyway: put letters after my name. At this point, CISSPs are so common, they're like a bachelor's degree:* if you have to brag about it, you probably don't have anything else going for you.<br />
<br />
After decades of being in IT, I no longer want to bother proving how much I know. If someone can't figure it out by talking to me or reading my writing, then I don't want their job. If they feel so strongly about that certification that they won't waive it for me, then they don't want me either, and that's okay. (And if someone is trying an <a href="http://en.wikipedia.org/wiki/Argument_from_authority" target="_blank">argument from authority</a> and won't listen to me because I don't have a current CISSP, then send 'em my way; I could use the belly laugh.)<br />
<br />
I suppose a CISSP might be useful for people starting out in security, who need to prove that they've actually put in a few years at it and know the basics. It's a handy first sorting mechanism when you're looking to fill certain levels of positions. But by the time you're directly recruiting people, you should know why you want them other than the fact that they're certified. And then the letters aren't important.<br />
<br />
I know that the (ISC)^2 board is working hard to pump up the value of the certifications, and I wish them luck with that. I think their biggest challenge will be getting them out of the category of "gate tickets": if having one helps you get through a gate, then you won't feel like you need it any more after that. (You don't have to keep maintaining a college degree; once you've obtained it, that's good enough for anyone who requires it.)<br />
<br />
It'll be hard to create ongoing value for those of us who are past that stage in our careers. Especially for those of us who are too old to go <a href="http://www.j4vv4d.com/security/benefits-of-cissp/" target="_blank">kicking down doors.</a> Maybe in the far future, a security certification will hold the same weight as an engineering one, and need to be maintained in good standing in order to practice your craft. But for that to happen, a lot of other attitudes around security will need to change. More on that in another post.<br />
<br />
<br />
<br />
*Which I also don't have, by the way.<br />
<br />Wendy Natherhttp://www.blogger.com/profile/01481433737997124919noreply@blogger.comtag:blogger.com,1999:blog-5041863148829165266.post-75052921278738825492013-03-01T19:50:00.000-08:002013-03-01T19:50:00.330-08:00The data cleanse.Everyone talks about the evils of multitasking, and everyone still does it. I'm becoming convinced, though, that the problem isn't multitasking in and of itself; it's the massive ingestion of data that is putting a strain on our digestive systems.<br />
<br />
All of this is represented neatly by browser tabs. Think about it: each one represents a window on a full set of data, whether it be a news site, a blog post, an online store, an animated gif, a social media site, or a white paper. You can switch between them instantaneously, and your brain has to make a context switch and encompass the entirety of whatever you're doing with that tab.<br />
<br />
Remember when you read just one book at a time? (Okay, maybe two or three, but they were scattered around the house, and you put one down before you picked up the other one.) You didn't have the sheer speed and volume of context switching that we have today just on our computer screens alone.<br />
<br />
And that's not counting the additional inputs represented by phones, music, TV, and conversations. Each texting thread is a conversation, an interaction with another person. Texting and email allow you to conduct conversations with a potentially large number of people at nearly the same time, and with each one, you need to remember what has just gone before, and what you were planning to say next.<br />
<br />
Conferences just kick up the stress even more for me. In half-hour chunks, I meet anywhere from one to five people, and learn their names, titles, affiliations and roles. I also hear all about what they're working on full-time, and I have to digest as much of it as I can in that 30-minute span. Once they leave, I have to flush the cache and start over again with the next group of people. This can go on all day, for several days: in a typical RSA conference day this week, I gave a talk, met several people beforehand and afterwards, and went through nine or ten 30-minute meetings, followed by group socializing in the evening. I have no idea how many people I interacted with in total, nor can I remember most of them or our conversations, unless I got their business cards or took notes.<br />
<br />
Contrast this with the way the previous generations grew up. A person might meet two or three new people in a day if they were busy. More likely, they'd interact with several people they already knew in an office or classroom; they might go home and have a phone conversation with one or two people. They'd read a book or a magazine, or they'd watch a TV program (this was before DVRs, so whatever was on was what you watched, until it was done). Pretty much one at a time, with some amount of overlap, but nowhere near the broadband mode we function in today. And if you grew up in an isolated area, you might not talk to anyone else on a regular basis unless you lived with them. Some people didn't read more than one or two books in a year, if they read them at all.<br />
<br />
The most challenging multitasking I did in my youth was waiting tables, where you only needed to remember the status of six tables at any given time (during a rush, getting up to nine tables was really pushing it). But at least I didn't have to remember more than who was waiting for their cheeseburger, and I could forget everything about the customers as soon as they left the restaurant.<br />
<br />
So sometimes I need to go on a data diet. I need to dial back to the number of inputs that my parents and their parents had. That means putting down the computer, putting down the phone, and doing something for a sustained period of time that doesn't involve reading, and that requires direct focus. (Cooking is good for this, because if you don't pay attention, you're likely to get hurt or at least burn the food.)<br />
<br />
Think of it as <b>Paleo for the Brain</b>. I need to consume data the good old-fashioned way, without artificial inflation or modification: one input at a time, chewing your data thoroughly before you swallow.<br />
<br />
If you are feeling frazzled and want to try this, I suggest starting with a <b>Data Cleanse</b>. Stop reading. Don't read <u>anything</u>. Don't talk to anybody. Don't listen to music or watch TV. Just sit in a room without doing anything more complicated than, say, folding laundry. It can be excruciating if you're not used to it; you miss the constant streams of inputs, and your brain doesn't know how to generate its own at the same rate. But eventually your mind will spin down, and you'll go back to having one clear, comprehensible thought at a time that will last for more than a few seconds.<br />
<br />
Then you can work your way back into connectivity. Watch one movie -- all the way through, beginning to end (and without the commentary track turned on). Write a long letter or blog post. (See what I did there?) Call someone up on the phone (how quaint!) and talk to them for at least fifteen minutes. Draw a picture. Go for a slow walk. Be conscious of what you're taking in, and throttle the rate. Make sure to take data breaks where you're not interacting with anything or anyone in a way that involves language.<br />
<br />
Don't get me wrong: the Internet is a nifty place. But sometimes it feels as if I've been feeding my brain a bunch of pork rinds and ice cream. In order to have higher quality thoughts, I need to have fewer of them, and that means taking in less fuel for the fire.<br />
<br />
<br />Wendy Natherhttp://www.blogger.com/profile/01481433737997124919noreply@blogger.comtag:blogger.com,1999:blog-5041863148829165266.post-59307857062600035472013-02-21T17:41:00.004-08:002013-02-21T17:41:59.737-08:00Pack all the things!!It's almost RSA time. I haven't figured out yet how many pallets of business cards to bring along, but I have got my two dozen Band-Aids and blister cream, three pairs of shoes, and two backup power supplies. So I'm pretty well set.<br />
<br />
I'm looking forward to spending at least some of Sunday at B-Sides San Francisco, where there are some cool talks such as "Sorry Your Princess is in Another Castle: Intrusion Deception to Protect the Web" by Kyle Adams, and "My First Incident Response Team: DFIR for Beginners" by Chort (I feel as though the latter one should come with a picture book and a juice box).<br />
<br />
I missed TongaCon last year, and I simply can't do that any more; if @Gillis57 is going to rickroll the Tonga Room again, I need to be there to lend a hand.<br />
<br />
On Monday I'll be moderating two panels at the AGC Partners' 9th Annual West Coast Emerging Growth Conference (and that's the only time I'll try to type that again or say it out loud for the week). "New Frontiers in Endpoint Security" and "Taking the Fight to the Adversary: Threat Intelligence in 2013" are both going to be fun -- not just because there are going to be great panelists from many companies, but also because there have been some <a href="https://blog.bit9.com/2013/02/08/bit9-and-our-customers-security/" target="_blank">recent</a> <a href="http://intelreport.mandiant.com/" target="_blank">headlines</a> that fit very well with both topics.<br />
<br />
Tuesday is our company's breakfast event, and it's another chance to catch up with a lot of people I missed seeing last year. Wednesday morning is my panel with esteemed colleague Daniel Kennedy, "Psychographics of the CISO," also starring two actual live rockstar CISO types. And it'll be great to see the crowd at the Security Bloggers' Meetup in the evening. Rumor also has it that the <a href="http://www.misogynynetworks.com/" target="_blank">Girls of Misogyny Networks</a> will be in evidence somewhere on the RSA exhibit floor.<br />
<br />
Friday is my talk with Andy Ellis (@csoandy) on "Living Below the Security Poverty Line: Coping Mechanisms," and I'm happy to be able to present this topic in front of an important audience.<br />
<br />
And the rest of the time? Well, my Outlook calendar view for the week currently says "66 items."<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://3.bp.blogspot.com/-6O4cPTe4_ks/USbLUVGK3hI/AAAAAAAAAHk/yvuHCGu2GFE/s1600/jugglin.gif" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="179" src="http://3.bp.blogspot.com/-6O4cPTe4_ks/USbLUVGK3hI/AAAAAAAAAHk/yvuHCGu2GFE/s320/jugglin.gif" width="320" /></a></div>
<br />
See you there.<br />
<br />Wendy Natherhttp://www.blogger.com/profile/01481433737997124919noreply@blogger.comtag:blogger.com,1999:blog-5041863148829165266.post-6557851523296345352013-02-19T17:57:00.001-08:002013-02-19T17:57:56.644-08:00Exercises left to the reader.The <a href="http://intelreport.mandiant.com/" target="_blank">Mandiant report on the threat group it calls APT1</a> has made a big splash, and deservedly so: the combination of juicy details and actual data such as IOCs (indicators of compromise) is another example of groundbreaking data-sharing around security breaches. Of course, the other side to the publication is its assertion that APT1 is Chinese in origin, and most likely a part of the Chinese government; this is going to provoke a lot of heated discussion.<br />
<br />
I've seen some responses already from skeptics, <a href="http://jeffreycarr.blogspot.com/2013/02/mandiant-apt1-report-has-critical.html" target="_blank">casting doubt on the report's conclusions</a> based on the fact that it didn't include any alternative conclusions other than two that pointed at China (operating either officially or unofficially). Before I get into my own opinion on it, I'd just like to throw out some considerations.<br />
<br />
First of all, <u><b>read the report in its entirety</b></u>. The authors spent a lot of time connecting every dot they listed, with the proper amount of hedging words in place. Like other high-data reports such as the Verizon DBIR, this one included alternative explanations and caveats in many places. Follow the chain of logic and look at all the data presented before you start to poke holes in it.<br />
<br />
Now let's think about some of the assumptions, either implicit or explicit, in the report's assertions. We can call out some alternatives, whether they're realistically possible or not. In no particular order:<br />
<br />
<i>Because of the scale of its operations, APT1 must be centrally organized and funded. </i><br />
Alternatives: it could be organized, but not from within China; it could be loosely affiliated without being centrally so; it could be using individually contributed resources.<br />
<br />
<i>Only the Chinese government has the resources for such an operation.</i><br />
Alternatives: a very large company or extremely wealthy individual could provide the necessary resources; a different government could be providing them. <br />
<br />
<i>An operation that large in scale could not go unnoticed by the Chinese government; therefore it would be operating at least with approval, if not support.</i><br />
Alternatives: it could be an operation outside of China, faking very large numbers of China-based IP blocks, domain registrations, and other indicators of origin (such as phone numbers); the Chinese government might not know about it, might be unable to stop it, or might simply not care to.<br />
<br />
<i>Bad English speakers who use simplified Chinese keyboard layout settings must be native Chinese speakers.</i><br />
Alternatives: the APT1 group is very good at planting false flags using Chinese speakers (native or not) and using bad English.<br />
<br />
<i>Because the three revealed personas appear to be working together and sharing resources in the same geographic location, they must be working for Unit 61398.</i><br />
Alternatives: they could simply be three people in a social group or other organization that is also located in the region, or is using the same false flags.<br />
<br />
<i>Because this linked activity has been going on for so many years (with domain registrations starting as early as 2004), it must be using the same people, the same resources and be supported by the same central organization.</i><br />
Alternatives: it could be the same people over time, but not affiliated with the same organization; it could be different individuals who "take the reins" and continue the same general activity.<br />
<br />
<i>Because APT1 is attacking industries listed in China's strategic five-year-plan, it must be furthering China's goals.</i><br />
Alternatives: those industries could be on the strategic lists of a lot of countries, and the match with China could be a coincidence.<br />
<br />
These are just the ones coming off the top of my head. Now, let's take a step back:<br />
<br />
Would any one of these alternatives, if it proved to be correct, torpedo all the other assumptions? Or would a large number of all the alternatives have to be correct, and fit all the available evidence?<br />
<br />
In other words, how probable do these alternatives need to be in order to supplant all these assumptions?<br />
<br />
(This is my amateur version of an <a href="https://www.cia.gov/library/center-for-the-study-of-intelligence/csi-publications/books-and-monographs/psychology-of-intelligence-analysis/art11.html" target="_blank">ACH</a>, because I am not in that line of work.)<br />
<br />
Now, bear in mind that the evidence laid out in the report may not be <u>all</u> the evidence; it might just be the parts that Mandiant feels are safe to disclose. So the evidence may be even more compelling than we know. Working with what we're given, it appears to me that unless you assume a large-scale conspiracy or an equally well-resourced organization that can fake being sourced in China extremely well (without the knowledge or cooperation of the Chinese government), the preponderance of the evidence points most simply to Mandiant's conclusion. To put it another way, an alternative conclusion would have to be supported by a larger number of less probable, more complicated scenarios that would all have to fit themselves to the facts even better than the China theory does.<br />
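To make that Occam's-razor arithmetic concrete, here's a toy sketch. The probabilities are invented purely for illustration (they are not estimates drawn from the report); the point is just that a chain of independently unlikely alternatives must be multiplied together, and the product shrinks fast compared to one simple explanation:

```python
# Toy illustration with invented probabilities -- these numbers are
# placeholders for the sake of the arithmetic, not real estimates.

# One simple hypothesis: a long-running, state-tolerated group in China.
p_simple = 0.6

# The alternative explanation requires several independent "false flag"
# conditions to all hold at once.
alternatives = {
    "non-China actor faking IP blocks, registrations, phone numbers": 0.1,
    "years of planted Chinese-language false flags with no slips":    0.1,
    "Chinese government entirely unaware at that scale":              0.2,
    "five-year-plan industry overlap is pure coincidence":            0.3,
}

# If the conditions are roughly independent, their joint probability
# is the product of the individual probabilities.
p_alternative = 1.0
for condition, p in alternatives.items():
    p_alternative *= p

print(f"simple hypothesis:     {p_simple:.4f}")
print(f"combined alternatives: {p_alternative:.4f}")
```

Even with each alternative given a generous one-in-ten or better chance, the conjunction works out to roughly 0.0006: three orders of magnitude less probable than the single hypothesis. That's the shape of the argument, regardless of what numbers you plug in.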
<br />
It could be that the evidence in the report is either partially or wholly incorrect, or there's a bunch of evidence that contradicts it (and supports the alternative conclusions) that we just don't see. What other evidence would need to show up to do the trick? And how probable is <u>that</u>?<br />
<br />
So it's kind of like insisting that the Moon landing was faked -- it would require a perfect conspiracy of silence from hundreds of people over decades, and <a href="https://www.youtube.com/watch?v=_loUDS4c3Cs" target="_blank">more sophisticated special effects technology than is known to have existed at the time</a>. Sure, you could come up with an alternative conclusion that fits the same evidence, but you'd have to work a lot harder at it.<br />
<br />
Would you rather believe that there's an extremely clever and powerful organization out there that is managing to look over years as though it's sourced in China -- without making any mistakes to give it away -- and that the Chinese government can't do anything about? Or would you rather believe that there's a long-term, Chinese government-approved hacking group that isn't perfect and has left quite a few clues behind?<br />
<br />
There will never be an airtight case one way or the other, but these things aren't binary. From what Mandiant has presented, the simplest explanation is the one it's offering. It's politically explosive, of course, and that's why belief comes into play. But if you have to do more work to deny something than to accept it, you might want to reconsider your chain of logic.<br />
<br />
<br />
<br />Wendy Natherhttp://www.blogger.com/profile/01481433737997124919noreply@blogger.comtag:blogger.com,1999:blog-5041863148829165266.post-68872002540374031592013-02-09T05:19:00.001-08:002013-02-09T05:19:27.875-08:00All up in your bitness.We knew it would happen: another security vendor gets hit: this time Bit9, which was <a href="https://blog.bit9.com/2013/02/08/bit9-and-our-customers-security/" target="_blank">admirably quick to disclose</a> after it got in touch with its affected customers (and that's the order it <u>should</u> have happened in, folks). We also knew this would follow: the piling-on (I think <a href="http://blogs.bromium.com/2013/02/08/the-absolute-impossibility-of-white-listing/" target="_blank">Bromium wins the ambulance-chasing award</a> this time around). Which is only fair, in that <a href="https://blog.bit9.com/2013/02/08/its-the-same-old-song-antivirus-cant-stop-advanced-threats/" target="_blank">everyone is tempted to pile on </a>every time there's a failure that is linked to a competitor.<br />
<br />
But there's a big gap between those who are all pointing and laughing and those who sympathize. It falls along very clear lines: those who have spent time in defense and those who only know offense; those who enjoy pointing out flaws and claiming to have the answers, and those who have had to clean up after the proof that there <u>are</u> no complete answers.<br />
<br />
Guess what? If your "solution" needs to be 100% implemented to be successful, then it's never going to solve the problem. Because in the real world, there's no such thing as 100%. <br />
<br />
Security is an unrelenting business, one that you can never prove is done adequately. You'll never be finished, and you can never know if you can even take a break. And it's never fully appreciated by the people who make a living based on that reality: the vulnerability finders and the "solution" providers.<br />
<br />
You may walk into an enterprise as a consultant, and you may be focused on addressing one particular problem (let's say, implementing monitoring). You may just assess the current situation, prescribe some controls, wish the customer luck, and be on your way to the next gig. Or you might even stick around to see that one project to its "completion" -- in which case, you'll be there for months or years. But unless you spend a year in the captain's chair, trying to cover every possible contingency with fallible humans, limited budget and "helpful" researchers coming up with new ways that your systems are attackable, <u>you don't understand a thing about real defense.</u><br />
<br />
If you are playing just one position, you don't understand the whole game.<br />
<br />
Defense is frustrating; it's boring; it's tedious. It's not sexy when you are sitting in a boardroom with an auditor, or when you are looking at a list of scanner findings and trying to manage the year-long projects to fix them. You need an accountant's attention to detail, the skills of a master social engineer, the diagnostic skills of a doctor, and the patience of a saint. In short, you need to know everything that every possible attacker does, and you need to block all of it, all the time, immediately, using resources that you will never completely control.<br />
<br />
So if you're one of the ones scolding a breach victim, you're just displaying your own ignorance of the reality of security in front of those who know better. Think about that for a while, before you're tempted to pile on.<br />
<br />
<br />Wendy Natherhttp://www.blogger.com/profile/01481433737997124919noreply@blogger.comtag:blogger.com,1999:blog-5041863148829165266.post-88887322369793280412013-01-31T17:09:00.001-08:002013-01-31T17:09:34.870-08:00Training for RSAC.Yes, I'm getting ready for the RSA Conference next month in San Francisco. RSA is a particularly brutal week for those in my line of work; thus far I'm meeting with 23 vendors, most of them in 30-minute sessions, and that's not counting the time I'll be walking the exhibit hall, trying to meet with more. I have three panels and one talk to give during the week. We won't mention all the vendor events in the evenings, both public and private, the side conferences taking place, or the fun gatherings like the Security Bloggers' Meetup.<br />
<br />
In order to get ready for this challenge, I've been doing the following exercises, which you may want to try as well:<br />
<br />
<ul>
<li>Walk three miles in heels; drink two cocktails and then walk another mile without spraining an ankle.</li>
<li>Stand for two hours in one spot, holding a tiny napkin full of mini-quiches and seared tuna canap<span class="st">é</span>s in one hand, a glass of Pinot Noir in the other, and handing out business cards with the other other hand.</li>
<li>Go to a public restroom and practice removing stains from the aforementioned wine and canap<span class="st">é</span>s from a white shirt.</li>
<li>Do wind sprints through a hallway full of high school students to practice the art of the two-minute break between one-on-one meetings.</li>
<li>Speed-read through books on calculus, knitting, quantum mechanics, teleology, bread baking, and constitutional law to get ready for reams of vendor brochures and white papers.</li>
<li>Practice lip-reading in a dark room by the light of strobes and lasers. </li>
<li>Memorize 80 names per day out of the phone book.</li>
<li>Practice listening earnestly to comedian monologues without cracking a smile or giggling. <br /><br />and finally ...<br /></li>
<li>Go geocaching in Costco to practice finding the one vendor at RSA that is not claiming to do 'big data' or 'analytics.'</li>
</ul>
See you there.<br />
<br />
<br />Wendy Natherhttp://www.blogger.com/profile/01481433737997124919noreply@blogger.comtag:blogger.com,1999:blog-5041863148829165266.post-75248842027410908632012-12-29T09:04:00.001-08:002012-12-29T09:04:53.048-08:00Levelling up in the real world.Here's a great post from Victor Wong on <a href="http://lifehacker.com/5971711/what-they-dont-tell-you-about-promotions" target="_blank">What They Don't Tell You About Promotions</a>. <br />
<br />
All of his points are so, so true -- and I thought I'd add some more from my own experiences and perspective. There are a lot of misconceptions out there about what entitles you to a promotion, so let me get those out of the way first:<br />
<br />
<b>What does not get you promoted:</b><br />
<ul>
<li>Being the oldest person on your team. (Really, some people seem to believe this.) It's not about how old you are; management or senior positions are not about babysitting other people.</li>
<li>Being in your position the longest. Your position does not expire after a certain date, and you don't level up just by doing your job.</li>
<li>Doing your job better than everyone else on the team. It's not about how well you do what's expected of you; it's about what you do above and beyond your job description.</li>
<li>Needing the money. Sorry, but that is not a sufficient reason for your boss to actually give you more money, much less move you to another position. You have to prove that you're worth it. </li>
<li>Working the hardest on your team. Again, it's not about fulfilling your current responsibilities. If you are working much harder than others, your boss might be looking at you and thinking, "This person doesn't know when to stop." Or your boss might decide, "We really need this person to keep the group afloat, so we're not going to change her job." It might even be, "This person has to work harder to do the same job as everyone else -- he's not as competent."</li>
<li>Taking a course or two to prepare for your next level. Courses are nice, but there's no guarantee that you can actually execute on what you've learned. You need to prove that you can do the next higher job by actually doing it. Think of a promotion as an acknowledgment of what you've already been doing rather than a change into a brand-new set of responsibilities that you haven't done yet.</li>
</ul>
<br />
<b>Here are some other things that will keep you from getting promoted:</b><br />
<ul>
<li>Not playing well with others. If you upset people inside or outside the team, it creates extra work for your boss, who has to smooth things over. When you create extra work for your boss, you are totally not getting rewarded for it. </li>
<li>Taking a negative view of things. If you complain about other people or your workload, or talk about customers as if they're idiots, you're not going to level up. Nobody likes a pill.</li>
<li>Having no helpful ideas of your own. Your boss wants a problem-solver who can be trusted to do it the right way without creating other problems (see above). Just reporting on problems isn't enough.</li>
<li>Not seeing the big picture. If you are thinking only about your current job or your current team, you're not thinking big enough. You need to prove that you can approach things from your boss's perspective (or that of your boss's boss). Even better, you should be coming up with ideas that they haven't (but that they like). </li>
<li>Not doing the job your boss wants you to do. You may be the most brilliant person in the world who is going to change the whole industry; you may think you have the right answers (and in some cases that might even be true). But people who haven't managed teams have no idea how annoying it is to have an employee who won't just do his fricking job because he thinks he knows better. If you don't agree with your boss on how to do things, <u>go find another boss</u>. You'll be doing everyone a favor.</li>
</ul>
<br />
And finally, here's one that not a lot of people think about:<br />
<br />
Being irreplaceable. Yes, <b><u>being irreplaceable will keep you from being promoted</u></b>. If you are so key to operations that you can't take a vacation or sick day without things falling apart, you are not going to get moved to a different position so that things can fall apart full-time. If you are already a manager, your job is to make sure your team has the skills and empowerment to take care of anything that comes up in your absence. A succession plan is vitally important in every organization. Your own boss will feel much better knowing that you have a stable and successful team, and knowing that you're not endangering operations by indulging your ego's need to feel special.<br />
<br />
When you are looking out for the welfare of your organization instead of focusing on what you can get for yourself, that's when you'll be given the chance to do more and own more. <br />
<br />
<br />Wendy Natherhttp://www.blogger.com/profile/01481433737997124919noreply@blogger.com