Saturday, February 20, 2016
We are facing a big problem, one that's hidden behind the more prominent issues of cybercrime, encryption wars, and vulnerability disclosure. It's endemic to our digital infrastructure, and it's going to get worse over time. And it's so complex that I'm not sure I can do it justice in a blog post. I've been talking about it here:
https://www.youtube.com/watch?v=lU8_S0V_zOQ (B-Sides London)
https://www.youtube.com/watch?v=mKnKQv-0cwE (HouSecCon)
In a nutshell, it has to do with digital delegation.
What do I mean by that? I mean any situation where an online user needs to be able to delegate all or part of their access or capabilities to someone else -- whether temporarily, intermittently, or permanently. Most identity and access management models only deal with delegation in an enterprise context: Alice needs to go on PTO, and Bob needs to cover for her during that time, without anyone confusing the two people for the purpose of accountability.
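To make that concrete, here is a minimal sketch (in Python, with entirely hypothetical names -- no real IAM product implied) of what a consumer-grade delegation grant might look like: the delegator and the delegate stay distinct identities, the grant is scoped and time-bounded, and every authorization check knows exactly who is acting.

```python
from dataclasses import dataclass
from datetime import datetime
from enum import Enum, auto

class Scope(Enum):
    VIEW = auto()        # read-only: balances, statements
    TRANSACT = auto()    # act on the owner's behalf: pay bills
    ADMINISTER = auto()  # manage the account itself: contacts, credentials

@dataclass
class DelegationGrant:
    delegator: str               # the account owner (Alice)
    delegate: str                # the person covering for her (Bob)
    scopes: frozenset[Scope]
    not_before: datetime
    not_after: datetime | None = None  # None = until explicitly revoked
    revoked: bool = False

    def permits(self, actor: str, scope: Scope, at: datetime) -> bool:
        # Actions are authorized -- and logged -- against the delegate's own
        # identity, so Alice and Bob are never confused for accountability.
        return (not self.revoked
                and actor == self.delegate
                and scope in self.scopes
                and self.not_before <= at
                and (self.not_after is None or at <= self.not_after))
```

The point of the design: Bob never sees Alice's password or her 2FA device. He gets a grant that names him, and the grant -- not the person -- is what expires or gets revoked. That covers the enterprise PTO case well enough.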
But real life is more complicated than that, and it involves legal protections as well. Take the reasonably simple example of a minor child. A parent or legal guardian has the authority to administer many things for a child, but the design of online accounts is often muddled. Which signups does the parent have to do, and which ones does the parent simply approve at some point in the workflow? If a registration is asking for a date of birth, whose date of birth are we talking about? And what happens when the child reaches the age of legal majority? Does the parent suddenly have to turn over access to a login, or does the parent drop out of the approval workflow?
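None of this is settled anywhere, but as a thought experiment, a registration flow could at least record whose date of birth it collected, and let the parent's authority shift and lapse automatically. A rough sketch, with the age thresholds as placeholder assumptions (they vary by jurisdiction):

```python
from datetime import date

AGE_OF_MAJORITY = 18   # assumption; varies by jurisdiction
SELF_SERVICE_AGE = 13  # assumption; when the child starts acting directly

def guardian_role(child_dob: date, today: date) -> str:
    # Because the birth date is explicitly the *child's*, the system can
    # answer "whose date of birth?" and knows when the rules change.
    age = today.year - child_dob.year - (
        (today.month, today.day) < (child_dob.month, child_dob.day))
    if age < SELF_SERVICE_AGE:
        return "guardian-administers"  # parent does the signups
    if age < AGE_OF_MAJORITY:
        return "guardian-approves"     # child acts, parent approves steps
    return "none"                      # authority lapses; no sudden handover
```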
At the other end of the spectrum, we have the problem of what happens to online accounts after the owner dies. We haven't worked that out well yet -- there are good talks out there by people who have had to deal with it personally -- but death is a pretty permanent condition, as well as a binary one. What about temporary or intermittent delegation?
If you were incapacitated today for a month -- let's say, due to the proverbial bus accident -- who would be able to pay your online bills? A friend can't just go to your bank and say, "Yeah, I just need to be set up as a secondary on this account so I can get into billpay." No, if you were conscious, you would probably just give your friend your password. And if you're using 2FA with that account on your phone (as everyone should do, right?), you'd have to hand over your phone -- oh yes, and the passcode for the phone too. Or would you let your friend change over the 2FA registration to their phone for a while, to make it easier?
That's just one scenario. The harder one, which I've had to live through twice now, is the declining parent who has good days and bad days, and doesn't want to give up control of their accounts. They may be so impaired that they make mistakes with those accounts, or forget how to use the sites, but they won't simply sign everything over to their child (and in some cases, they may already be so disabled that they can't take the legal steps to sign things over anyway).
Dealing with an incapacitated loved one is heartrending. You want to allow them as much autonomy as possible, while protecting them from themselves. Above all, you don't want to have to get them declared legally incapacitated; that will ruin your relationship forever. You simply want to be able to help them out. "Hey Dad, do you just want me to log in and take care of this for you? I know you're tired today, but this bill is due." And in the future, they may have good days where they can go back to doing it themselves; you don't want to have taken over their logins, changed passwords, set up your own phone as the recovery number, and so on. There has to be a better middle ground, between impersonation (which can trigger fraud alerts) and the permanent, legal takeover.
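What might that middle ground look like? Purely as a sketch -- every name here is invented -- imagine a short-lived "assist session" that the owner approves on a good day: it acts under the helper's own identity, carries only the scopes needed to pay the bill, and expires on its own. No password changes, no recovery-number swaps.

```python
from datetime import datetime, timedelta, timezone

def open_assist_session(owner_approved: bool, helper_id: str,
                        duration: timedelta = timedelta(hours=2)):
    # The owner keeps autonomy: nothing happens without an explicit yes,
    # and on their good days there is simply no session in the way.
    if not owner_approved:
        return None
    return {
        "acting_as": helper_id,  # logged as the helper, not the owner,
                                 # so it never looks like impersonation
        "scopes": ["VIEW", "TRANSACT"],  # enough to pay the bill, no more
        "expires": datetime.now(timezone.utc) + duration,
    }
```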
So here's the story behind the title of my talks, and this post. I had to take over my father's Gmail account when he had a stroke, so that I could get into his accounts and reset the passwords, so that I could pay my parents' bills (as well as watch those accounts for fraud). Later, I had to take over my mother's Gmail account, and I set up my personal (non-Gmail) email address as a secondary on her Gmail.
What I found out was that Google helpfully associates the two addresses when you do that, so whenever someone using Gmail tried to mail me at my personal email, Google would say brightly, "Oh, you mean [my mother]!" So whenever anyone mailed me using Gmail -- business associates, friends, merchants, etc. -- the messages would be sent to me, but under my mother's name. That's pretty creepy when it happens.
I went in and removed my email address as the secondary, but it didn't fix the problem. I am now permanently associated with my (now deceased) mother as far as Google is concerned. I reported this to them, but they did not consider this to be a security or privacy issue, so there you are. (I don't want to delete my parents' Gmail accounts, because I don't want an impostor popping up in the future, and there may still be alerts coming in from other accounts I don't know about.)
The bottom line here is that we need a massive overhaul in the design of consumer-facing systems that can take into account different delegation cases. They need to handle authentication, re-certification, and legal proxies, and they need to understand non-binary conditions, while at the same time continuing to protect against account takeovers and fraud.
Right now, this is not a crisis, since the majority of people who are becoming incapacitated did not set up much in the way of online accounts. But as the tech-savvier baby boomers age, it is going to get much worse; I have hundreds of accounts out there dating back decades, some of which I'm sure I've forgotten about and never entered into a password manager. If my children had to take over my business affairs, there would be no way for them to do it other than online (they don't know how to write a check, and all my statements arrive electronically anyway).
Luckily, a few companies out there are starting to become aware of the issue and offer emergency access functionality. It's a start. But we need global, consistent mechanisms for doing this, and they need to be set up at the point of initial registration, not months after someone has managed to get a legal power of attorney signed and notarized, and has had to fax it to fifteen different entities.
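Capturing that intent at sign-up could be as simple as one more optional field. Another hypothetical sketch -- the field names and the waiting period are assumptions, not any real provider's API:

```python
from dataclasses import dataclass

@dataclass
class EmergencyContact:
    name: str
    verified_email: str
    wait_days: int = 30   # contact requests access, then waits; the owner
                          # is notified during the window and can veto

@dataclass
class Registration:
    owner_email: str
    dob_belongs_to: str   # "owner" or "dependent" -- say whose it is
    emergency_contact: EmergencyContact | None = None
```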
I don't have a ready answer for this, except that a bunch of us need to get to work on it. Our digital future as a society depends on supporting our real-world life cycles.
Tuesday, December 8, 2015
A matter of taste.
I've figured it out: The word "cyber" is like garlic.
For most palates, just a bit of cyber in anything is enough. It makes it all a bit more interesting.
Some people love cyber so much that they put it in everything, in massive amounts (chicken with 40 cloves of cyber, for example). Others are so sensitive to cyber that they can't stand the faintest whiff of it.
If you've been raised in a culture that uses cyber a lot, you won't realize how it comes across to those who haven't grown up with it. People will pull away from you with horrified or disgusted looks on their faces and you won't know why. When you've been steeping in cyber, you don't notice the smell any more.
There's even a certain part of the United States that just loves its cyber. It puts on a regular cyber festival, where you can get cyber flavor in everything. I've never been to it myself, but I can tell you right now that I will never accept cyber-ice cream.
Some cultures love cyber, and some don't, but if you're part of a couple and only one of you has ingested cyber that day, you're going to have compatibility problems later on that evening.
And one final thought: if you feed your toddler too many spinach pierogi with cyber, she's going to be exhaling that stench for days until it clears out of her little body. Trust me on this one.
Wednesday, November 25, 2015
Why the airplane analogy doesn't fly.
Don't get me wrong — I love Trey Ford. He is one of the most inspiring infosec pros I know. He's smart, creative, full of mind-blowing ideas, and has energy to spare. And I love his talk at SecTor about what we can learn about information sharing from the aviation industry.
There's just one problem: aviation isn't all that comparable to cybersecurity.
Imagine that instead of flying the plane herself, a pilot had to convince all the passengers on the flight, EVERY flight, to do the flying together. And many of them aren't good at it, and don't care; they just want to sleep or watch videos or whatever.
The passengers change all the time, so you can't keep them educated on what to do. Depending on the size of the plane, there may be tens or hundreds of thousands of passengers helping with the flying. Instead of a finite maintenance crew that's under the direct control of the airline, there are dozens or thousands of different crews from third-party companies, all doing their bits (or not).
The aircraft types range into the thousands, dating back to Kitty Hawk and up to the newest models, and most of them have at least some custom alterations that can be changed between flights, so the various manufacturers won't take responsibility for anything they didn't add. Remember, too, that each of those alterations was probably made for a good reason — or at least, a reason that was good at the time. (That's a huge part of what we don't know about breaches today: we sometimes know the chain of events and contributing vulnerabilities, but we make rash judgments about why they happened without knowing the full story.)
The airlines all have different ideas on how they should equip their planes, so some pilots have one of everything new and shiny, and others have to make do with duct tape and bags of pretzels. (And some airlines are just now thinking that maybe having a dedicated pilot is a good idea.)
Oh, and did I mention? The weather is actively trying to disrupt your flight, usually in a way that you won't notice until it's too late. (Although you still have to worry about hacktivist storm cells that want you to look bad.)
All of these differences highlight our challenge in security: because everything is so complicated, so flexible, and so NOT under our individual control, we can easily blame someone else for their breaches because they did things so differently. And the pilot is nominally in charge, so that's where we concentrate the attention, but even the pilot can't get the toddler in 32B to stop screaming and fly straight. I'm not even going to mention the armchair aviation enthusiasts who sit near the runway with binoculars and lasers and provide "helpful" critique. (Oops, I guess that slipped out.)
So how can we still make use of what we've learned from information sharing in aviation? As Trey says, we can at least collect data now in a way that we may be able and willing to share later. If only we had a black box that collected vital information about a breach in a way that didn't expose the inner workings of the business, or those custom-built additions. If only we could sanitize the data in a way that communicated the important lessons ("don't combine these tray tables with that boarding process, and especially don't add a pilot over 6 feet tall without upgrading the landing gear") but defanged our industry's reflexive attempts at a certain kind of blame ("how stupid was that? We'd never do that!").
When I consider all this, sometimes I despair that we'll ever figure it out. But with positive thinkers like Trey, we may just have a chance.
Monday, September 7, 2015
When your risk profile is different.
Ready for some (more) unfounded speculation?
Both people and organizations tend to want to keep their data within a circle of trust; it's why there has been (and continues to be) resistance to putting sensitive data in the cloud. It's a function of human nature to keep things close -- which is why people still keep files on their desktops or laptops, use USB drives, and run servers at home. You keep your treasures in an environment that you know best, and where you feel you have the most control over them.
According to the Washington Post, President Bill Clinton had a personal email server at home; Hillary Clinton had a server that had been in use during her first presidential campaign in 2008, and that same server was then set up for her at home when she took the Secretary of State post.
Besides this controversy with her home email server (and yes, I commented on that on CNN, but they must not have liked most of what I had to say), I noticed the other day that apparently Caroline Kennedy had been using personal email as well for State Department business. This suggests to me that they may have had a reason in common for doing this, one that hasn't been highlighted so far:
They both have a very different risk profile from most public officials.
When you're a celebrity -- independent of the position you currently hold -- your threat modeling has to include just about everyone. Any friends you have, any staff members you hire, could turn on you at any time for some perceived advantage. Now, Hillary may have known that the State Department was bad at securing its own systems, but I don't think that was it. I think she just couldn't trust staffers who worked for the agency and not for her personally. Any of them might try to access her email for political or personal reasons -- and let's face it: she's spent many, many years being embattled. The same would go for Caroline Kennedy, as well as anyone else who was famous before they took office.
In other words, their threat model holds colleagues to be a higher risk than hackers.
If you think this is surprising, you haven't been inside the minds of most non-security people. They have seen and experienced many more threats on a personal level than they have The Notorious A.P.T, so they will defend against the threat they believe in more.
None of us really knows how secure the server ended up being (although it looks like Hurricane Sandy caused natural disasters to become a more prominent part of the threat model, which is why they finally moved it to a provider with an actual data center), so I can't comment on that. Nor am I in any position to comment on the legal or classification issues, since those seem to be changing depending on who's got the microphone at any given time. But from a threat modeling perspective, I can absolutely understand why people want to hold their staff close and their data closer.
Oh, and by the way: if you can't view things from other people's perspectives, you're not going to be very good at threat modeling.
Saturday, May 16, 2015
Lessons in grown-up security.
Okay, so for the sake of those who can't say anything, I feel I have to say something.
Remember how much you hate people talking about things they don't understand? So do I. And let's face it: if you're not on the inside of an organization, you don't know 100% of what's going on there. Oftentimes it's less than 50%. And if it has to do with security, the percentage can drop as low as 10%.
The hysteria around Chris Roberts supposedly hacking a plane and "making it go sideways" has reached an all-time high. Which isn't to say it couldn't go higher, because media. But let's go through the versions here:
There's what he told people he did.
There's what they interpreted from what he said.
There's what he thought he did.
There's what he actually did.
Then there's the usual Telephone game of people misinterpreting, mis-reporting, and deliberately twisting all those things when they hear them second- and third-hand.
But one fact remains: there are people who actually know what's possible to do, and they ain't talking. Nor will they. Even if Roberts was talking complete bullshit, nobody on the inside is going to step forward and say it publicly. So in this case, silence does not equal assent.
We don't know whether the aircraft manufacturer already has experts doing pentesting, and they don't need any more, thankyouverymuch. Just because they're ignoring your reports doesn't mean they don't already know about what you think you're trying to say. They don't actually owe you an answer: "No, you didn't really get through, but if you had done THIS instead ..." Just because you decide to walk onto the court, it doesn't mean you get to be a player.
We don't know why United decided to come out with a bug bounty program, although it's mighty responsible of them NOT to encourage randoms to try hacking the avionics. Those who are complaining that it's missing from the bug bounty program are completely clueless in that regard, and have probably never been personally responsible for anything more consequential than a runaway shopping cart.
There may be no truth at all to what the FBI claims Roberts did, and they're just prosecuting him because letting him go free would send the wrong message to other juvenile delinquents out there.
The bottom line is, if you're not actively working WITH the company whose technology you're researching, then you're an adversary. So don't be surprised if they treat you like one. United has every right to say to Roberts, "You didn't actually do anything harmful, but you're a dick, so stay off our airplanes."
You can be a security researcher, but in the immortal, wise words of @wilw: Don't be a dick.
Friday, April 24, 2015
Achievement unlocked?
This week was Hell Week for analysts, otherwise known as Meet All The People, Inspect All The Things, otherwise known as the RSA Conference. Everything was going as expected: I made it through all the speaking engagements (at least one a day this time), spent a little time on the expo floor making a video with the awesome @j4vv4d, did the press interviews, and kissed all the hands and shook all the babies in 30-minute meeting slots.
I was heading over to the Security Bloggers' Meetup, wearing some really spectacular (if you'll pardon the pun) blinking-LED sunglasses that Javvad had given me, and I decided to leave them on for the short walk across the street to Jillian's; I figured they would look good in the dark bar.
All of a sudden, some male conference-goer walks by me, and in passing, he tells me, "There's a switch on the earpiece of the glasses, probably on the right, and you can turn them off that way so they won't run down the battery."
WTaF. Is this guy really mansplaining to me HOW TO OPERATE MY OWN SUNGLASSES?
Yes. Yes, he was.
Now, this is only the most harmless of micro-aggressions compared to what other women go through ("I want to talk to an engineer, not a booth lady"), but what most people don't understand is why we don't take people's heads off at the time. It's simple: you're so stunned, you don't think of the right words until much later. Imagine someone comes up to you out of the blue and says, "Hey buddy, you're wearing socks, we're going to have to ask you to leave." Completely on automatic, you might say, "Oh, okay, sorry about that," and start moving before the rest of your brain finishes processing the "What?" And many of us are trained to be polite first and foremost, so it's a reflex that has to be overcome.
So I said to the guy, "THANK YOU FOR EXPLAINING THAT TO ME. I WOULD NEVER HAVE FIGURED IT OUT BY MYSELF." (Blogger doesn't have a sarcasm font, but imagine my saying it in one.) And now I'm sure that this Derpasaurus Rex took that completely seriously and thought I was really thanking him. So I should have done better, but it did take a few more minutes for the incredulity to drain away, and then it was too late.
What causes this level of pea-brained sexism to happen? I don't normally encounter it, or at least not so that I'd notice. I'm neither young nor pretty, but I was wearing a skirt at the time, which I don't normally do. What thought process goes on to make someone decide that a middle-aged mother of two, minding her own business, urgently needs sunglasses instructions?
The best I can come up with is this: the guy was truly bothered by the sight of someone wearing blinking sunglasses (on top of the head) in daylight.
"That's wasteful. Oh, it's a woman. She must not know how to turn them off."
And it would never have occurred to him to go through the same thought process if it had been a man. He would have assumed the man had a good reason for leaving them turned on, and it might still have bothered him in some Derpy Engineer Syndrome fashion, but he would have let it go.
Anyway, that was the one surreal moment from the conference this week. I think I'll put away the skirt for next year.
Tuesday, January 27, 2015
Looking logically at legislation.
There's a lot of fuss around the recent White House proposal to amend the Computer Fraud and Abuse Act, and some level-headed analysis of it. There's also a lot of defensive and emotional reaction to it ("ZOMG we're going to be illegal!").
First of all, everyone take a deep breath. The reason why proposed changes are made public is to invite comment. This is a really good time to step up and give constructive feedback, not just say how much it sucks (although a large enough uproar will be taken into account anyway). Try assuming that nobody is "out to get you" -- assume that they're just trying to do the right thing, as you would want them to do for you. Put yourself in their shoes: if you had to figure out how to protect citizens and infrastructure against criminal "cyber" activity, and do it legally, how would you do it?
There's another really important point here, beyond the one that if you don't like it, suggest something more reasonable. Jen Ellis talks about the challenge of doing just that in her great post. And I agree with Jen that an intent-based approach may be the most likely avenue to pursue, although proving intent can be difficult. I'm looking forward to seeing concrete suggestions from others. As I've pointed out before, writing robust legislation or administrative rules is a lot like writing secure code: you have to check for all the use and abuse cases, plan for future additions, and make it all stand on top of legacy code that has been around for decades and isn't likely to change. We have plenty of security people who should be able to do this.
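To stretch the analogy one line further, here's a toy illustration -- not anyone's production code -- of what "check for all the use and abuse cases" means in the secure-coding world the analogy borrows from:

```python
# The legitimate path is the easy part; the abuse cases are what you
# have to enumerate deliberately, before someone else finds them.
def transfer(balance: float, amount: float) -> float:
    if amount <= 0:
        raise ValueError("abuse case: zero or negative amount")
    if amount > balance:
        raise ValueError("abuse case: overdraft attempt")
    return balance - amount  # use case: the transfer people intended
```

Legislation needs the same discipline: the honest reading of a statute is easy to draft for, and the hostile readings are the ones that bite you later.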
If they can't -- if there's no way to distinguish between security researchers and criminals in a way that allows us to prosecute the latter without hurting the former -- then maybe that's a sign that some people should rethink their vocations. (It also explains why society at large can't tell the difference, and doesn't like security researchers.) After a certain point, it's irrational to insist on your right to take actions just like a criminal, force other people to figure out the difference, and not suffer any consequences. If you want to continue to do what you're doing, step up and help solve the real problem.