Wednesday, July 21, 2010

why the disclosure debate does in fact matter

some time ago dennis fisher published a post on threatpost arguing that the disclosure debate doesn't matter.
from the article:
In recent discussions I've had with both attackers and the folks on enterprise security staffs who are charged with stopping them, the common theme that emerged was this: Even if every vulnerability was "responsibly" disclosed from here on out, attackers would still be owning enterprises and consumers at will. A determined attacker (whatever that term means to you) doesn't need an 0-day and a two-week window of exposure before a patch is ready to get into a target network. All he needs is one weak spot. A six-year-old flaw in Internet Explorer or a careless employee using an open Wi-Fi hotspot is just as good as a brand-spanking-new hole in an Oracle database.
his argument seems to be that since a determined adversary is going to get in regardless of whether people practice full disclosure or responsible disclosure, the method of disclosure makes no difference. if they don't use the vulnerability in question then they'll just use something else.

what this basically boils down to in practice (whether dennis likes it or not) is 'since they're going to get in anyway, we might as well make it easy for them'. does that seem right to you? it doesn't to me. how about this - if it doesn't matter whether we keep a particular vulnerability out of the attacker's toolbox (since they'll just find some other way in), why does it matter whether we fix the vulnerability at all? whether the vulnerability is kept hidden or made non-existent, the effect should be the same - it doesn't get exploited - so if one of those is pointless, doesn't that mean the other is too?

this strikes me as the security equivalent of nihilism, which quite frankly is not conducive to progress. as such, i have an exercise for all those who agree with dennis' sentiments (that the disclosure debate doesn't matter), to rouse them from their apathy:
publicly post your full personal details, including your name, address, phone number, bank account number, credit card number, social security number, etc., etc.
after all, if someone really wants to steal your identity they're going to do it anyway, so you might as well hand the bad guys the tools they need on a silver platter, assume you're going to get pwned (in accordance with the defeatist mindset that has become so popular in security these days), and start the recovery process. just bend over and think warm thoughts.

"that's not the same thing" you say? well of course not. in one instance you're handing over tools that enable attackers to victimize somebody somewhere (often many somebodies all over the place) and in the other you're handing over tools that enable attackers to victimize YOU. clearly things are a lot different when it's your own neck on the line than when it's some nameless faceless mass of people who are out of sight and out of mind.

will responsible disclosure prevent attackers from victimizing people or organizations? in the most general sense, no. but there is definitely value in making things harder for them, and it should be blatantly obvious that there is no value in making things easier for them. the concept of not making the attacker's job easier is why there's a disclosure debate in the first place, and the fact that so many people still don't understand that is why it's still important.

2 comments:

Chester Wisniewski - Sophos said...

I agree with your sentiments and have another example that I take a lot of crap over. I work for a company whose identity is associated primarily with being an anti-virus firm.

A new zero-day like the current CPLINK exploit comes to light, and of course we write every kind of detection possible for our products to provide the best protection for our customers.

Then I go public and say that, in addition to using Sophos Anti-Virus, I have also implemented the work-around proposed by Microsoft: I have turned off all icon rendering on my Windows machines, both at home and at work.
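For reference, the registry side of that work-around amounts to something like the minimal Python sketch below. It is my own illustration rather than Microsoft's step-by-step instructions, and it assumes the HKEY_CLASSES_ROOT\lnkfile\shellex\IconHandler key their advisory points at; it needs to be run as an administrator, and it records the original value so the change can be undone once a patch ships:

import winreg

# the shortcut icon handler key named in Microsoft's advisory for this issue
KEY_PATH = r"lnkfile\shellex\IconHandler"

# read and record the current handler CLSID so the work-around can be
# reversed later (keep this value somewhere safe)
with winreg.OpenKey(winreg.HKEY_CLASSES_ROOT, KEY_PATH) as key:
    original, _ = winreg.QueryValueEx(key, "")
print("original icon handler value:", original)

# blank the (Default) value; after restarting explorer.exe (or rebooting),
# shortcuts are drawn with a generic icon instead of having their targets
# parsed for icon data
with winreg.OpenKey(winreg.HKEY_CLASSES_ROOT, KEY_PATH, 0,
                    winreg.KEY_SET_VALUE) as key:
    winreg.SetValueEx(key, "", 0, winreg.REG_SZ, "")

The visible cost is that every shortcut shows a plain generic icon, which is ugly, but it closes off the icon-rendering path the exploit abuses.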

The community comes back and says, "Oh, you don't even trust your own software! Why should I?" and I think this totally misses the point. Of course I believe we have done all that we can possibly do, but I also believe that a determined adversary is a formidable opponent. I believe I can secure myself to the maximum practical level, so why would I not implement layers of protection rather than depend on a single one that is likely not perfect?

This is not to say I don't have confidence in my product; it is to say that I believe I can be protected. Being protected means being proactive and utilizing all the tools at your disposal. It may still be possible to exploit my computer through other threats, unknown or even known but accidentally ignored, but why give criminals the satisfaction of using such an easy one to get in?

kurt wismer said...

"The community comes back and says "Oh, You don't even trust your own software! Why should I?" and I think this totally misses the point."

indeed - the community apparently has no concept of what defense in depth means: multiple different but overlapping controls. fault tolerance has always implied redundancy in one form or another, and fault tolerance in defense is no exception.