Tuesday, July 13, 2010

i see a standards organization

ed moyle prefaced his recent post about how AMTSO is perceived by the industry by saying that he really didn't want to continue talking about this subject (he has, after all, penned a number of posts about AMTSO recently). having seen this blog go more or less dark in the past, i have no qualms about following whatever path my interests and creativity take. if the subject doesn't bore me, i see no reason not to write about it.

and the subject of how AMTSO is perceived has a few interesting bits to it, i think. first and foremost, while david harley may concede that the use of the word "standard" in AMTSO's name might mislead people, i think the use of the word "standard" is entirely appropriate. if people are misled into thinking AMTSO is anything like ISO, it is actually ISO and organizations like it that have misled people into thinking enforcement has anything to do with standards. a few pertinent definitions for standard:
noun:  a basis for comparison; a reference point against which other things can be evaluated
noun:  the ideal in terms of which something can be judged ("They live by the standards of their community")
developing a basis upon which anti-malware tests can be evaluated or an ideal which testers should strive for is precisely what AMTSO is about. it is not about enforcement - following the standards is entirely voluntary. if enforcement were on the table at all then testers wouldn't participate for 2 reasons:
  1. many testing organizations were (and perhaps still are) too far away from the ideal. signing up for obligations at a time when one cannot meet them makes little or no sense. with voluntary standards the obligation, instead, is to keep improving and moving closer to the ideal.
  2. enforcement would mean that the standards were actually rules, and nobody thinks vendors should be involved in making rules for testers.
but apparently, as ed moyle has pointed out, the security industry perceives AMTSO as something different from what it actually is. when you get right down to it, when people's perceptions don't match reality it's because they lack knowledge of that reality. if AMTSO were being purposefully deceptive or secretive (basically acting to deprive people of that knowledge) then one might legitimately blame them for the false perception problem. ed doubts that AMTSO is to blame, and so do i (mostly), but then who is to blame? let's look at an example from ed's own post. near the end he constructs some hypothetical situations where a vendor might challenge a test and then have the AMTSO review board composed entirely of employees of that vendor. he asks where the line is - i can tell him where it is: it's in the 'fine' manual. the document describing the analysis of reviews process states on its second page that review committee members employed by the challenging party must recuse themselves from participating in the analysis. this document is freely available, easy to find, well labeled, and not hidden in any way. anyone who wants to know where the line is drawn can find out by downloading the document and reading it - so when they don't, when they make assumptions or treat it like a question that needs answering instead of one that's already been answered, that's really on them.

page three of that same document has something for ed as well. he asks the following:
So if it’s not the role of AMTSO to standardize, it’s also clearly not their role to accredit.  But aren’t they doing just that?
the answer is no, they are not. AMTSO makes no judgments or endorsements of reviewers or products - for an AMTSO member to suggest otherwise is considered misrepresentation. the analysis of reviews is just that: analysis. the output serves as an interpretive aid for individuals wishing to know how close to the ideal a particular review came. the review analysis that ed looked at as an example (the analysis of NSS' review) was actually quite close to the ideal (though apparently not close enough for NSS' liking). only 2 real problems were found, and members have described them as 'minor'. in the analysis itself, the explanation for the first one even goes so far as to say that NSS' test is still better than most out there in spite of the problem. ed moyle interprets this as a pass/fail sort of judgment, and i suppose in the strictest sense the NSS test did fail to reach the ideal, but it's hard to say the analysis is calling the test a failure when it clearly states the test is better than most out there.

of course, as an interpretive aid, the reader is free to pick and choose the ideals that are important to him/her - as ed does when he discusses testing features individually. the ideal that the testing industry is trying to move towards is whole product testing. the reason is that different products use very different technologies and thus have different ways of stopping different threats, and it's especially difficult for testers to devise testing methodologies that aren't biased in favour of certain technologies. if i test feature X and product A blocks 5 things but product B only blocks 3, how can i show that in a test of feature Y product B blocks everything it missed in the feature X test while product A blocks nothing because it doesn't even have feature Y? and if i can't show that, is what i'm presenting really relevant? isn't the important thing that B blocks the threats in one way or another? does it really matter whether it uses feature X or feature Y to do it? current opinion in the anti-malware community is no, it shouldn't matter, which is why whole product testing is becoming the standard. NSS themselves bang the drum of whole product testing pretty loudly, so it seems ironic to me that they failed to test the whole product (seemingly testing everything but the spam filter).

of course, as interpretive aids go, even AMTSO's analysis isn't necessarily perfect. i say this because point 7 of the NSS review analysis is interpreted one way by ed and a different way by me. i don't know if ed's interpretation is correct or if the analysis is implicitly assuming domain knowledge of NSS' practices. ed quotes the following from the analysis:
Does the conclusion reflect the stated purpose? No. The report’s Executive Summary states that test’s purpose was to determine the protection of the products tested against socially-engineered malware only. Later in the report (Section 4 -product assessments) it says: “Products that earn a caution rating from NSS Labs should not be short-listed or renewed.” This is clearly a conclusion that you can’t make out of the detection for socially‐engineered malware only, as the products have other layers of protection that the test did not evaluate.
ed's interpretation is that the conclusion supposedly didn't reflect the stated purpose simply because NSS failed to include spam filters in their test. my interpretation differs, in part because i know that NSS breaks malware down into 2 categories, and "socially engineered malware" is only one of them - so making purchasing recommendations on the basis of the socially engineered malware test results alone seems like a premature conclusion to me. i suspect that the spam filters were only one of many features that weren't tested, since the other malware category NSS recognizes involves drive-by downloads and other sorts of malware that don't involve user intervention. but clearly, someone who doesn't know what i know may interpret the meaning of the analysis in an entirely different way than i did.

i understand why ed feels that perception is important, but the key to making perception match reality is knowledge and understanding, and there's only so much anyone can do to impart those things to others. people have to be willing to look past their preconceptions and actually acquire new knowledge and understanding.

4 comments:

David Harley said...

It seems to me that Ed Moyle and Kevin Townsend, whether or not you agree with their respective points about malware generation and user engagement, are basically making the same point, namely that it doesn't matter what AMTSO does, only what people think it does. If people are really as shallow as some of the reporting, maybe we are wasting our time.

kurt wismer said...

well, umm, this is an interesting position to be in, considering what i know about the upcoming post.

i don't think AMTSO is wasting its time, but i do think there's some obvious room for improvement.

David Harley said...

I doubt if anyone in AMTSO disagrees with the view that there's room for improvement.

kurt wismer said...

well then hopefully you won't take offense at tonight's post.