Thursday, March 26, 2009

failure rates and the law of truly large numbers

probably the best known metric for measuring the effectiveness of an anti-virus product is the detection rate... it's something that's been around for a long time and there are frequent attempts to measure it...

i'm not about to try to suggest something to supplant that metric, just so you know - the detection rate metric has served us well for many years, even if there's a bit of confusion over what actually constitutes a detection rate, and even though there's controversy over how it should be measured and who is qualified to do so...

no, although failure rates are much more straightforward to measure, i'm not about to suggest that they're better in any way - i only want to bring them up in order to provoke thought...

let's say you're an average user and you have anti-virus protecting your computer (possibly at many different levels, like a desktop client, an email gateway scanner run by your email provider, etc)... let's further say that on average your anti-virus product fails to prevent your computer from getting compromised once every 5 years...

now, since it is possible to have anti-virus protecting you at multiple layers and with different optimizations at each layer, it's important to define a failure as a piece of malware slipping past all of those layers... maybe that means incident response is required, or maybe you've got some other preventative control that stopped it after (not before) it slipped past your final layer of av prevention (ex. maybe you're a little better than average and are actually running without the admin privileges that the malware needed to do its dirty deed)...

i know what you security practitioners are probably thinking - 'where can i get this magical av product that only fails once every 5 years?'... i'm sure that would make your lives easier, wouldn't it... well right now it's just a hypothetical av product but later on we'll see what we can do...

so now let's say you're a security practitioner, part of the IT department of some company... let's further say that at this company you and your immediate coworkers are supporting approximately 2,000 of those same average users and you're using the same anti-virus technology... guess what - you can now expect to see an anti-virus failure about once a day! (by the way, when one compromised machine goes on to spread the same malware to 5/10/50 more machines on your network, i count that as the same event - it's the same failure)...
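just to make that arithmetic concrete, here's a back-of-the-envelope sketch in python - the numbers are the hypothetical ones from above, not real measurements:

```python
# back-of-the-envelope: how a per-person failure rate turns into
# failures per day once you scale up to an enterprise population...
# numbers are the hypothetical ones from the post, not measurements

per_person_failures_per_year = 1 / 5   # one failure every 5 years per user
users = 2000                           # enterprise headcount

failures_per_year = per_person_failures_per_year * users
failures_per_day = failures_per_year / 365

print(f"expected failures per year: {failures_per_year:.0f}")  # ~400
print(f"expected failures per day:  {failures_per_day:.2f}")   # ~1.1
```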

did the av somehow magically become less effective? no, of course not, it's the same technology - but in this enterprise scenario there are 2,000 times as many opportunities to fail per unit of time as there were in the single user scenario, because there are 2,000 times as many users... malware compromise depends in part on decisions made by the user (which should be equally good/bad between the two scenarios) but also on exposure to the malware in the first place which, while it may be regular or even frequent, is an inherently random event... that means even if you could guarantee perfectly predictable (not necessarily correct, just predictable) decision-making from the users (note: you can't actually guarantee this), anti-virus failure events are still at least partially random...

high school level math tells us that a pair of flipped coins is more likely to have at least one turn up heads than a single flipped coin is, and the same principle applies to anti-virus failure events - more trials means a higher overall probability of the failure event occurring, and more concurrent trials means less expected waiting time between failure events...
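to put a number on that coin analogy: if each user independently has some small probability p of producing a failure in a given window, the chance of at least one failure somewhere in the population is 1-(1-p)^n... a rough python sketch (independence between users is a simplifying assumption, not a claim about real networks):

```python
# probability of at least one failure event among n independent users
# in a given window, assuming each user fails with probability p in
# that window (independence is a simplifying assumption)

def p_at_least_one(p: float, n: int) -> float:
    return 1 - (1 - p) ** n

# hypothetical: one failure per user per 5 years ~= daily probability p
p_per_day = 1 / (5 * 365)

print(p_at_least_one(p_per_day, 1))     # single user:  ~0.00055
print(p_at_least_one(p_per_day, 2000))  # 2,000 users:  ~0.67
```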

now let's think about this in the real world - you security practitioners out there are in a perfect position to know how often your company's anti-virus fails and how many people you're supporting, so what's the per-person failure rate of your av?... at my previous employer we had (to my knowledge) one notable failure in a period of 2 years for a company size of about 20 people (and i helped clean it up)... that's a per-person failure rate of once every 40 years!... now i'm willing to bet that we were an anomaly, a statistical outlier, and that the true per-person failure rate is more frequent than once in four decades, but i'm also willing to bet that larger companies with 2,000 people in them do not suffer a new failure every single day - which means their anti-virus has a per-person failure rate that is actually lower (and therefore better) than the magical example of once every 5 years...
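if you want to run the same sanity check on your own numbers, the arithmetic is just failures divided by person-years... a quick sketch using the figures above:

```python
# per-person failure rate from observed data: failures per person-year,
# and its reciprocal (person-years between failures)...
# figures are the ones from the post (20 people, 2 years, 1 failure)

failures = 1
people = 20
years_observed = 2

person_years = people * years_observed  # 40 person-years
rate = failures / person_years          # failures per person-year

print(f"per-person rate: {rate:.3f} failures/year")           # 0.025
print(f"i.e. one failure every {1 / rate:.0f} person-years")  # 40
```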

so take an honest look at how often your anti-virus really fails on a per-person basis... one of the things i've noticed is that a lot of the people who are convinced that av isn't doing a good job anymore are basing that on enterprise experience as their anecdotal evidence - not realizing that the more users you bring into the picture, the more the law of truly large numbers works against you... it's not that av actually fails often, it's that failure scales up (and that's true for all failure, not just av failure)...

Saturday, March 21, 2009

choosing a good password

recently i've seen two separate videos talking about the problem of choosing a good password - one with michael santarchangelo and one with graham cluley... what bothers me, though, is that they're both repeating relatively old ideas with their pass-phrase method...

here's a different approach to the password choosing dilemma:

don't

you heard me, don't... don't choose a password... the primary reason people choose their own passwords is so that they end up with something they can remember... that requirement is obsolete - not because people's memories have gotten better (they haven't; if anything the opposite is probably true) but because we now use so many things that need passwords that there's no way to remember a different password for each of them, and we've had to adapt... the poor way most people have adapted is to re-use passwords so that there's less to remember, but the smarter way is to store them so that you don't have to remember them at all...

let the computer generate a password for you using a program like password safe - it will be superior to anything you could choose manually... then store the password in password safe, because computer memory is also superior to yours... if it's a password you need when the computer isn't on, or a password you need in order to get into the computer in the first place, write it down and stick it in your wallet... if it's a password you need at many computers, you can carry the encrypted password database password safe creates around on a USB flash drive (in fact, password safe itself runs quite nicely from a flash drive without needing to be installed everywhere you need your passwords)...
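just to illustrate what 'let the computer generate it' means, here's a minimal python sketch using the standard library's secrets module - password safe and other managers have their own built-in generators, so this is only an illustration of the idea, not a replacement for them:

```python
# minimal sketch of a machine-generated password using python's
# standard 'secrets' module - a password manager like password safe
# does this for you (and stores the result), this just shows that a
# random password is trivial to produce and far stronger than one a
# person would pick and try to memorize

import secrets
import string

def generate_password(length: int = 20) -> str:
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return ''.join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())  # store it in the manager, don't memorize it
```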

stop following old password advice that hasn't kept up with the times, adapt to the realities of today, and start using technology (even low-tech technology like pencil and paper) to make dealing with the password problem easier...

the best laid plans of mice and men often go awry

by now news of the bbc's botnet blunder has spread pretty far... the bbc's actions seem to have violated the law, and there's no question that they were unethical and that prevx's complicity also points to an ethics problem in that company...

many people have written about it already, but alex eckelberry's point about the potential for unintended consequences in taking down a botnet reminded me of a discussion at securosis that i had intended to revisit here but never got around to...

the securosis post in question involved a proposal to set up some system by which an authority could shut down botnets - basically to nullify the legal and ethical hurdles that currently keep most malware researchers from taking down the botnets they're studying...

my two comments on the subject were as follows:
#
kurt wismer Feb 17

you make a compelling argument - botted machines are a public security hazard and some hazards are grievous enough to warrant unauthorized intervention…

i instinctively rebelled against this notion because i don’t like the idea of authorities mucking around on my computer out of some potentially misguided notion that they know better than i do… but i can’t find any flaw in the applicability of your analogy…

the only problem i foresee is that if the bad guys can’t hide their creations behind legal red tape then they’ll hide them behind something equally compelling, like commands to self-destruct and wipe the host machines (to get rid of evidence and also to just be mean) if the network is tampered with… this switch from legal to technical controls may mirror anti-tampering efforts in other domains… if they can figure out a way to make killing the botnet do more harm than good then it will be equivalent to the situation we’re in now and no change in law will affect such a technological adaptation…
#
kurt wismer Feb 17

@rich:
i’m not sure the good it would do would outweigh the bad… when 1,000,000 people suddenly have no operating system, what do you think will happen? steve balmer is already balding, the rest of his hair would be gone the instant microsoft started receiving support calls from all the victims… and that’s just the home users…

what happens when some of those machines are in the enterprise? or in government or military? what if they’re part of critical infrastructure? worse still if it’s in such machines in other countries - taking down the botnet could cause an international incident…

self-destructing botnets are something i wouldn’t want to touch with a 10 foot pole…


currently, ethical malware researchers steer clear of tampering with botnets or the machines they're on... if we change the rules of engagement - if the forces (whatever they may be) that currently prevent us from tampering with botnets went away and we started behaving like the bbc did - then the malware authors would have to adapt, and self-destructing botnets are an obvious technical approach to regaining the tamper-resistance they currently enjoy thanks to the good behaviour that makes the good guys so 'good'...

Sunday, March 08, 2009

ethical hacker not so ethical when hacked

one of the posts to catch my eye during my absence was this post on cd-man's blog about ethicalhacker.net getting compromised...

what struck me about it was that the folks at ethical hacker waited months to inform their membership that they'd been compromised and that users should change their passwords...

now, i suspected some months ago that ethical hacker had either had some kind of breach or had significantly changed their MO when i started getting spam at the address i registered with, but to find out that users' credentials had been in the hands of attackers for 8 months before ethicalhacker.net decided to warn anyone is simply outrageous...

i understand that no site is impenetrable, so a breach of this nature is inevitable - i don't have a problem with that... i also understand that many of their users are probably using good password hygiene - however, i also know that a surprising number of security folks probably aren't... there are many people in the security world who, although they will enforce security policies rigidly within the enterprise, do not take anywhere near the same measures with their own systems at home and so most likely reuse passwords... these people were put in harm's way when the folks at ethical hacker chose sneakiness over transparency...

it's events like this that make me glad i use a different randomly generated password for each site (in addition to the different randomly generated email address i use for each site)... ethical hacker's name is now ironic, much like 'little john' or 'tiny', because they definitely aren't putting their users first...

ok, i'll draw a bunny

yes, this is a fluff post, you have now been warned...

some of you may have noticed i haven't posted in a while... in fact it's been over a month since i last posted anything - my post archive has no entries for february at all... as some of you may recall, anti-malware is not my day job - in fact security itself isn't really my job unless you count implementing security features in other software (and even then the only reason i'm the one doing it is because i'm the one who's really familiar with the concepts)... security in general and anti-malware in particular are just things i've always had a natural affinity for... that being said, sometimes you need to step back for a while, smell the roses, and gain some perspective, even when it comes to something you're passionate about... ergo, i've been absent...

but today i got an email from cd-man, checking up on me and letting me know that he's tagged me in some internet meme about drawing bunnies... now, i must admit i have a fascination with memes (for purely pragmatic reasons), and what better way to study a meme than from the inside by participating in it, so i figured why not give it a shot... thus i dug out my high school sketchbook and an appropriate pencil, found a worthy subject, and threw this together (if only i also had a scanner and didn't have to use a crappy digital camera to put it on the web)...



so now i've got to tag 3 people... hmmm, 3 bloggers - jonathan poon, costin raiu, and aaron walters...