Monday, December 31, 2007

user education from a different angle

this is rather old but back in september, mike rothman posted an introduction to security mike's guide to internet security and while i was reading it a light bulb went off in my head...

i'm not sure a book (the guide is an ebook he sells, though there's a portal and blog associated with it) can really start the kind of grassroots security movement mike is aiming for... i think there are inherent barriers in the scenario that would inhibit that, in fact... for one thing, the security knowledge that is supposed to be the currency of that grassroots movement is bound to an artifact (the ebook) and that artifact's distribution is controlled (more or less) by a commercial business model (mike put effort into that book and rightly wants to get paid)... the end result is that people have to want the knowledge in that book... they have to want it bad enough that they're willing to pay for and read the book and that means that to some extent mike is probably going to wind up preaching to the choir...

what really piqued my interest, however, was the question that came to mind of whether or not those barriers could be removed... obviously the book could be made free, that would be one barrier down, but the knowledge contained within it would still be bound to it... in order to get the knowledge you'd need to get the book and in order to pass on the knowledge you'd have to pass on the book... passing the knowledge on from one person to the next is clearly a requirement for mike's grassroots security movement, and in the broader context that security movement sounds an awful lot like the "culture of security" i've often heard we need... but culture tied to a book just doesn't seem like it would be successful now... it certainly was in the past when books and culture were inextricably linked, but that time ended long (on the order of centuries) ago... what if the information could be passed from person to person without the book? perhaps not all as one big chunk but rather piece by piece... what would that look like?

then it struck me - that would look like a meme... a unit of cultural information that replicates from one mind to another by way of imitation... so then i set about trying to learn more about memes (did anyone miss me in october?) because i didn't (and still don't, really) know all that much about them... what i found was that virtually all culture can be regarded as being memetic in nature, whether it be religion or consumerism, politics or littering (you didn't think memes were the exclusive domain of lolcats, did you?)... in fact, once you have an idea of what you're looking for you start being able to see it in all sorts of things...

as an aside, even going to school and reading books and learning things the old fashioned way are memetic, so you might be wondering why a security ebook wouldn't be just as successful... the reason has to do with the hook for the meme... up to a certain age you have to go to school, it's not even a choice, but if you want to be even moderately successful in later life you need to get good grades and not flunk out - which means reading the books and learning the material... later on, if you want an even better life, you enroll in post secondary education and read books and learn material so you can get your diploma, get a good job, and so on... what's in it for you as far as a security guide goes? do people generally want to learn about security? is it going to make a clear and obvious improvement in the quality of one's life? will there be frat parties along the way or hot guys/girls to chat up in class? no, a security guide doesn't have nearly as much going for it from a memetic hook point of view as academia does and academia isn't exactly the most successful meme either (just look at how relatively few participate in it compared to religion or tv watching, for example)...

another thing that i've learned is that in order to use memes to disseminate security knowledge (or at least promote more secure behaviour) it's going to be necessary to engage in memetic engineering in order to construct suitable memes - though i'm still looking for better sources for what's involved in meme synthesis and/or meme splicing because so far my best attempts have turned out to just be meme hacks... now, if you're thinking that memetic engineering sounds a bit like social engineering, well, you'd be right and the irony of using such a technique for good instead of evil is not lost on me... i suppose you could call it a kind of white-hat social engineering...

the more interesting bit of irony (to my mind at least) is that using memes to help people make themselves more secure against malware and other security threats means using something with similar properties to the most well known form of malware - viruses... indeed, memes have even been referred to as viruses of the mind... it is this very viral quality that i think needs to be exploited in order to reach a wide enough group of people to "suffocate the bad guys" (as mike put it) and bring about the "culture of security"...

Friday, December 28, 2007

what average users need to know

i read a very interesting post about average users and how they only care about usability to the exclusion of security and it got me thinking...

i think one of the main reasons people focus so much on usability and so little on security is because the threat is too abstract... they've heard of viruses (and so probably use anti-virus software, though probably don't update it) but the current threat landscape (as opposed to the one from 20 years ago that they are more familiar with) is too disconnected from the average person's day to day reality for them to comprehend the need for the security measures we more security conscious folks keep advising...

this is a problem, especially for those who advocate safe hex, so how do we address it?

one avenue we should probably consider is describing what threat a particular safe hex practice is meant to counter - but that only connects the security measure with the threat, it doesn't actually make the threat itself seem any more real or any more like something the user actually needs to worry about...

i think users might benefit from knowing what they have that attackers would want as well as what lengths attackers are willing to go to in order to get those things... what attackers would want from average users isn't a difficult list to compile (it may not be complete, but it certainly gets the point across):
  • money
  • credit card numbers for getting money
  • personal identification information for getting new credit cards in your name so as to get money
  • user names and passwords for financial institutions like banks or paypal so as to get money
  • user names and passwords for any other site because you might be one of those people who uses the same user name and password everywhere and if so they can use that to get money
  • cpu cycles, storage space, and bandwidth for attacking others, usually to get money from them
  • fame and various other social rewards (though these are older goals that are much less relevant nowadays)
obviously the major goal is to get money and the more money the attackers get, the more they can invest in developing more effective and sophisticated attacks that reach even more people...

what attackers are willing to do to get what they want isn't too hard to list either:
  • trick you (via social engineering) or your computer (via exploits) into installing malware to steal your credit card number, passwords, or any other information they can use
  • trick you (phishing) or your computer (pharming) into believing a fake bank/paypal/whatever website is the real one so as to steal your account details or trick you into buying fictional goods - ultimately to steal your money
  • trick you or your computer into installing malware to show unwanted advertisements (adware)
  • trick you or your computer into installing malware that makes your data inaccessible until you pay a ransom
  • trick you or your computer into installing malware to give the attacker enough access to your computer (generally making it part of a botnet) in order to use it to attack others (by trying to overload legitimate sites, hosting fake and/or exploit laden sites, sending junk mail, sending malware or links to malware sites, etc)
  • trick administrators or systems at legitimate (and in some cases very popular) sites to host exploits for tricking the computers of visitors to those sites
  • plant malware on or construct malware that can spread itself to removable media (floppy disks, cd's, dvd's, flash media, or basically anything with memory that you can plug into your computer)
and of course, the bad guys are willing to launch their attacks on average users on a wide scale so as to reach as many potential victims as possible... encountering such attacks is not an isolated incident, there are very few computer users out there who haven't been a victim in some way at least once...

ultimately the average user needs to be made to understand that a computer is not an appliance that just does what they want it to (nor can it be), but rather it's a tool that can allow many people to do many things and not all people want to do good things... if they have stuff (money, personally identifiable information, data, etc) they want to keep safe then they need to care about security...

ethical conflict in the anti-'rootkit' domain - part 2.1

sometimes microsoft hires really good people like jimmy kuo and sometimes they really screw up and hire folks you maybe wouldn't want to meet in a dark alley... that seems to have happened with their acquisition of ep_xoff et al behind rootkit unhooker... you may recall i posted about this individual once before in relation to an apparent ethical conflict (ep_xoff wrote and released a stealthkit, or 'rootkit' for those drinking the rootkitDOTcom koolaid, capable of bypassing all stealthkit detectors save possibly microsoft's own strider winpe ghostbuster technology)...

what i didn't post about before were his reactions to the concerns expressed by myself and cd-man (here)... while his criticisms of me were little more than childish, according to dmitry sokolov, ep_xoff veered more into the realm of criminal behaviour by attempting to incite a DDoS against or defacement of cd-man's blog...

normally i would say that hiring such a goon would reflect poorly on a company, but since microsoft's moral compass isn't really known for pointing to true north, i suppose i shouldn't have expected better from them...

Monday, December 17, 2007

when is a botnet not a botnet?

when the term botnet is misused... at least misuse seems to be the interpretation allysa myers made... although i'm not sure the headline "fbi: 'botnets' threaten online shopper security" can actually be attributed to the fbi (because the media is well known for twisting things to make a catchy headline) there certainly does seem to be a lot of ambiguity in the way the term botnet is being used...

that said, i really don't think the suggestion of coming up with a new term for what used to be called a botnet is the answer... i'm reminded of another term that got watered down in a similar way... that term was virus... it seems to me that we never tried to come up with an alternative for virus (or if we did it thankfully died a quick death), rather we came up with terms for what the label virus was being misapplied to...

come to think of it, it seems to me that not too long ago the same problem occurred with the term spyware... arguably rootkit as well...

i don't think playing musical chairs with terminology is the proper way to resolve the problem... if people are misusing a term and confusing the issue in the process, abandoning the term in favour of a brand new one isn't going to make the issue any less confusing... instead it will simply introduce a new term that they've never heard of before and are unfamiliar with and they'll wonder why it's being used where botnet was being used before... that seems likely to confuse people, if you ask me...

i think the first thing to consider is what the problem really is - to my mind the root problem (ignoring its consequences) is terminology misuse... changing terminology to run away from that misuse doesn't actually address the problem... to address the problem we need to know why it happens...

so why does terminology misuse happen? the simple answer is ignorance - people who misuse these terms do so because they don't know any better (or because the audience they're trying to reach don't know any better and they don't care to elevate their audience)... they don't know any better because malware is not a mainstream topic in our society... certain concepts bleed through into the mainstream and get assimilated by mainstream culture... those concepts then get used to try and explain things in the malware field, but with only a few concepts in their repertoire those explanations wind up being a distortion of reality rather than an accurate model...

in this case it seems that people are struggling with the idea of identity theft related malware and how botnets scale that problem up... they're struggling because the general public doesn't have the conceptual currency to properly express these ideas, while a select few (relatively speaking) do... some people are haves, but most are have-nots...

that imbalance is something i've certainly been trying to address for some time by trying to make information more available and accessible and hoping that the knowledge would trickle down (for lack of a better phrase)... obviously that is a rather slow process (and just as obviously, i seem to appeal more to technically minded folks) in part because only those who seek the information will find it... i think what we really need is a revolution in the way we disseminate knowledge, not just a set of new words...

Wednesday, December 05, 2007

why X is insecure - and probably always will be

about 2 weeks ago (old i know) you may have come across these two articles (by drazen drazic and lonervamp respectively) about why businesses are insecure (the 7 reasons why businesses are insecure and more reasons why businesses are insecure)...

i'm sure they're very good business reasons for why businesses are insecure, but i'm also sure that a business that addressed all of these problems would still be insecure for reasons that have nothing to do with that business or businesses in general or business security in general...

the fact is there's a technical reason why virtually any non-trivial thing (and anything computer related definitely falls under that heading) we'd want to secure is almost certainly not secure and probably never will be... i'm not talking about the fact that there is no such thing as secure, rather i'm talking about the asymmetric relationship between attack and defense... if you're trying to defend something you have to try to defend it from all possible attacks, but if you're trying to attack something you only need to find one successful attack vector...
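just to make the asymmetry a little more concrete, here's a rough back-of-the-envelope sketch (my own illustration with made-up numbers, nothing rigorous) of how quickly the attacker's odds improve as the number of possible attack vectors grows, even when every single vector is defended quite well...

```python
# illustrative only: assumes each attack vector is defended independently with
# the same per-vector block rate - real systems are messier, but the shape of
# the curve is the point
def attacker_success_probability(num_vectors, per_vector_block_rate):
    """chance that at least one of num_vectors attacks gets through,
    when each one is blocked with probability per_vector_block_rate"""
    return 1.0 - per_vector_block_rate ** num_vectors

for n in (1, 10, 100, 1000):
    p = attacker_success_probability(n, 0.99)  # 99% effective per vector
    print(f"{n:>4} possible vectors -> attacker succeeds with probability {p:.4f}")

# even at 99% effectiveness per vector, 1000 possible vectors leave the
# attacker with a better than 99.99% chance of finding the one that works
```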

clearly defense takes a lot more work and that's a problem, but it's not clear that we can ever really change that... if we were going to try to change it, though, how would we go about it? the two obvious answers are: 1) make defense easier (presumably by reducing the amount of possible attacks we need to defend against), or 2) make finding that one successful attack vector harder...

making defense easier sounds good but it's easier said than done... sun tzu talked about this very thing when he said that one should force the enemy to engage in an environment of one's own choosing and thus choose what one has to defend and what the enemy can attack (art of war, part 6: weak points and strong)... now you might be tempted to limit the scope of your analysis to an arbitrarily narrow frame of reference (as schneier does here when he refers to cryptography as the exception to the rule of asymmetry between attack and defense) but in reality that doesn't actually get us any closer to our goal of reducing the number of defenses we need... what we would really need to do is reduce the pool of potential attack vectors, to literally remove things from systems that could be used as an avenue of attack... that means fewer hosts on our networks, less diversity amongst the hosts on our networks (gasp! yes, i said it - diversity is great for minimizing the overall effect a successful attack has on a given population of hosts but it increases the pool of potential attack vectors and so makes compromising assets on the network easier; in essence, what's good for availability may not be so good for confidentiality), fewer services running on those hosts, fewer system components exposed to incoming content (ie. browsers, email clients and other network clients/servers that can do less/have less functionality), less potentially sensitive data stored on those hosts, etc... unfortunately this is completely backwards when viewed through the lens of technological progress, and while minor efforts in this area are no doubt considered beneficial, it would take extreme measures (perhaps even beyond the realm of the realistic given the complexity of modern operating systems) to actually make a significant change in the asymmetry between attack and defense for a system...
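as a small, concrete illustration of what 'fewer services running on those hosts' might mean in practice, here's a sketch (it assumes the third-party psutil package is installed, and it's only my own toy example, not a real attack surface audit) that inventories the network ports a host is actually listening on - every one of them is a potential avenue of attack that could, in principle, be removed...

```python
# crude attack-surface inventory: list every port this host is listening on
# (assumes the third-party psutil package is available: pip install psutil;
# may need elevated privileges to see other users' sockets on some systems)
import psutil

listening = set()
for conn in psutil.net_connections(kind="inet"):
    if conn.status == psutil.CONN_LISTEN and conn.laddr:
        listening.add((conn.laddr.ip, conn.laddr.port))

for ip, port in sorted(listening, key=lambda entry: entry[1]):
    print(f"listening on {ip}:{port}")

print(f"{len(listening)} listening endpoints - each one is a candidate for removal")
```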

making it harder to find that one successful attack vector isn't necessarily a piece of cake either... there's one fairly well known school of thought that posits that reducing the number of vulnerabilities will shrink the pool of potentially successful attack vectors... this school of thought may be right, in a theoretical sense, but in practice it's starting to look like the total number of vulnerabilities is high enough that patching vulnerabilities at the rate we're going right now isn't really having that big an impact on the difficulty of finding a successful attack vector... another well known approach is to devise a system where the attacker has to successfully defeat multiple defenses in order to be successful on the whole... this is, of course, defense in depth... naively one might think this could put attacker and defender on more or less equal footing because now not only does the defender have to defend against a large number of possible attacks, the attacker has to breach a large number of possible defenses... unfortunately, there are only so many defenses one can reasonably deploy and, even with all of them deployed, the amount of work an attacker has to do still won't compare to the amount of work required for defense - nevermind the fact that all those defenses carry with them potential vulnerabilities which could themselves be used in an attack...

that said, it isn't necessarily true that we can't use the asymmetry to our benefit... we can, we just can't do it as a defender... richard bejtlich would i'm sure suggest what he likes to call threat-centric security but which, in the context of this post, i'll call offensive security - that is where we (who have things that need defending) go and 'attack' (as in track down, identify, charge, and imprison) those who would attack us... to quote sandi hardmeier:
Also - I have a special warning for the bad guys - you can hide from some of us, but you can't hide from all of us, and you most certainly cannot hide from your victims.
alas, this too is a kind of defense, and although we can turn the asymmetry around for individual cases, to actually protect our systems this way we'd need to go after all potential attackers (which is an unknowable set of people) whereas the attackers realistically only need to worry about the actual organizations/people they attacked (which is a much smaller and more knowable set of people)... ultimately, reducing the pool of attackers is much the same as reducing the pool of vulnerabilities - for each one you remove there's more where that came from...

so there really doesn't seem to be a good way to turn the asymmetry around and make defending easier than attacking... there are things that can improve the situation to some extent but it can be a real balancing act sometimes...

Tuesday, November 20, 2007

looking for information

a little different from the normal fare for this blog, i just wanted to point people towards a comment that many probably wouldn't have otherwise seen (because they rarely leave their feed reader)... vesselin is looking for some info about a number of office-related vulnerabilities/exploits... i would imagine there must be some people out there who can help...

Monday, November 19, 2007

defense in depth revisited

so as a result of my previous post on the use of multiple scanners as a supposed form of defense in depth i was pointed towards this set of slides for a presentation by sergio alvarez and thierry zoller at n.runs:

http://www.nruns.com/ps/The_Death_of_AV_Defense_in_Depth-Revisiting_Anti-Virus_Software.pdf

the expectation was that i'd probably agree with its contents, and some of them i do (ex. some of those vulnerabilities are taking far too long to get fixed) but my blog wouldn't be very interesting if all i did was agree with people so thankfully for the reader there are a number of things in the slides i didn't agree with...

the first thing is actually in the filename itself - did anyone catch that death of av reference in there? obviously this is qualified to be specific to a more narrow concept but the frame of reference is still fairly clear...

i'm not going to harp on that too much because that's just a taste of what's in store, actually... the main thrust of the presentation these slides are for seems to be that defense in depth as it's often implemented (ie. with multiple scanners at multiple points in the network) is bad because of all the vulnerabilities in scanners that malware could exploit to do anything from simply bypassing the scanner to actually getting the scanner to execute arbitrary code...

this says to me that instead of recognizing that you can't build defense in depth using multiple instances of essentially the same control, the presenters would rather call the construct a defense in depth failure and blame the failure on the scanners and the people who make them (and make no mistake, there certainly is some room for blame there)... the fact is that it was never defense in depth in the first place and if you want to assign blame, start with the people who think it is defense in depth because they clearly don't understand the concept... in a physical security context, if my only defensive layer is a wall and i decide to add a second wall (and maybe even a third) i add no depth to the defense... an attack that can be stopped by a wall will be stopped by the first one and one that can't be stopped by the first wall probably won't be stopped by subsequent walls...

the slides also have some rather revealing bullet points, such as the one that lists "makes our networks and systems more secure" as a myth... this goes back to the large surface area of potential vulnerabilities; the argument can be made that using such software increases the total number of software vulnerabilities present on a system or in a network - however this is true for each and every piece of software one adds to a system... i've heard this argument used in support of the idea that one shouldn't use any security software and instead rely entirely on system hardening, least privileged usage, etc. but it's no more convincing now than it has been in the past... yes the total number of vulnerabilities increases but there's more to security than raw vulnerability counts... the fact is that although the raw vulnerability count may be higher, the real world risk of something getting through is much lower because of the use of scanners... there aren't legions of malware instances exploiting these scanner vulnerabilities, otherwise we'd have ourselves an optimal malware situation...

another point, and one they repeat (so it must be important) is the paradox that the more you protect yourself the less protected you are... this follows rather directly from the previous point, using multiple scanners is bad because of all the vulnerabilities... the implication, however, is that if using more scanners makes you less secure then using fewer scanners should make you more secure and thus using no scanners would make you most secure... i don't know if that was the intended implication, i'm tempted to give the benefit of the doubt and suggest it wasn't, but the implication remains... again, there's more to security than what they're measuring - they're looking at one part of the change in overall security rather than the net change...

yet another point (well set of points, really) had to do with how av vendors handle vulnerability reports... as i said earlier, some of the vendors are taking far too long but some of the other things they complain about are actually quite reasonable in my eyes and i find myself in agreement with the av vendors... things like not divulging vulnerability details when there is neither a fix nor a work around ready yet (no point in giving out details that only the bad guys can actually act on), condensing bugs when rewriting an area of code (i develop software myself, it makes perfect sense to me that a bunch of related bugs with a single fix would be condensed into one report), fixing bugs silently (above all else, don't help the bad guys), and spamming vulnerability info in order to give credit to researchers (if you're in it for credit or other rewards you'll get no sympathy from me)...

finally, perhaps the most novice error in the whole presentation was the complaint that scanners shouldn't flag archive files as clean if they're unable to parse them... there is a rather large difference between "no viruses found" and "no viruses present"... scanners do not flag things as clean (at least not unless the vendor is being intellectually dishonest) because a scanner cannot know something is clean - all a scanner can do is flag things that aren't clean and the absence of such flags cannot (and should not, if you know what you're talking about) be interpreted to mean a thing is clean... scanners tell you when they know for sure something is bad, not when they don't know for sure that something isn't bad... if you want the latter behaviour then you want something other than a scanner...
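to make the difference between "no viruses found" and "no viruses present" concrete, here's a hypothetical sketch (not any real scanner's api, just my own illustration) of how scan results ought to be read - every outcome is a statement about what the scanner found (or couldn't look at), never a certification that the object is clean...

```python
# hypothetical scan-result reporting - every outcome describes what was found,
# none of them is a declaration that the object is clean
from enum import Enum

class ScanOutcome(Enum):
    KNOWN_MALWARE_FOUND = "known malware found"
    NO_KNOWN_MALWARE_FOUND = "no known malware found"  # NOT "no malware present"
    COULD_NOT_BE_SCANNED = "object could not be parsed/scanned"

def describe(filename, outcome):
    print(f"{filename}: {outcome.value}")
    if outcome is not ScanOutcome.KNOWN_MALWARE_FOUND:
        # absence of a detection is not evidence of cleanliness
        print("  (note: this is not a guarantee that the object is clean)")

describe("archive.zip", ScanOutcome.COULD_NOT_BE_SCANNED)
describe("report.doc", ScanOutcome.NO_KNOWN_MALWARE_FOUND)
```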

all this being said, though, i did take away one important point from the slides: not only is using multiple scanners not defense in depth, using multiple scanners in the faux defense in depth setup that many people swear by comes with security risks that most would never have considered...

Monday, November 12, 2007

the myth of optimal malware

this is going to be an interesting myth-debunking post because the object of the myth actually exists...

there are any number of various optimal properties that a given piece of malware can have, such as novelty (new/unknown), rarity (targeted), stealth, polymorphism, anti-debugging tricks, security software termination, automatic execution through exploit code, etc...

likewise, there are any number of optimizations a malware purveyor can adopt, such as continually updating the malware, targeting the malware to a small group of people, spreading it with botnets, only using malware which is itself optimal, etc...

there really is malware using some or perhaps most of those tricks and there really are malware purveyors using some or all of those techniques so you may be wondering where the myth comes in... the myth comes in when we start considering these optimizations as being universal or at least close to it - that most if not all malware is as optimal as it can possibly be and that most if not all malware purveyors use the most optimal deployment techniques they possibly can...

while each of those optimizations on their own seem to have become commonplace, few instances of malware can truly be considered optimal... for example, most malware is not targeted - sure the instances we hear about make for a great story and draw lots of readers, but they're a drop in the bucket next to the total malware population... novelty is seemingly even more popular since all malware starts out as new at some point, but as i've said before novelty is a malware advantage that wears off... polymorphism attempts to keep that novelty going indefinitely but it, along with novelty and targeting, were really only ever effective against known malware scanning - they hold no particular advantage against anti-malware techniques that don't operate by knowing what the bad thing looks like...

the same holds true for malware purveyors as well, few do what it really takes to get the most out of malware... otherwise malware would be much more successful than it is... even security conscious folks like you and i would be getting compromised left right and center because our anti-malware controls would just not be effective...

but that doesn't stop people from believing or falling into the logical trap that is optimal malware... i'm sure you've seen and perhaps even constructed arguments based on this fallacy... the anti-virus is dead argument is based in this as it posits that scanners are not effective because of new/unknown malware despite the fact that that malware doesn't stay new/unknown for long and that the effectiveness of known-malware scanning is precisely the reason the malware creators have to keep churning out new versions of their wares... the school of thought that says software firewalls are useless because malware can just shut them down or tunnel through some authorized process is likewise based on the myth of optimal malware because although some malware certainly does bypass software firewalls, not all do, and so they remain at least somewhat effective as a security control... in fact, any similar argument that says security technology X is useless because malware can just do Y to get around it is based on the myth of optimal malware as there is plenty of malware that doesn't do Y... i think i've even fallen prey to this fallacy on occasion when constructing arguments (so there's no need to point examples out to me, i know i'm not perfect)...

so keep this in mind the next time you run across a school of thought that attributes near supernatural abilities to malware - with truly optimal malware the malware purveyors would be able to get past most if not all our anti-malware controls all of the time (not unlike fooling all of the people all of the time), and since that isn't happening we can conclude that most malware is in fact not optimal...

the user is part of the system

dave lewis posted a short observation on how XSS gets discounted and in the process touched on something much bigger... a lot of people want to discount anything in security that depends on the user...

daniel miessler said more or less this very thing when he wrote about the new mac trojan and marcin wielgoszewski seems to have agreed with him... then there are those who discount the notion of user education as something that doesn't work or is wasted effort... more ubiquitous than that are the security models and software designs that do their best to exclude or otherwise ignore the user in order to devise purely technological solutions to security problems...

perhaps this is something that not everyone learned in school (like i did) but the user is part of the system... sure the user can be considered a complete and whole thing on its own, the user is a person, an individual who can exist and be productive without the system if need be, but can we say the same about the system? does the system do what it's intended to do without the user? does the work that the system needs to complete get done without the user? if the answer is no (and it generally is) then the system is not complete without the user... that means security models that ignore or exclude the user are models of systems missing a key component - and so-called solutions designed to work without regard for the user wind up getting applied to problem environments that don't match the ideal user-free world they were designed for...

including the user in one's analysis is hard and messy, i know, but excluding the user trades that difficulty in for another in the form of reduced applicability to the way things work in practice... after all, treating user-dependent risks as second-class security problems certainly doesn't make a lot of sense when social engineering is proving to be more effective than exploiting software vulnerabilities in the long run...

Saturday, November 10, 2007

using multiple scanners is not defense in depth

i often come across nuggets of information that i want to respond to (often because they represent fundamental assumptions that i think are wrong) that don't really have anything to do with the main point of the article, so i'll leave it as an exercise for the reader to guess who mentioned using 2 different scanners as being a part of defense in depth...

that post didn't have anything to do with av and this post doesn't really have anything to do with that post but the idea that using one vendor's scanner at the gateway and a different vendor's scanner on the desktops qualifies as defense in depth is actually fairly old and oft-repeated so this really goes out to a fairly broad audience...

using multiple scanners is NOT defense in depth... at best it's defense in breadth... known malware scanners all have essentially the same strengths and weaknesses, they all look for and block essentially the same sorts of things, there's going to be very little caught by one that isn't also caught by the other so they don't really complement each other...

the premise of defense in depth is that any given defensive technique has both strengths and weaknesses and overall defense can be stronger if that technique is combined with one or more other defensive techniques that are strong where the first one is weak... no layer in the defense is impenetrable but in combination the layers together approach much closer to impenetrability...
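here's a toy example of that premise with completely made-up numbers (my own illustration, so don't read too much into the specific figures) - genuinely complementary layers that fail independently get you much closer to impenetrable than a second copy of essentially the same control, which mostly just misses the same things the first one missed...

```python
# purely illustrative numbers: each layer blocks some fraction of attacks;
# complementary layers are modelled as failing independently, while a second
# known-malware scanner mostly misses the very same (new/unknown) samples
def gets_past_independent_layers(*block_rates):
    """fraction of attacks that slip past every layer, if layers fail independently"""
    leak = 1.0
    for rate in block_rates:
        leak *= (1.0 - rate)
    return leak

# two similar scanners: the second only catches a sliver of what the first missed
two_similar_scanners = 0.10 * 0.95                                 # ~9.5% still gets through
# one scanner plus a genuinely different control (say, behaviour blocking)
scanner_plus_behaviour = gets_past_independent_layers(0.90, 0.70)  # 3% gets through

print(f"two similar scanners:          {two_similar_scanners:.1%} of attacks get through")
print(f"scanner + complementary layer: {scanner_plus_behaviour:.1%} of attacks get through")
```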

so defense in depth requires complementary techniques/technologies and in so far as av companies are increasingly providing that in their suites, using similar products from multiple vendors doesn't get you any more defense in depth than you could have gotten with a single product because similar products are not complementary... what it can get you, however, is best of breed - some scanners may have features that make them better suited to gateway usage than others...

of course one could argue that they regard defense in depth as having defenses at multiple perimeters (the gateway and the host machines) but again, if those defenses are mostly the same then the inner layers of defense won't really be adding that much more to the overall defense... so using multiple similar products at different perimeters doesn't really add to the depth of your defenses, instead it adds redundancy which is the primary ingredient of fault tolerance...

Wednesday, November 07, 2007

what is a drive-by download?

a drive-by download is a form of exploitation where simply visiting a particular malicious website using a vulnerable system can cause a piece of malware to be downloaded and possibly even executed on that system...

in other words it's a way for a system to be compromised just by visiting a website...

the vulnerability (or vulnerabilities) exploited in order to cause a drive-by download can be in the web browser itself or possibly in some other component involved in rendering the content of the malicious page (such as a multimedia plug-in or a scripting engine)...

drive-by downloads are particularly pernicious for two reasons... the first is that it can be hard to avoid being vulnerable and still maintain the functionality people have come to expect from the web... all software has vulnerabilities at least some of the time and there may be quite a few pieces of software on a given system that deal with web content (such as real player, quicktime, flash, adobe acrobat reader, etc) that may have vulnerabilities... add to that the fact that vulnerabilities aren't always fixed right away and that many users don't apply patches or updates as soon as they're available and you wind up with a fairly large pool of potential victims...
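as an aside, shrinking that pool of potential victims basically comes down to a check like the following sketch (the component names and version numbers are made up purely for illustration, and the comparison uses the third-party packaging module) - is what i have installed at least as new as the oldest version known to contain the fix?...

```python
# hypothetical example: these plugin names and version numbers are invented for
# illustration only - the point is simply comparing installed versions against
# the minimum patched versions (pip install packaging for the Version class)
from packaging.version import Version

installed = {"flash player": "9.0.47", "quicktime": "7.2", "acrobat reader": "8.1"}
minimum_patched = {"flash player": "9.0.115", "quicktime": "7.3", "acrobat reader": "8.1"}

for name, have in installed.items():
    need = minimum_patched[name]
    if Version(have) < Version(need):
        print(f"{name}: {have} is older than {need} - update before browsing")
    else:
        print(f"{name}: {have} looks current")
```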

the second reason they are so pernicious is that it can be hard to avoid being exposed to an exploit leading to a drive-by download... the exploit can be delivered through legitimate, high profile, mainstream sites by way of the advertising (or other 3rd party) content on the site... if the ad network that supplies the advertising content is infiltrated by cyber-criminals (which has been known to happen) then they can sneak a malicious ad into the network's ad rotation and get it inserted into otherwise trusted and trustworthy sites... for this reason the old advice of only visiting trusted sites can't really protect you from this type of threat...

Monday, October 29, 2007

the myth of what anti-virus is

if you're like most folks the term "anti-virus" elicits images of a virus scanner methodically checking each and every file on a system for something that matches one of its hundreds of thousands of signatures... obviously that is the most well known aspect of the anti-virus field, but if you (like many others in this day and age) thought that that was all there was to anti-virus then you'd be dead wrong...

it is an exceedingly popular misconception that can be found underlying such pejorative statements as "anti-virus is a fundamentally flawed technology" and the new favourite "anti-virus is dead"... the simplest way to express the idea is that 'anti-virus == known virus scanning', but it's an idea that betrays a profoundly superficial view of the field... for example, it ignores the fact that the first anti-virus tools weren't scanners at all - flu_shot (which predates virtually any anti-virus product you've ever heard of) was a behaviour monitor/blocker and that was just one of many generic tools developed in the early days of the virus problem...

of course known virus scanning was developed before too long and became hugely popular (largely because it required and still requires the least amount of know-how from the operator) but although it thoroughly eclipsed generic techniques in market penetration it never completely displaced generic techniques from the spectrum of anti-virus technologies as evidenced by tools such as integrity master, advanced disk infoscope, chekmate, and invircible (all primarily integrity based tools)... then of course there was the rather visionary product (for its day) called thunderbyte anti-virus which was probably the first instance of what would today be recognized as an anti-virus suite, containing a known virus scanner (with one of the most instructive examples of an early heuristic engine), a behaviour monitor/blocker, an application whitelist, and more (if memory serves, frans veldman and the gang at thunderbyte also had hardware designed for virus detection but obviously that was separate from their software offering)... later thunderbyte's technology was bought by norman data defense which then went on to become known for their sandboxing approach to virus detection... i should also note that throughout the 90's a number of what most would consider conventional anti-virus vendors included integrity checkers in one offering or another in part due to their theoretically perfect ability to detect the effects of viruses that got past their scanners (assuming the operator was capable of using an integrity checker to its full potential)...
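for those who've never actually seen one, the core of an integrity checker is surprisingly simple - here's a bare-bones sketch (my own toy illustration; real tools like integrity master also protected their own databases, checked boot areas, and so on) that records a baseline of cryptographic hashes and later reports anything that no longer matches...

```python
# bare-bones integrity checker sketch: baseline a directory, then detect changes
# (a real integrity checker does much more - this just shows the core idea)
import hashlib, json, pathlib

def snapshot(directory):
    """map each file path under directory to a sha-256 digest of its contents"""
    return {str(p): hashlib.sha256(p.read_bytes()).hexdigest()
            for p in pathlib.Path(directory).rglob("*") if p.is_file()}

def compare(baseline, current):
    for path, digest in current.items():
        if path not in baseline:
            print(f"NEW file:      {path}")
        elif baseline[path] != digest:
            print(f"MODIFIED file: {path}")
    for path in baseline:
        if path not in current:
            print(f"MISSING file:  {path}")

# usage: record a baseline once, then compare against it later
# with open("baseline.json", "w") as f: json.dump(snapshot("./apps"), f)
# with open("baseline.json") as f: compare(json.load(f), snapshot("./apps"))
```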

so what was anti-virus really? what did the av community/industry consider it to be? basically anything that was intended to fight (and i don't just mean prevent) viruses... blacklisting (scanners), whitelisting, sandboxing, forensic integrity checking, etc... in other words, the basis for virtually every anti-malware technology today...

now there were a couple of developments in the late 90's (and just beyond) that bear some attention... one is that the incumbent anti-virus industry was slow to jump on the non-replicative malware problem, which obviously created a market opportunity for things like anti-trojan and anti-spyware tools... another is that some new entrants to the emerging anti-malware field realized that, although they could do the generic technologies as well or possibly even better than the existing anti-virus industry, there was no way they were going to be able to compete in what was still very much the anti-virus market without comparable scanning technology and that developing comparable scanning technology from scratch at that point was probably next to impossible... so instead they had to try to differentiate themselves not just from anti-virus companies but from the entire anti-virus industry (while still applying essentially the same techniques) which eventually led to an apparently fractured market - but not before the av industry finally committed itself to non-viral malware and became fully anti-malware...

of course we still call them anti-virus products in spite of the fact that they are intended to fight a lot more than just viruses now... this is because the concept of a computer virus has a far better foothold in the public's psyche than this new term 'malware' has... and why not? the computer virus has had decades to penetrate into the public's awareness... so much so that the term 'infect' frequently gets used in the context of any and all malware, even by otherwise knowledgeable security experts... we also still call them anti-virus products in spite of the fact (often completely ignoring the fact) that they are increasingly embracing the comprehensive suite approach that thunderbyte anti-virus took over a decade ago (which, by the way, is the real reason for the interest in new testing methodologies, as the old methods are still perfectly reasonable for testing known malware scanning on its own) and thus broadening the technological footprint of individual products to be closer to that of the field in general...

so no, anti-virus is not just scanning... it was anything intended to fight viruses and has now become an archaic reference to (and the root of) what is now properly referred to as anti-malware (anything intended to fight malware)...

Friday, September 28, 2007

i know there's no panacea but i still want one, darnit!

have you read this post by lonervamp about the silver bullet syndrome? forgetting his specific example of jericho forums for the moment, i agree whole-heartedly with his observation that although we all seem to agree that there is no panacea people still seem to talk and act as though they expect there to be one...

need an example besides the jericho one? well anton chuvakin has been kind enough to provide us with one this evening that involves anti-virus... he's abandoned known-malware scanning completely after a friend of his had to rebuild their system despite being 'protected' by a major brand av....

now i'm not going to debate the merits of the decision itself, if he wants to use a whitelist instead of a blacklist, that's his decision and it's certainly a workable one... the problem is the motivation - getting fed up because of an instance of av failing (or even many instances) points to (as mike rothman would put it) mismatched expectations... if you know and agree that there is no panacea then you shouldn't be overly bothered by instances of failure, you should be expecting failure...

so what's going on here? i think that although we more or less all agree that there is no panacea, people don't seem to appreciate what that really means... i suspect that the use of the term panacea itself may be obscuring the real implications so i'm going to put it in simple terms that everyone, expert and novice alike, can understand:
all preventative measures fail
that's right, each and every single last one of them... that is what it means to truly accept that there is no panacea, you have to accept that there will be failures... getting fed up with the failures is an emotional rather than rational reaction, and if you base your decisions on it you are likely to be disappointed in the future when it turns out that the next big thing fails too...

people don't like failure, however, and they certainly don't want to accept it... this is a shame because if you're going to develop a successful security strategy you have to not only accept failure, you have to anticipate it... anticipating failure is really a cornerstone of strategic thinking, without it there would be no impetus to devise contingency plans, and without those a strategy is nothing more than a basic plan and a lot of poorly founded hope... in short you need to learn to succeed by planning for failure rather than running blindly from it...

Wednesday, September 26, 2007

how to partition your google identity

with all the reports lately of vulnerabilities in google i suppose it's time again for me to talk about how you can mitigate the threat these vulnerabilities pose to gmail users...

i last wrote about this subject at the beginning of the year when a similar vulnerability was in the news, and i've also made my feelings on single sign-on and federated identity (the direction identity management seems to be going these days) pretty clear... these google vulnerabilities and those that came before illustrate the problem - the 'one account to rule them all' approach creates a hugely valuable (to attackers) online identity and single sign-on integration between web applications (like that which google or microsoft or any number of other players provide) makes it that much harder to mitigate vulnerabilities by following advice like "log out of gmail"...

so what if you're like me? what if you use more google apps than just gmail? what if you use blogger for example, or google reader, or google notebook, or google groups, etc... if you're like most people you use the same google account for all of them - your gmail account... it's convenient, you only need to remember one username and password, and when you visit an exploit page while still logged in to one of these other google web applications your gmail account gets pwned because logging into one logs into all...

now, of course you could always hope google fixes these problems before you get caught, or use tools like the noscript firefox extension that should be able to help most of the time, but you might not realize (as some security folks hadn't) that you can also use a non-gmail google account for those web applications... then, not only is it easier to stay logged out of gmail while using the other web applications, logging into the account used for those other applications will actually force you to log out of your gmail account...

it's really quite simple:
  1. just head over to the google accounts page and create a new account using whatever non-gmail email address you want and presto - you have a non-gmail google account...
  2. you probably already have data on those other google web applications though but that's not a problem because many of them have ways of sharing that data with other users (ex. google reader exports opml files that can be imported to a different google reader account, google docs and spreadsheets can be shared literally, blogger lets you add a different account as the blog administrator, etc)... those sharing facilities can make it easy to migrate that data from your gmail account to your non-gmail google account...
  3. then all you have to worry about is remembering another username and password, or do you? i don't, i just use passwordsafe, then i only have to remember the master password and it works across all websites - even entering the username and password for me with the press of a key... in fact, password managers like passwordsafe work outside of the web too, for virtually any windows application that takes a username and password...

now you may have noticed i only described separating your gmail identity from your other google web application accounts - this is because right now gmail seems to be by far the most interesting target for attack (everyone seems to want your contact list or your emails)... you could just as easily have a different google account for each and every web application you use without having to remember anything extra if you feel your data or identity in the other applications warrant similar protection through compartmentalization...

Saturday, September 22, 2007

look who's talking about whitelists now

thanks to an email from james manning i got to see a company called signacert congratulating themselves for being part of the future of security technology...

you see, signacert produces what could be classified as a whitelist type of technology and symantec's canadian VP and GM, michael murphy, was quoted in the media as saying that whitelisting is the future of security technology...

now before i go further i'll make an obligatory disclaimer (because people have gotten the wrong idea in the past) that i am not anti-whitelist, i use whitelisting techniques, i think they can be a worthy addition to a security strategy, but unlike the hypesters i don't sweep their limitations under the carpet...

when a representative from a company as well known and respected as symantec says the type of technology your company happens to produce is the future of security, i suppose it's only natural to want to congratulate yourselves for being on the vanguard - but don't be so quick to pat yourselves on the back that you choose to highlight the words of someone saying foolish things as you'll only wind up looking foolish yourselves...

you see, michael murphy made a grievous error in his representation of scale leading to media statements like this:
The number of malicious software attacks, including viruses, Trojans, worms and spam, is rising exponentially, dwarfing the number of new benevolent programs being developed, making it increasingly difficult for security firms to keep up.
and this:
With more than 600,000 attacks catalogued – 212,000 of them added since January of this year – “we’re approaching a tipping point,” where there just won’t be room in antivirus databases for all of them, Murphy said. But legitimate applications are about the same in number as they were when only about 15,000 attacks had been documented.
that the signacert blogger wyatt compounded by characterizing the blacklist problem as infinite and the whitelist problem as finite... you cannot favourably compare the scale of the set of all known good programs to that of the set of all known bad programs unless your only intention is to say 'my database is bigger'... the set of good programs is orders of magnitude larger and growing faster than the set of bad programs, a fact that researchers from at least one whitelist vendor apparently concede...

if you're going to use the argument of scale against traditional blacklists then you cannot present a centralized whitelist as a viable alternative... the only conventional whitelist whose scale is more manageable than traditional blacklists is the one where the user him/herself decides what goes on the list (with all the potential for wrong decisions and the security implications thereof)... with whitelists you can have a manageable scale OR accuracy enough to protect users from their own bad decisions, but you can't have both...

this is something i would kinda hope the folks at signacert would already know, and i definitely expected the folks at symantec would know this - but then again, considering their CEO made the ridiculous claim that the problem of worms and viruses was solved, perhaps i should have known better than to expect that from them... that or i should just know better than to listen to people in non-technical positions talking about technical things...

Saturday, September 15, 2007

the rumours of av's demise are greatly exaggerated

are you as sick of hearing about how 'anti-virus is dead' as i am? wow, what a worn out, tired, washed up meme... maybe now that amrit thinks stand alone av has actually finally died there will be a little less av is dead noise coming from his corner...

this isn't a freak out, though, just a reminder that what i said last year is still true - so long as people still want best-of-breed there will still be a market for stand alone av...

just because gartner changed the way they model the playing field (a necessity given the evolution, not death, of av), and just because vendors are gradually making the components of their security suites play nicer together (imagine that, they're actually managing to improve their products), doesn't mean stand alone av is going anywhere - and thinking it does probably qualifies as some sort of confirmation bias... personally, i'm going to wait for the fat lady to sing before i say it's dead... so long as i can still get an up to date and supported stand alone av app, stand alone av ain't dead...

Thursday, September 13, 2007

anti-virus as a commodity

i was reading the daily incite yesterday (as i tend to do) and i noticed one of the items was about anti-virus... it had an element that was pretty usual fare from mike rothman in that he talked about how this or that just reinforces his point that anti-virus has become a commodity - and i don't necessarily disagree with him, in fact i think i've said things that were more or less in line with that in the past...

however, as i was reading this particular instance i realized that there was a fundamental assumption to the idea that anti-virus is a commodity - the assumption that when it comes to choosing an anti-virus all malware is created more or less equal - and i began to wonder if that was really a justified assumption to make...

this may seem like nothing more than playing devil's advocate but humour me... let's look at what i think is a fairly typical thought pattern for calling av a commodity (from mike's post):
The one thing I come away with is that all the products are decent, thus I'm going to state the obvious. AV (and other malware defense) suites are true commodities. All stop viruses and other malware attacks.
so my question is what if we stopped treating the threats anti-virus deals with as one big amorphous mass but instead looked at the various subsets of malware and more specifically, drew a distinction between what is new and what is not... would av look like a commodity then? is the commoditization perspective born of an oversimplification of the problem? if we started paying specific attention to performance with new malware, wouldn't that provide a basis for vendors to differentiate themselves from the competition in a truly meaningful way? the retrospective testing at av-comparatives.org seems to show some significant variation in performance between the different products available so it certainly seems that if you drill down into the problem space that anti-malware products are supposed to address things can look a lot different than they do from a bird's eye view...

this isn't the only example of things looking different when you start considering the details... a few days earlier marcin wielgoszewski posted this question about best of breed vs bundles... i have to admit if i were confronted with this question framed the way it was there i might actually go with bundles, but that really says more about the power of framing than it does about the efficacy of bundles... this is actually something that builds on the product class X is a commodity result from before because it only considers the presence of various broad classes of security technologies in the bundles and not the specific underlying properties of each implementation... if you were to again dig deeper into what the capabilities of the products are and evaluate what kind of coverage you get against the types of threat agents you're trying to defend against you're going to wind up not only with a much more granular picture but one that could easily lead to a different bundle selection... in fact, i feel rather confident that if you dug deep enough you might even see a picture where no bundle gave you satisfactory coverage... of course then you'd have to decide whether or not that level of granularity is worth it but that's another analysis entirely...

not that any of this is to say that anti-virus is not a commodity, i still think there's a level of abstraction or frame of reference where that's a perfectly valid thing to say - but it's not the only frame of reference... the more general you go the more true it becomes, but at the same time the more details get glossed over (and the devil is in the details)... i think it's important sometimes to question the assumptions that bind us to a particular frame of reference so as to remind ourselves that there are others out there that may be equally good or possibly even better depending on the circumstances...

Sunday, September 09, 2007

spyware terminator forum compromised

are you like me, folks? does hearing about security site after security site being compromised make you more and more numb to the whole thing? i know i'm starting to feel desensitized...

isn't it weird that in trying to raise awareness for something important you can actually wind up doing the opposite in the long run...

anyways, luke tan pointed me towards these two threads about the spyware terminator forum being compromised (the second one is on the spyware terminator forum, by the way)...

now, maybe it's just me, but it seems to me that if you're going to run a security forum you might want to follow some basic security best practices and make sure you keep your software up to date!... i mean, come on - even barring incidents like this, not following security best practices when you're supposed to know better teaches those who don't know better bad security habits...

then again, when incidents like this happen you serve as an object lesson to your users for what NOT to do... unfortunately it's an object lesson that has the potential to put those very same users in harm's way... do you think they visited the forum with the same precautions in place that they'd use when visiting a suspect site in order to analyze it? probably not...

that is, perhaps, another lesson users could learn from this sort of incident... although you might trust a given site's administrators not to do anything malicious with their site, you should never trust them not to make mistakes that would allow 3rd parties to do malicious things with their site, nor should you trust that the software the site runs on won't allow the same thing regardless of mistakes made or not made by administrators... make sure you have some kind of protection when visiting any site... this is probably one of the better arguments for always browsing from within a sandbox, whether a full virtual machine like the vmware browser appliance or an application sandbox product like sandboxie, so that possible malware intrusions as a result of visiting a supposedly safe site can be contained... there really isn't anyplace on the internet that is perfectly safe, you need some kind of protection in place at all times, for all sites...

and if you're a security site (or any other kind of site, actually) administrator that hasn't been hit yet, don't be the next one to get caught... please, think of the users... also, i might run out of ways to use it as an object lesson... maybe...

Tuesday, September 04, 2007

file infecting viruses vs digital signatures

vesselin's comments to my previous article inspired me to consider actual attack scenarios rather than just weaknesses in a proposed system so i've turned the title of this entry around to indicate more focus on the threats rather than the vulnerabilities...

in one of the responses to the comments i posited a scenario where a malicious entity compromises a legitimate software vendor and infects their software in such a way that they distribute infected programs with valid digital signatures and which can in turn sign executables they infect... today nishad herath posted a similar scenario over at the mcafee avert blog so at least i'm not the only one who thought of this possibility...

but you know what, that's actually a rather complicated scenario so i got to thinking maybe there's a simpler one... then it dawned on me: the premise for this protective technique is to manage the integrity of files and the assumption is that you can't infect files without changing them and thus affecting their integrity... it turns out that this assumption is wrong - certain types of companion viruses can infect a host program without modifying it at all... so if a malicious entity were to get a certificate they could easily sign their companion virus with it and so long as no one figured out the entity was signing malicious code their certificate would never be revoked and the virus could spread unhindered... and in joanna's world where digital signatures replace all the tricks that are used to detect the presence of viruses there would be nothing to alert the general public that the code was malicious and the certificate should be revoked...

ouch, that seems pretty damning, doesn't it... but you know what, companion viruses were always pretty obscure... it wasn't a particularly popular strategy in part because the extra files gave it away to those who were looking for extra files (oh, another one of those nasty virus detection tricks joanna thinks we can do without)... ok, so then how about a virus that inserts the host program into a copy of itself, rather than itself into the host program (the so-called amoeba infection technique) and then signs this new copy with the aforementioned maliciously obtained certificate? once again, a digital signature based whitelist isn't going to stop this from happening...

now that takes care of one of the aspects that kept companion viruses obscure but it doesn't really improve on the obscurity itself as this technique is even more obscure than companion infection... if you've followed me to this stage perhaps something else has dawned on you - if a virus can sign a copy of itself with a host program inside of it, why shouldn't it be able to sign a host program that it had inserted a copy of itself into using a completely conventional infection technique and the malicious entity's certificate? the answer is there is no reason it can't...

so it would seem that a digital signature based whitelist where vendors sign their own programs (effectively vouching for the safety of their own code) wouldn't really prevent file infecting viruses at all if that were the only thing the world were using... you still need all sorts of tricks to figure out when a vendor's certificate can't be trusted anymore, which points back to a fundamental problem with this kind of self-signing system - it correlates identity (which is the only thing a certificate authority can test) with trustworthiness in spite of the fact that they don't actually have anything to do with each other... just because the vendor's front man is who he says he is and hasn't done anything bad in the past (that anyone knows of) doesn't mean the vendor itself isn't a malicious entity... currently, standard certificates (like the ones used for websites) have become so easy for anyone to get that they've become meaningless, and this led to the creation of extended validation certificates, which simply involve a more in-depth investigation of the entity - an investigation which has no bearing on what the entity will do after getting the certificate... i can see no way for a digital signature system for code to work any differently than the one used for websites so the same problem will apply; and then even if we somehow figure out that the entity cannot be trusted, their virus(es) will continue to spread until a revocation is issued for their certificate and that information trickles down to all the affected systems...

trusting the vendor (or whatever else you want to call the software provider) to attest to the trustworthiness of their own software just seems far too naive from a security standpoint, which is why i originally didn't even consider it to be the model joanna had been talking about... a system where independent reviewers checked programs for malicious code before signing them (essentially certifying programs rather than program providers) seemed to be a safer solution, though it's got the same scaling problems that conventional centrally managed whitelists have... a system that certifies programs rather than program providers would be less vulnerable to the scenarios mentioned here (i think only a variation on the first one should be able to allow viruses to still spread) but either way, both options still allow viruses to operate if used on their own... at best (and by now this should sound familiar) digital signature based whitelists should be something we use with the more conventional tricks we're used to, not instead of them as joanna rutkowska would like you to believe...

Sunday, September 02, 2007

digital signatures vs file infecting viruses

so joanna rutkowska actually talks about things other than so-called rootkits... this time (i won't link to the article for known reasons) it's file infecting viruses...

from the article:
But could the industry have solved the problem of file infectors in an elegant, definite way? The answer is yes and we all know the solution – digital signatures for executable files. Right now, most of the executables (but unfortunately still not all) on the laptop I’m writing this text on are digitally signed. This includes programs from Microsoft, Adobe, Mozilla and even some open source ones like e.g. True Crypt.

With digital signatures we can "detect" any kind of executable modifications, starting form the simplest and ending with those most complex, metamorphic EPO infectors as presented e.g. by Z0mbie. All we need to do (or more precisely the OS needs to do) is to verify the signature of an executable before executing it.

I hear all the counter arguments: that many programs out there are still not digitally signed, that users are too stupid to decide which certificates to trust, that sometimes the bad guys might be able to obtain a legitimate certificate, etc...

But all those minor problems can be solved and probably will eventually be solved in the coming years. Moreover, solving all those problems will probably cost much less then all the research on file infectors cost over the last 20 year. But that also means no money for the A/V vendors.
first things first - this is essentially a whitelist technique (with the added bonus that the cryptographic component allows the proof of whitelist membership to be shipped with the file instead of requiring a lookup in a very big list) with all the associated fundamental problems... think the problem of signing all good programs is small and will probably be solved? maybe for suitably large values of small... if you're going to focus on identifying good files instead of bad ones you have to keep in mind that the good files outnumber the bad by orders of magnitude and their number grows at an even faster rate... conceptually signing all good programs is simple, but in practice it's very, very hard...
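to make the structural difference concrete (not the scaling problem itself), here's a minimal sketch in python... it assumes the third-party 'cryptography' package and a toy ed25519 key pair, and it's an illustration only - not a real whitelisting product...

```python
# a minimal sketch contrasting a plain hash whitelist with a signature-based
# scheme where the proof of "membership" ships with the file... toy key pair,
# toy data - for illustration only

import hashlib
from cryptography.hazmat.primitives.asymmetric import ed25519
from cryptography.exceptions import InvalidSignature

program = b"...executable bytes..."

# approach 1: plain whitelist - every good program's hash has to sit in a
# (potentially enormous) list that has to be distributed and kept current
whitelist = {hashlib.sha256(program).hexdigest()}
allowed = hashlib.sha256(program).hexdigest() in whitelist

# approach 2: digital signature - the proof travels with the file and the
# verifier only needs the signer's public key
signer_key = ed25519.Ed25519PrivateKey.generate()  # held by the vendor
signature = signer_key.sign(program)                # shipped alongside the file
public_key = signer_key.public_key()                # known to the verifier

try:
    public_key.verify(signature, program)
    allowed = True
except InvalidSignature:
    allowed = False
```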

but let's assume we do solve that problem... so if the file isn't signed then it doesn't run and if the file's signature is invalid then it doesn't run... the presence of a valid signature is assumed to mean that the file is a) not bad and b) hasn't had anything bad put into it after signing, but is that a valid assumption? given that mobile spyware can get digitally signed by symbian, i think not, at least not for the first part of the assumption... currently digital signatures like the ones joanna holds up as examples are meant to prove authenticity, not safety... putting the onus on the signatories to determine whether the code they're signing is safe doesn't solve any malware problem, it just offloads it onto the signatory... this is also not a small problem: distinguishing good from bad is and always has been the problem and offloading it onto someone else doesn't make it any easier to solve...

the second part of the assumption, that a verified signature implies nothing bad has been put into the file, may well be true, assuming that the verification system itself hasn't been compromised... the digital signature proves authenticity and one of the prerequisites for authenticity is integrity and that's really the underlying ingredient here - managing system integrity... any application whitelist worth its salt already keeps track of the integrity of the executables on the whitelist, otherwise it would be trivial to fool it by simply replacing a whitelisted application with a piece of malware with the same filename... but as yisrael radai showed in his paper "integrity checking for anti-viral purposes: theory and practice" (sorry, no suitable non-vx links at this time), systems that detect changes to the integrity of files are subject to attack and one based on digital signatures is no different... the signing key could be stolen (there's been malware designed to steal cryptographic keys in the past) and then generating valid signatures for infected files would be trivial, the key used to verify the signatures could be altered (either on disk or dynamically in memory) by a malicious process that has already been signed, or if the system allows adding new keys then one could be added maliciously that would allow the files to be modified (infected) and then resigned with the new key to trick the system into thinking the file's integrity is intact... in fact, taking a cue from the developers of stealth technology under windows, one could simply change the result returned by the signature verification function... in order to be immune from attack, an integrity checking system has to be offline and out of reach of the attacker, and that's not compatible with a system that checks integrity in real-time to prevent modified files from running...
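for what it's worth, here's a minimal sketch of the underlying integrity-management idea - a baseline of file hashes taken at one point and checked later... the paths are hypothetical, and the point made above still applies: if the baseline, the checking code, or the result it returns live on the same system the attacker controls then they can all be tampered with, which is why a checker like this really wants to run from offline, read-only media...

```python
# a minimal sketch of an integrity baseline/check cycle using sha-256...
# to be meaningful, the baseline file and this script itself would have to be
# kept somewhere the attacker can't reach, for the reasons discussed above

import hashlib
import json
import os

def hash_file(path):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def build_baseline(root, baseline_path):
    # record the current hash of every file under root
    baseline = {}
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            baseline[path] = hash_file(path)
    with open(baseline_path, "w") as f:
        json.dump(baseline, f, indent=2)

def check_against_baseline(baseline_path):
    # report any file whose current hash no longer matches the baseline
    with open(baseline_path) as f:
        baseline = json.load(f)
    for path, expected in baseline.items():
        actual = hash_file(path) if os.path.exists(path) else None
        if actual != expected:
            print(f"integrity change detected: {path}")
```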

of course there are other problems too... it's not just deciding what to sign, ensuring the signatures are themselves trustworthy, and finding the resources to sign every good program in existence... there's also the classic whitelist problem of deciding what to do in an environment where programs are being created (or even what's a program in the first place)... are we going to digitally sign word documents? yes? ok, and will that stop macro virus infection? no of course not... there are plenty of macro viruses that infect a document when the document is saved - a point at which a new digital signature would have to be created anyways... then when the person we send the document to opens it the virus runs and then proceeds to infect documents that person creates or modifies (and then signs) and so on and so on...

again, from the article:
So, do I want to say that all those years of A/V research on detecting file infections was a waste time? I’m afraid that is exactly what I want to say here. This is an example of how the security industry took a wrong path, the path that never could lead to an effective and elegant solution. This is an example of how people decided to employ tricks, instead looking for generic, simple and robust solutions.
unfortunately a digital signature based whitelist is no elegant solution either... whitelisting for anti-viral purposes dates back at least as far as thunderbyte anti-virus and there have always been ways to manually check the integrity of transferred files to make sure they haven't been altered from what the original vendor was distributing by using crc's, hashes, and even digital signatures... a digital signature based whitelist makes certain aspects of usage a little more convenient, but it doesn't mitigate the inherent problems of a whitelist...

joanna may have wanted to use this to demonstrate the way security solutions ought to be in an ideal world, but the world is not ideal, and the virus problem as well as the many varied ways of addressing it are not as simple as she portrays them... thus her example of security gone wrong has no legs... in the real world there is a counter-measure for every protective measure, and elegance (subjective as it is) cannot be the basis upon which the measures we take are judged...

Saturday, August 25, 2007

a little egg on alwil's face

thanks to luke tan for once again drawing my attention to a compromised website, only this time it was not a blogger blog, it was the avast web forum serving up malware with an exploit to get it automagically installed...

wilders security has some good initial details and it seems, to the forum admin's credit (as well as those who reported the problem in the first place), it was caught fast and the forum shut down for maintenance to limit the damage that could be done to the public at large...

the forum is back up now and apparently free from the malware that was previously being served (and apparently breaking core functionality of the forum)... let's hope it stays that way - but for those who visited while it was behaving strangely, you might want to do a full scan of your entire system just to be on the safe side (although since avast itself, along with a number of other anti-malware products, was able to detect and block the malware i suspect most of the forum's users would not be affected)...

Friday, August 24, 2007

threat centric reality check

readers of richard bejtlich's tao security blog are no doubt familiar with a concept he frequently promotes called threat centric security... this is a security paradigm that tries to eliminate threats as opposed to vulnerability centric security which aims to eliminate vulnerabilities...

he's mentioned it in a number of posts and i've often gotten the feeling that there was something that wasn't quite right but i could never really put my finger on it until i read this article on bad guys last week where he said:
It's important to remember that we're fighting people, not code. We can take away their sticks but they will find another to beat us senseless. An exploit or malware is a tool; a person is a threat.
when i read that it suddenly became crystal clear to me: the underlying problem i had with what he was saying about threat centric security was rooted in his classifications...

on the one hand i can see where he's coming from; just about every negative security consequence we can think of can be traced back to a person or group of people... whatever the attack, there's always a person who initiated it and as the saying goes kill the head and the body dies... this is why one would say malware is just a tool, because one sees it as nothing more than an extension of the attacker and hypothesizes that if you take the attacker out of the equation the malware will become irrelevant...

there are a number of problems with this and the first is that malware is more than just a tool... a hammer is a tool and a person has to physically swing it each and every time s/he wants to strike a nail... malware, on the other hand, is an agent and has the ability to be far more autonomous... the most fundamental benefit an attacker receives by employing malware is automation; s/he may only need to press a button to start the malware doing a complex and time consuming set of tasks and it's not going to stop just because the attacker has been put in jail - it doesn't need the attacker at that point, it will just keep going until either its programming or its controller tells it to stop...

generally speaking viruses and worms have neither a built-in stop condition nor a controller interface that would allow someone to tell them to stop, so putting the person responsible in jail isn't going to have any effect on the spread of that virus or worm... this is part of the reason why old viruses never die and why there are still people out there trying to remove 15 year old boot sector viruses... once a virus or worm starts self-replicating in the wild, the person responsible is already out of the picture...

in spite of the fact that old viruses never die, some would likely argue that viruses aren't really a big issue anymore... fine, then let's talk about what has replaced them as the scourge of the internet - botnets... botnets do have a controller interface, but what good is that if the person doing the controlling is put in jail? maybe part of his/her sentence could be to instruct the bot software to uninstall itself from all the victim machines but that assumes that someone else hasn't already taken control of the botnet... just as thugs employed by a crime boss find a new crime boss to work for when the existing one is busted by the cops, so too can an existing botnet be used by a new crook when the old one is taken out of the picture... in this case it's not kill the head and the body dies, it's kill the head and a new head will come along and take its place... ultimately the same is true of virtually all non-replicative malware in some sense - take one attacker out of action and another one steps in and continues using the malware... this is why it's important to consider malware as more than just a tool or extension of the attacker, it's an agent operating on behalf of an attacker and once it's been put into action taking the person responsible out of action doesn't change what it can do...

that leads us to the second problem - the conceit that fighting people can replace fighting code... at the end of the day the threat centric security that focuses on people is called law enforcement because the people in question are criminals... we all know how effective law enforcement has been at eliminating crime in the physical world so it shouldn't be too much of a surprise to realize that it will probably be no more effective at eliminating cyber-crime... a sword on the battlefield stops being able to cause you harm only when there's no one left to wield it; and so too with non-replicative malware, it only stops being able to cause harm when there are no more cyber-criminals left to employ it - and since there's a seemingly endless supply of criminals the malware will continue to be capable of causing harm in spite of our efforts to put the criminals behind bars...

finally, on a historical note, in case anyone is thinking that the threat centric security that richard bejtlich talks about is something we need to start doing, it's actually been going on for rather a long time now... remember christopher pile aka the black baron? how about david l. smith aka vicodines? then there's mike calce aka mafiaboy and kim vanvaeck aka gigabyte... even robert morris jr. faced legal repercussions for the morris worm... that's going back nearly 20 years and it's just the tip of the iceberg as far as sheer numbers go...

don't get me wrong, i'm not knocking threat centric security, i think it's important, but there's more to it than just fighting people... malware in general has nothing to do with vulnerabilities so anti-malware security can't be said to fall under the umbrella of vulnerability centric security... even encarta says that things can be threats too... malware is a threat agent (or threat as those who prefer more ambiguous terms would say), it may not be in charge but it is a thing that acts to cause harm, and taking out those instances that come your way qualifies as a type of threat centric security...

Thursday, August 23, 2007

who doesn't love bacn?

seems a new meme was born recently involving an email classification called bacn... it seems it's become important to classify notification and other emails that you actually want to receive but don't have time to look at right now...

it also seems that some people see a problem here... frankly, i don't... bacn isn't spam, it isn't anything like spam... it's sent by cooperative parties more or less at your request - if you don't want to receive it anymore you can ask for it to stop and it should actually stop... and since it isn't being sent maliciously it's not going to mutate and evolve rapidly in order to avoid filters that move it into folders specifically made to hold it and organize it and keep it from making your inbox unmanageable...

basically, everything we learned not to do with spam actually works on bacn because bacners (got a better term for bacn senders?) are cooperative rather than malicious...

what is a script kiddie?

script kiddies were people whose ability to attack digital resources came entirely from the pre-made scripts they found and shared with each other...

they were considered one of the lowest forms of attacker (even amongst other attackers) due to the fact that they showed no real aptitude for anything except clicking, copying, and pasting... as such the term 'script kiddie' was universally considered an insult...

before scripting became widely adopted as a way to create malware this class of attacker would have been the type to hex edit (with difficulty, i'm sure) other people's viruses in order to change text strings inside and pretend like they'd made something new or later to use virus creation kits to pretend basically the same thing...

scripting made editing existing malware easy because the malware didn't need to be compiled/assembled into hard to read machine code in order to run; it remained in source code form and could be opened and modified using nothing more than notepad... some of the more creative script kiddies could even cobble together something sort of new by cut-n-pasting parts of other malware scripts together... if any were to rise above this stage they might be recognized as being more than just a script kiddie, but most were too clueless to realize that they were regarded derisively even by their would-be peers...

back to index

the beneficiaries of malware kits

this is a little on the stale side (sometimes things just take a while to get done) but i was reading an article on dancho danchev's blog about the shark 2 diy malware kit and something struck me...

it's clear that malware kits benefit the less technically sophisticated attackers by making it easy for many people to create many new pieces of malware that no one has ever seen before (assuming no one else chose exactly the same options, which actually seems unlikely)... it's also clear that enabling these profit driven versions of script kiddies can serve to draw attention away from the activities of the more sophisticated cyber criminals but would you believe anti-malware companies can benefit too?

if you look at how well malware creation kits have fared in the past it becomes clear that malware produced by a kit doesn't provide much of a challenge... this isn't because the malware doesn't have great features that would serve conventional malware well in the wild, it's because it came from a kit and the kit itself became known... as an example, back in the day i spent some time (a couple weeks maybe?) observing a group of self-proclaimed virus writers whose entire stock of viruses were created using the nrlg virus creation kit - each and every one of them detectable, but not as distinct and individual viruses like you might expect, rather as nrlg generated viruses... the thing about generated malware is once you know the generator you can predict and recognize all of its output so that even if some twit goes to the trouble of creating 5,000 vcl/nrlg/whatever variants it poses no real problem for the av vendors...

now the newer kits like this shark 2 diy kit are professionally made and updated frequently and you may think that would make things harder for the anti-malware vendors, what with there being multiple versions of the kit to have to deal with... consider how many different pieces of malware could be generated with all those different versions of the kit, however, and you'll soon see that adding detection for the output of the different versions of the kit is faster/easier than analyzing each piece of output and adding detection for it individually...
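as a toy illustration of the 'know the generator, know its output' idea, here's a minimal sketch of a pattern-based scanner in python... the byte patterns are entirely invented and don't correspond to any real kit - the point is just that a handful of patterns covering a generator's fixed code can match every variant that generator will ever produce...

```python
# a minimal sketch of generic detection of kit-generated output: one pattern
# per known generator (or generator version) rather than one signature per
# individual sample... the patterns below are invented placeholders and do
# not correspond to any real malware kit

KIT_PATTERNS = {
    "hypothetical_kit_v1": bytes.fromhex("deadbeef00aa55"),
    "hypothetical_kit_v2": bytes.fromhex("deadbeef01aa55"),
}

def identify_kit(sample_bytes):
    """return the name of the kit whose fixed code appears in the sample, if any."""
    for kit_name, pattern in KIT_PATTERNS.items():
        if pattern in sample_bytes:
            return kit_name
    return None

# no matter how many thousands of variants someone cranks out with a given
# kit, everything carrying the generator's fixed code matches the same pattern
sample = b"\x90\x90" + bytes.fromhex("deadbeef01aa55") + b"\xcc" * 16
print(identify_kit(sample))  # -> hypothetical_kit_v2
```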

so not only do kits optimize ease of malware creation, they optimize ease of mitigation as well... let this be a lesson to all you less skilled malware profiteers out there - if you can't make malware yourself then go find something else to do because all the stealth and anti-debugging tricks in the world aren't going to help a piece of malware generated by a known algorithm... in the end you're probably just being used as a smokescreen by people with more technical expertise than you...

Wednesday, August 15, 2007

what is comment spam?

comment spam is spam that appears in blog comments instead of in email...

traditionally the idea behind comment spam was to add links to a spamvertised (advertised by spam) site to various other sites so that a) people would follow those links and visit the spamvertised site and b) search engines like google would rank the spamvertised site higher since there were more links to it and so it would be more likely to show up on the first page of search results (basically a kind of search engine optimization scheme)...

different blogs have different means of coping with comment spam: some have a CAPTCHA, some require a moderator to approve the comment before it can appear on the blog, some use moderation after the fact (so that for a while the spam will actually appear there), some require the commenter to create an account, and some even have advanced content or IP-based filtering... not all blog owners implement anti-spam functionality for their comments, however, and no anti-spam technique is perfect so in some cases the spam still gets through...
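as an aside, here's a minimal sketch of what a crude content-based comment filter (one of the coping mechanisms mentioned above) might look like in python... the phrase list and link threshold are arbitrary placeholders and real filters are considerably more sophisticated...

```python
import re

# invented placeholder heuristics - real comment spam filters are far more
# sophisticated than a phrase list and a link count
SUSPECT_PHRASES = ["cheap meds", "casino bonus", "work from home"]
MAX_LINKS = 3

def looks_like_comment_spam(comment_text):
    # flag comments that are overloaded with links or contain known spam phrases
    links = re.findall(r"https?://", comment_text, flags=re.IGNORECASE)
    if len(links) > MAX_LINKS:
        return True
    lowered = comment_text.lower()
    return any(phrase in lowered for phrase in SUSPECT_PHRASES)

print(looks_like_comment_spam("great article! cheap meds at http://example.com"))  # True
print(looks_like_comment_spam("i think your point about framing is spot on"))      # False
```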

to combat the SEO effect some (possibly most) blog platforms implemented a technique by which links in comments would be marked (typically with the rel="nofollow" attribute) in such a way that search engines wouldn't count them regardless of whether they were good links or bad links...
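here's a minimal sketch of that link-devaluing idea in python... it uses a deliberately simplified regex to tag anchors in comment markup - a real blog platform would use a proper html parser/sanitizer rather than a regex...

```python
import re

def add_nofollow(comment_html):
    # naively tags every <a ...> in the comment with rel="nofollow" so search
    # engines won't count the link; simplified for illustration only
    return re.sub(r"<a\s+", '<a rel="nofollow" ', comment_html, flags=re.IGNORECASE)

print(add_nofollow('nice post! <a href="http://example.com/spamvertised">click here</a>'))
# -> nice post! <a rel="nofollow" href="http://example.com/spamvertised">click here</a>
```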

that didn't stop comment spam either, of course - people can still follow the links and not all comment spam even has links anymore... as such blog owners still often need to use techniques like those described above to combat comment spam...

back to index

what are splogs?

a splog is a form of web-based spam in the form of a blog...

unlike comment spam, in the case of splogs the entire blog in question is spam, not just a small part of it... because blogs are so easy to set up and publish content on, it's become a popular way of spamvertising a site...

to combat this type of abuse of service, blogging service providers typically employ anti-spam techniques like CAPTCHAs to prevent the automated creation of splogs but CAPTCHAs are becoming less effective as techniques for defeating them are developed... on top of that, CAPTCHAs don't prevent real live humans from setting up splogs... because of this, blogging service providers try to prune out splogs manually when they become aware of them...

back to index