Wednesday, December 31, 2008

the MD5/rogue certificate attack

i'm not going to bother pointing to all the many good stories out there describing the details of how a valid ssl certificate was faked by mounting a chosen-prefix collision attack on the MD5 hash using a legitimately purchased certificate as the starting point...

i'm just going to point out that, while some people think MD5 was broken in 2004, the fact of the matter is its use in new systems was deprecated back in 1995, and existing systems should have been moving away from it with all possible haste...

apparently there are ways to make this specific attack impossible without even changing the hash algorithm used (essentially salting the message) and that's certainly a good idea - but still there's no good reason for anything to be using MD5 at this stage of the game... there's been enough time for any legacy system that used it to have been reworked or replaced, and while we should probably start moving away from SHA1 as well (at least to SHA256 until the new SHA3 standard is selected), we should all have moved away from MD5 by now and if you haven't then shame on you...
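
just to make the 'move away' advice concrete, here's a minimal python sketch (using the standard hashlib module) of what swapping MD5 for SHA256 looks like, plus the 'salting' idea mentioned above expressed as a random, unpredictable prefix - the data and the salt here are purely illustrative placeholders, not how any real certificate authority actually builds certificates...

```python
import hashlib
import os

# placeholder for whatever bytes actually get hashed and signed
data = b"to-be-signed certificate bytes would go here"

# deprecated: md5 - collisions are now practical, so don't use this anywhere new
weak_digest = hashlib.md5(data).hexdigest()

# better: sha256 (at least until the new sha3 standard is selected)
strong_digest = hashlib.sha256(data).hexdigest()

# the 'salting' idea: prefix the message with something the attacker can't
# predict (e.g. a random serial number) so colliding content can't be
# precomputed ahead of time
salt = os.urandom(16)
salted_digest = hashlib.sha256(salt + data).hexdigest()

print(weak_digest, strong_digest, salted_digest, sep="\n")
```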

Saturday, December 20, 2008

returnil give-away at ghacks

i don't normally do posts like this, but in the interests of raising awareness of alternative anti-malware techniques there's a give-away going on at ghacks.net today and they're giving away the premium version of returnil...

returnil is (as far as i know, i haven't tried it yet myself) an instant system recovery sort of sandbox... the free version at least will revert any changes that were made (good or bad) the next time you reboot...

i know that sandboxie was mentioned on the network security podcast and i've mentioned it a time or 2 as well... i think sandboxes are an invaluable security measure and i encourage people to try them out and this returnil give-away is a chance for people to try out the premium version of a sandboxing approach that has a lot of proponents over at wilderssecurity.com...

Thursday, December 18, 2008

the post with no name

well, here i am on the first day of my christmas vacation, trying to do stuff that i didn't have time to do while i was still working...

like blog posts about the distinction between static and dynamic heuristics for martin mckeay (even though that distinction was incidental to the issue i was bringing up)...

since the holiday season is upon us, i thought maybe it might be nice to clarify to martin (and rich mogull before him, and others to whom this will apply equally well) that i wasn't trying to be critical of him personally or attack him... i try not to attack people just for doing/saying something that is technically wrong (ethically/morally wrong, sure, but not technically wrong), everyone makes technical errors... on the other hand, although i don't use people who make technical errors as targets, you folks do make wonderful examples - and examples are an important part of the learning process that (frankly) a lot of people could benefit from...

so, gentle readers who may at some point find themselves the subject of one of my posts, try not to take the fact that i don't usually sugar-coat things too personally - it's just my way...

what are dynamic heuristics?

dynamic heuristics are a branch of heuristic techniques that try to determine if a suspect program is malicious by running the program in a simulated environment and trying to detect the active (dynamic) malicious behaviour(s)...

there are a number of ways to accomplish this, whether it be emulating the program until it reveals the de-obfuscated (and, ideally, previously known) version of itself, or running it in some sort of sandbox to catalog a broader range of its behaviours, looking for signs of malicious intent...

one of dynamic heuristics' strengths lies in being able to bypass many of the obfuscatory techniques that malware writers use to stymie static heuristic analysis as obfuscation has traditionally needed to be undone at run-time for the malware to operate... unfortunately a number of tricks have been developed by malware writers to try and combat these techniques and they generally exploit a more natural and general weakness involving the fact that for any non-trivial program there are multiple paths that program execution can take but any single invocation of the program will follow only one of those paths... program execution can take different paths (and thus produce different behaviours) depending on any number of different conditions present at run-time so the malicious behaviour a dynamic heuristic engine is looking for may not show up (either by chance or by design) during analysis...
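
to make the execution-path problem a little more concrete, here's a toy illustration (the trigger condition and the 'payload' are made-up placeholders, obviously not real malware) of how a single emulated run can simply miss the behaviour a dynamic heuristic engine is looking for...

```python
import datetime

def payload():
    # stand-in for the malicious behaviour a dynamic heuristic engine wants to see
    print("malicious behaviour would happen here")

def toy_sample():
    # execution only takes the 'bad' path under a particular run-time condition,
    # so a single emulated run on any other day looks completely benign
    if datetime.date.today().day == 13:
        payload()
    else:
        print("nothing to see here")

toy_sample()
```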

back to index

what are static heuristics?

static heuristics are a branch of heuristic techniques that try to determine if a suspect program is malicious by examining the structure and contents of the program in an inactive (static) state and trying to find (for example) code fragments that have been commonly used for malicious ends in the past...

this type of technique is especially dependent on having detailed knowledge not only of the contents of past malware but also of the contents of legitimate programs so as to avoid alerting on the presence of code that, though heavily used by malware, is also common in legitimate software...

one of the strengths of static heuristics is that, unlike dynamic heuristics, it is able to examine multiple possible program execution paths due to the fact that it's looking at all the contents of the program instead of just the code that would get executed during one particular invocation of the program... unfortunately, malware writers have developed a number of techniques to obfuscate their code in such a way as to prevent a heuristic engine from being able to see the actual code and thus preventing it from performing static analysis on that code...
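
for illustration, here's a crude sketch of what static heuristic scoring might look like - the byte fragments, api names, weights, and threshold are all invented for this example, real engines are considerably more sophisticated...

```python
# hypothetical code fragments 'commonly used for malicious ends' with made-up weights
SUSPICIOUS_FRAGMENTS = {
    b"\x60\xe8\x00\x00\x00\x00\x5d": 2,   # a call/pop get-pc trick often seen in packed code
    b"CreateRemoteThread": 3,             # api name frequently abused for code injection
    b"URLDownloadToFileA": 2,             # api name frequently abused by downloaders
}

def static_score(program_bytes: bytes) -> int:
    # add up the weights of every suspicious fragment found in the file
    return sum(weight for fragment, weight in SUSPICIOUS_FRAGMENTS.items()
               if fragment in program_bytes)

def looks_malicious(program_bytes: bytes, threshold: int = 4) -> bool:
    # only alert above a threshold, since individual fragments also show up
    # in perfectly legitimate software
    return static_score(program_bytes) >= threshold
```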

back to index

Monday, December 15, 2008

nuggets of misinformation

over the weekend martin mckeay published a post asking people what free av they used at home... the story is ordinary enough, i'm sure a lot of people out there have faced the problem of what anti-malware software to choose, whether a free one, one of the big-name for-fee ones, or none at all (and for the record, i'm not in any of those camps)... martin is a well-known security blogger and podcaster, he knows about a lot of security and privacy related subjects, but from this fairly informal posting i now know that martin does not know av...

what caught my eye about his post came near the end where martin pointed towards this proactive detection test report as showing how ineffective av really is... for everyone's benefit, tests of proactive protection capabilities are tests specifically designed to bypass the signature-based portion of an anti-malware product so as to test only the heuristic components... that one word - "proactive" - all on its own would tell someone familiar with this field that the test does not measure the overall effectiveness of products but rather just the effectiveness of a subset of the technologies in those products - and that word was right in the main heading for the report...

reading further (ie. reading the introduction) reveals that the subset of technologies tested is further constrained... the test measures the effectiveness of static heuristic techniques only, no dynamic heuristics, nothing involving run-time behavioural detection or anything like that... it should be clear that when you're only testing a small part of a product your results won't indicate its overall effectiveness...[EDIT dec. 16, 2008: turns out i read the intro wrong, however it's still only a test of the heuristic components of anti-virus products rather than of the entire products, and thus not a reflection of their overall effectiveness]

of course if you don't understand the terminology being used and only look at the numbers and the graphs then of course you might think this represents the overall effectiveness - that's probably why martin thinks the effectiveness of av is somewhere between 60% and 80% (not too different from the numbers on the report he points to) when the latest on-demand tests (which still don't include run-time behavioural detection, but do include a broader range of the detective capabilities of the products) performed by both av-comparatives.org and av-test.org place the effectiveness of most products well above 90%...

sadly of all the people who responded to his post, none of them seem to have noticed this interpretation error so far... i'm sure everyone has heard the idiom that there are lies, there are damn lies, and there are statistics... since numbers can be so misleading, it behooves one to familiarize oneself enough with a topic to at least properly interpret those numbers so that you can't be so easily fooled by them...

Sunday, December 14, 2008

does av really suck that badly?

while looking through my rss feeds today i saw this comment rich mogull posted about a week ago (don't ask why it took so long to reach my reader, i don't know)...

primarily it's his observation about malware, anti-malware, and the mac platform and community, but he ends the comment much more generally with this:
To be honest, I think desktop AV sucks in general and isn't nearly as effective as everyone would like us to think.

this is probably a common enough sentiment among the more technically savvy crowd... i wouldn't go so far as to say this is part of the anti-av movement, but rather a consequence of the mismatched expectations people have with regards to anti-virus software and the persistent mischaracterization of av as being solely about virus scanners...

i can understand where the opinion is coming from - if you look just at scanners, and more to the point if you look just at what populist media reports about scanners then the image you get is of something that fails a lot... but here's the main problem with this line of thinking (besides the issue of what av is, which i think i covered adequately before) - no preventative measure is an island complete unto itself...

as i mentioned in my post about the blacklist value proposition, the primary benefit of a scanner is to take care of the exceptions that aren't covered by other measures... scanners have never, ever been the sole preventative measure in play, they've always been complementing something else... even when the only technological measure present was a scanner, there were still procedural measures, there was still common sense, there was still (in the distant past at least) the relative disconnected nature of the computing ecosystem, etc... judging a scanner's effectiveness in isolation as though it were supposed to take care of the entire problem all by itself is like judging how well table salt satisfies your appetite...

the problem is that people think the scanner is supposed to take care of the threat all by itself, and they think that because av marketing departments have been feeding them that line of rubbish for something like two decades now and they aren't really taking many steps to correct the imbalance in the image they're creating and the mismatched expectations they're giving the public... this is why i often frown on marketing, why i've accused those who overuse the concept of protection of being snake-oil peddlers, and why i cringe when someone calls a set of security tools a solution...

the problem isn't the technology, the problem is what people understand (or fail to understand) about the technology, and by extension the thing that causes the misunderstanding... as mark linton points out, there is a definite false sense of security being fostered here, and as cd-man suggests in pointing to that same post, that false sense of security is causing harm - possibly even more harm than a scanner can make up for... av companies need to wake up and realize that by allowing their own marketing departments to subtly lie to the public they're going to be shooting themselves in the foot in the long run... by operating in bad faith they are increasingly losing the faith of consumers - and not only will that accelerate when the idea that av sucks makes it into mainstream public consciousness, but it is also very hard to win back once lost...

but back to rich's opinion - i don't think he's entirely wrong, av isn't nearly as good as it's often made out to be, but rich and probably a lot of other people out there are being so profoundly affected by the reality-distortion field put up by av marketing that, when they finally start to see a glimmer of reality through a thin spot in the fog bank, they see a stark contrast between it and the marketing message, start rejecting everything in that message (even though the best lies are those that are hidden among truths), and come to equally imbalanced conclusions... the opinion is one that smacks of not seeing the whole picture... as i keep saying, what most people call av (the scanner) should be part of a larger whole, not abstracted out on its own... further, it's not everyone who wants you to believe av is so good, it's really just marketing (stop listening to marketing; seriously, don't even bother rejecting what they're saying, just don't let them affect your thinking at all) and the corporate big-wigs who care more about market share than they do about actually contributing to their customers' security and well-being (ahem john thompson ahem)... there are plenty of honest, ethical, technical people in the anti-malware industry trying to spread a more balanced message, but they may not be as easy to find as the pitch on the outside of the product's packaging...

Wednesday, December 03, 2008

why perform virustotal-based av tests?

probably most people with any familiarity with the anti-malware field have heard of virustotal.com - for those that haven't, it's an online service that runs the commandline version of a collection of av scanners against submitted samples in order to perform static analysis on them and determine if they're known malware (or perhaps close enough to known malware to be picked up by static heuristics)...

as has been well stated by others - virustotal is for testing samples not for testing anti-malware software... unfortunately that doesn't seem to stop everyone and their grandmother (apparently) from performing comparative and/or effectiveness testing on anti-virus products using the virustotal service...

there are a number of reasons why you shouldn't perform av tests using virustotal, including:
  • those of us who know better will laugh at you - no, seriously, we will
  • virustotal doesn't (can't) include the full detective capabilities of the av products they're using and therefore tests based on their service misrepresent the effectiveness of those products
  • even the people who run virustotal say such testing methodologies are bogus right on their own site
  • retrospective testing already provides results on the effectiveness of av products against new/unknown malware (and it already makes av look pretty bad)


those seem like pretty compelling reasons not to do this kind of testing and yet the practice persists... here are a couple reasons why people might still do it regardless of the reasons not to:
  • it costs too much to do things the right way (proper testing takes a lot of work, time, and resources)
  • people are lazy and virustotal can appear to be a convenient short-cut to getting things done, even though it's really just a short-cut to irrelevance
  • some people seem to be genuinely ignorant of the irrevocable problems with test designs that use virustotal to compare scanners or gauge anti-virus technology
  • related to ignorance but on a grander scale, some people may simply not be capable of designing a scanner test that even flirts with validity, nevermind one that is actually somewhat valid
  • there are some pervasive misconceptions about anti-virus products/technology/vendors/industry that some people have an irrational need to affirm


of course that's just for individual people, when a security company (or worse, an anti-malware company) uses virustotal for quick and dirty av testing then it raises serious questions about the competency of that company's staff... although i have hinted before at the connection between innovation and not being constrained by the 'this is the way we've always done things' mentality, that isn't a license for the security industry to throw scientific rigor out the door...

Tuesday, December 02, 2008

lifehacker's mac anti-virus poll

if there's one thing that never fails to disappoint me it's the failure of the wisdom of crowds principle to work when it comes to malware-related topics, and this ask-the-reader style post on lifehacker lives down to that standard quite well...

you've got some people like astrosmash saying "There are no OS X viruses" - which ignores both the fact that there are in fact os x viruses (osx/leap.a is an overwriting file-infecting virus, among other things) and the fact that anti-virus software targets non-viral malware too (of which there have been more than a few for the os x platform)...

you've also got people like texizboy saying:
I don't run A/V on some of my windows machines. All boils down to common sense in my opinion. Webmail services have helped out on this front also, to give credit where it's due, I believe there are less viruses getting around due to them.
despite the fact that email is just one of many different attack vectors that malware have been known to use for some time now, and despite the fact that not all malware is obvious enough for common sense to help (nevermind what they say about common sense)...

then there's people like kilianamphitrite saying:
The real strength of the Mac is that in general, when a Mac is running an untrusted bit of code, it is not doing so with system management privileges. Most of the time (and especially for home systems) Windows users run untrusted code as privileged users.
which incorrectly assumes that you need privileges to do bad things... a lot of windows malware depends on privileged access not because it's necessary for the ultimate goal of the malware, but rather just because such privileged access was almost always there so malware authors didn't have to think of alternatives...

on top of that you've got people like insomniac who says:
The idea of "Mac/Linux/Unix do not have enough market share so people don't develop a virus for them" is only partially true. Unix and Linux based systems are just a lot more difficult to infect because of their architecture and security design than a Windows machine (Vista does a much better job than previous versions of Windows).
which ignores the fact that the first academic treatment of the computer virus phenomenon back in the early-to-mid 80's had viruses successfully spreading in a professionally administered unix environment without aid from privileged users like root...

or how about sverrip who says:
I mainly surf around pages I trust, and don't download and open setup files like "Free-XXX.exe" on my Windows machine.
apparently ignorant of the fact that there's no such thing as a safe/trustworthy site, not to mention ignorant of the existence of the drive-by download vector... even the cbs website can serve malware to unsuspecting victims... and it's not like macs are immune to drive-by downloads - remember the safari carpet bombing flaw?

sad, isn't it? that people believe these fantasies about why they don't need anti-virus software on their mac (or in some cases even pc) computers... i have my doubts as to whether apple's quiet urging of people to use av is going to do anything at this stage of the game... the baseline level of ignorance about malware issues was bad enough but add to that apple's previous arrogance (which no doubt resonated with a lot of their fans) about security and the damage done is all but complete - the only thing left to do is wait for the fallout...

Monday, December 01, 2008

unexpected spam

you may recall me saying here or there that i have a 100% spam free email address... it's an address that i don't give out to people or sites... it's not that the address is unused - i actually use it a lot, but i use it in conjunction with sneakemail.com so it's not my real email address getting spammed - and because i use a different sneakemail address at every site it's no problem to just deactivate or even delete the address and not deal with that site anymore (see my post on avoiding spam)...

so as a result i don't check the spam folder very often - it's almost always empty and when it's not the messages in it are almost always in there erroneously... it's so rare that i actually hand out any address to an organization that will compromise it to spammers (or spam it themselves) that i see more false alarms from the spam filter than i see true alarms...

that all changed with a vengeance today as i found over 50 messages in my spam folder and almost all of them were correctly classified... and wouldn't you know it, the majority were addressed to the sneakemail address that i used for demonstration purposes in this post on phish detection... it certainly took a while for the spammers to find that one (i wonder if they liked the spam poison i laid out as well)...

unfortunately that wasn't the only address that was receiving spam... i don't pretend to know what exactly happened here, but the unique, randomly generated, unguessable address i used to sign up for ethicalhacker.net has also started receiving spam... the chances of spammers finding that address by enumerating the sneakemail address space are incredibly low (it's a 7 digit base36 number), especially since i have quite a few sneakemail addresses and this is the only one getting spammed by this particular person using the freetellafriend.com service... somehow the folks at ethicalhacker.net let my email address get compromised so you can bet i won't be dealing with them any further (not that i did much there in the first place)...
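
for the curious, the back-of-the-envelope math behind that claim (assuming the address really is 7 base36 characters) looks like this...

```python
# size of a 7-character base36 address space
space = 36 ** 7
print(space)  # 78,364,164,096 - roughly 78 billion possible addresses

# even at a million guesses a day it would take centuries to enumerate them all
guesses_per_day = 1_000_000
print(round(space / guesses_per_day / 365), "years to cover the whole space")
```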

so anyways, it was quite a shock to see so many spam messages in the spam folder of my spam free email account, but they were all sent to disposable addresses (not the real one) that are no longer reachable so it's all good...

suggested reading

  • As stock market drops malware rises - PandaLabs
    it's not even marginally novel to suggest that malware authors take advantage of the emotional reactions people have to significant world events, be they tsunamis, ice storms, or presidential elections... thus, it shouldn't come as any great surprise that when people feel their personal finances are vulnerable they are more likely to fall for fake security software, ironically in an attempt to better protect themselves...
  • Schneier on Security: The Neuroscience of Cons
    schneier says fascinating and i have to agree... i just wonder how well this applies to the kinds of social engineering we see in malware and related online threats...
  • ThreatExpert Blog: McColo - Who Was Behind It?
    the story behind the story of mccolo... i wonder what the rap group's connection with the carders was (ie. why were rappers sending out their message for them)...
  • White Listing – The End of Antivirus??? | ThreatBlog
    another balanced whitelisting opinion... i especially like the airbag vs seatbelt metaphor at the end... blacklists and whitelists complement each other, folks - one is not a replacement for the other...
  • Shoulder Surfing a Malicious PDF Author « Didier Stevens
    interesting post about a couple pieces of pdf-embedded malware... the takeaways are 1) malware authors are STILL not great programmers (seems like script kiddies are packaging their 'work' in other files now), 2) incremental update functionality allows script kiddies like this to 'show their work', and 3) script kiddies don't learn from the past (re: formats that contain unique identifiers - might want to ask david l smith about the consequences of that)...
  • Spire Security Viewpoint: WabiSabiLabi Update
    wabisabilabi to close? sounds like good news to me... auctioning off vulnerabilities is a slippery slope that leads to providing a financial incentive for the general public to create attacks, which really isn't a precedent we as a society should be setting...
  • Pirates and Internet Crime - F-Secure Weblog : News from the Lab
    one of the most salient points i've seen made about online crime in a long time... it is indeed as much a social problem as it is a technological one - it is a subset of crime and the reasons for its existence or its driving factors are the same as those for conventional crime... so long as those social factors exist so too will crime (both online and offline)...

Wednesday, November 26, 2008

clarification on my morro worse case scenario

well, it looks like a blog conversation may be forming, or perhaps not - we'll see how things go but rich mogull has put up a response to my earlier post on morro, which in turn was partially a response to him (see, a conversation)...

rich doesn't exactly agree with my worse case scenario, but let me be clear it was a worst case scenario (one based in part on the idea rich put forward about microsoft gobbling up the consumer av market) - things can easily go differently if we just keep our eyes open for the signs and avoid them... that being said, the reasons he doesn't agree with me just don't make sense to me...

ignoring whether or not i'm assuming anything about the nature of the av market (granted i don't have the insider knowledge a member of the industry would have, but malware/anti-malware is my main focus as a security blogger), the fact is that there is a non-negligible amount of innovation in it... it may not be a lot (it depends on how you quantify things) but it's certainly not zero... zero innovation is what will happen when there's only one game in town - history has already taught us this and one of the same principals (microsoft) was involved then too...

lets look at some of his specific reasoning:
Morro will be forced to innovate like any AV vendor due to the external pressures of the extensive user base of existing AV solutions, changing threats/attacks, and continued pressure from third party AV.
the problem with this is that rich has already posited the scenario where microsoft gobbles up the consumer av market... what other pressures would it be subject to in that case? there is no extensive user base of existing av 'solutions' (hack, cough, i nearly choked on that term) when microsoft gobbles up the market because there are no other consumer products worth mentioning besides morro... as a result there's no real reason for them to keep on top of the changing threat landscape (anymore than there was for them to keep on top of the changing web landscape) because, once again, they're the only game in town...

Morro will force AV companies to innovate more. Morro essentially kills the signature based portion of the market, forcing the vendors to focus on other areas.
actually, if morro gobbles up the consumer market then whatever other av companies are left will be strictly enterprise av companies and they won't be affected by morro in the least since morro is not an enterprise av product...

there's also the question of ease of evasion... rich is right that it's already pretty easy for anyone to evade the current crop of customer-side scanners... that said, it would still be far easier if there was only one product... it's the difference in complexity of evading a single product versus the complexity of evading all of them - the two scenarios aren't even in the same ball park...

while we're on the topic of low innovation and ease of evasion, however, it seems a good time to mention a rather game-changing innovation that's been popping up in various products recently - scanning in the cloud... panda (not exactly one of the big three) brought this technology to market long before symantec, mcafee, or trend jumped on the bandwagon - but jump they have, and mcafee's artemis has even been included in virustotal... the way i see it this represents a significant innovation and as more and more vendors adopt this approach a number of the currently popular passive evasion techniques (such as targeted attacks and malware q/a) are going to increasingly become obsolete...

so it would seem that the state rich thinks we're currently in (low innovation, easy evasion) is one we may be getting out of, without any help/pressure from a certain known monopolist...

the benefits of scanning in the cloud

now i know a good chunk of the general security industry has been poo-pooing the cloud recently, and normally av is the security industry's favourite whipping boy, so maybe this is just a case of two bad tastes that taste bad together... that being said, the concept has significant promise to take back several tactical advantages that av hasn't had in, well, forever...
  1. signature generation to client update time is reduced/minimized

    usually a good thing and this time it isn't at the expense of q/a because rather than cutting a corner that affects quality they're cutting the corner of updating the client in the first place... instead they'll be updating the cloud which they have direct control over (unlike the client) and then the client doesn't need to detect that it's out of date and try to update itself (and hope updating hasn't foolishly been disabled)... this potentially gives vendors time to do more q/a on their signatures and so reduce the bad signature release rate (though it's already pretty low)...

  2. under-reporting of new samples is reduced/minimized

    under-reporting is the single biggest advantage that targeted attacks have... without the law of large numbers favouring someone noticing there's something fishy about a particular file, that file doesn't get submitted for analysis and nobody gets signature-based detection capabilities for it... in cloud-based scanning, however, just about everything (except those things the user feels are too sensitive to transmit, and as such are probably not malware) should get submitted to the cloud so that is no longer an issue...

  3. greater situational awareness/intelligence than ever before

    data on detections can be correlated and analyzed, etc. providing the potential for virtually every client to become a sensor in a giant honeynet... geographic and demographic trends/patterns in the attacks have the potential to be more easily seen with so much more real-world data, and those are things that can be used to better predict who's at increased risk or maybe even help to pinpoint the source(s) of the attacks...

  4. conventional malware q/a should be entirely thwarted

    the ease with which current malware evades known-malware scanning is on the verge of becoming history... the basic methodology for evasion is to iteratively produce samples and run them past a slew of scanners to see if they're different enough to avoid them all (or enough of them to be valuable)... this worked in the past because malware authors could do this without anti-malware vendors being any the wiser... with a cloud-based scanner you can't fully scan a new malware sample without the vendor getting a copy (either the sample is submitted to the servers controlled by the vendor, or the sample is not submitted and the malware author gets incomplete/inaccurate detectability results) and thus letting the cat out of the bag about not only what your new malware looks like but possibly also what the heck you're up to ("hello, police, i'd like to report a large number of new malicious programs being generated at the following IP address")... (a rough sketch of this lookup-and-submit flow follows after this list)

  5. scanner reverse engineering is almost completely nullified

    before what we now know of as malware q/a existed, the more clever malware authors were believed to have reverse engineered various scanners looking for information they could use to make sure their malware would better avoid detection... and even today, those who deal in vulnerabilities (either for the betterment of security or for malicious gain) will analyze scanners looking for flaws that an attacker could take advantage of... with the scanning engine no longer residing on the client computer the only kind of analysis anyone without source code access can do is black-box analysis (and if a botnet can detect attacks and protect itself a cloud-based scanner should conceivably be able to as well)... in this way the scanning algorithm becomes as inscrutable as any server-side polymorphic engine...
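
here's that rough sketch of the cloud lookup-and-submit flow - the function names, verdict strings, and general shape of the protocol are entirely hypothetical stand-ins, not a description of any real vendor's product...

```python
import hashlib

def scan_with_cloud(path: str, cloud_lookup, cloud_submit) -> str:
    # hypothetical client-side stub for a cloud-assisted scanner; cloud_lookup
    # and cloud_submit stand in for whatever proprietary protocol a real vendor
    # would actually use
    with open(path, "rb") as f:
        contents = f.read()

    fingerprint = hashlib.sha256(contents).hexdigest()

    verdict = cloud_lookup(fingerprint)        # e.g. "clean", "malicious", "unknown"
    if verdict != "unknown":
        return verdict

    # unknown samples get submitted for analysis - this is the step that makes
    # traditional malware q/a self-defeating, since testing the sample hands it
    # (and your activity) straight to the vendor
    cloud_submit(fingerprint, contents)
    return "pending analysis"
```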

Friday, November 21, 2008

badware busters - a 'me too' effort

i read yesterday that the folks at stopbadware.org and consumer reports webwatch are starting a community called badware busters to help ordinary people get malware off their computers...

the stated reasoning is that there's no central place where people can get this help, and they're correct that there is no single central place - rather, there are dozens of them... some of them even have the slashdot-esque features that badware busters seem to hope will set them apart...

that is if the folks at badware busters are even aware that there are so many communities already doing this... i'm really not sure what they're thinking or how they expect to become the central community for this sort of thing when there are already places like wilders security forums, castle cops, the communities that just about each and every av vendor seems to set up, the long list of communities you can go to for help with hijackthis logs (which generally don't deal exclusively in hijackthis log analysis), various usenet newsgroups, etc...

dare to dream, guys/gals, but i just can't see you displacing all the other communities that are already out there (and castle cops takes this kind of assistance giving very seriously) and becoming the place to go for all your malware removal needs...

the secret truth about programs

do you know what a program is? are you sure? can you tell the difference between programs and data?

the average person probably thinks of programs as being things installed on their computers that they click on and that subsequently open a window on their computer... somewhat more sophisticated users might be aware of such things as *.exe and *.com files on microsoft platforms, the execute bit on linux, or whatever property tells osx that something is executable on that platform... more technical users like programmers are probably familiar with scripts and may even realize that those are also programs, despite them not resembling anything the average user would consider a program... any computer scientist worth his/her salt, however, knows that none of these are the truth...

if you think you can tell the difference between data and code then you actually don't know what a program is... the truth is that there is no intrinsic difference between data and code (thus, if you think you can tell the difference you're deluding yourself)... all data has the potential to be interpreted as code (and thus be a program), all it needs is the right interpreter to treat it as code (either by design or by accident)...
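
a trivial python demonstration of the point - the same string is inert data to one piece of code and a runnable program the moment it's handed to the right interpreter (python's own exec in this case)...

```python
blob = "print('hello from what used to be plain data')"

# treated as data: just a string we can measure, store, copy around...
print(len(blob), "characters of ordinary data")

# handed to the right interpreter: the exact same bytes become a program
exec(blob)
```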

think of what that means for anything that tries to control what programs do or whether they execute... maybe you can control the actual program, but maybe the best you can hope for is controlling the program's interpreter (be it your web browser, word processor, or some arbitrary system component handling a malformed request)... controlling programs by way of controlling their interpreter is a little like controlling programs by way of controlling the user... if the user or interpreter needs a lot of privileges then the program running in his/her/its context will have those privileges also...

the classic example of how this is a problem in malware is word macro viruses - sure you can prevent microsoft word from manipulating system files, but you can't reasonably prevent it from modifying other word documents and thereby spreading the malware - ms word is supposed to modify word documents, that's its job...

Thursday, November 20, 2008

the blacklist value proposition

how do you defend the use of blacklists in the face of seemingly stronger defensive mechanisms like whitelists?

no matter what defensive technology you use there will always be some holes in those defenses... there will always be exceptional cases that your defenses don't currently handle and/or are unsuitable for handling... what's the fastest/easiest way to deal with exceptional cases you want to avoid? yeah, you guessed it, with a blacklist...

let me give you an example: lets say that we have an application whitelist... application whitelists control the execution of some subset of known program types... they're limited to the known types because, well, how do you intercept a kind of execution that's never been seen before?... they're also usually limited to a subset of the known types because developing the technology to intercept and block programs of an arbitrary type (such as script programs for a particular interpreter, or the unanticipated programs that exploit code represents) is not necessarily easy or cheap and for the more obscure types it's often just not worth the investment...

now lets say that a piece of malware is created that exploits this partial coverage of the set of program types... when there's only one such piece of malware, or even just a handful, the benefits of re-engineering the application whitelisting software to be able to cope with this additional type don't justify the costs in terms of time, money, and effort required to do it... when there are so few instances (relative to the billions of programs out there) it is faster, easier, and cheaper to just look for those particular instances (via a blacklist) than to re-engineer the whitelist to handle them... it won't be until the program type in question becomes mainstream that it becomes worth it to add capabilities for it to an application whitelist...
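
to make that division of labour concrete, here's a toy sketch of a whitelist that leans on a small blacklist for the exceptional cases - the hashes, file types, and decision logic are invented for illustration, not taken from any real product...

```python
import hashlib

# hypothetical data for illustration only
WHITELISTED_HASHES = {"<sha256 of an approved program>"}
COVERED_TYPES = (".exe", ".com", ".dll")        # program types the whitelist engine can intercept
BLACKLISTED_HASHES = {"<sha256 of that one oddball piece of malware>"}

def allowed_to_run(path: str, contents: bytes) -> bool:
    digest = hashlib.sha256(contents).hexdigest()

    # the handful of known-bad exceptions are cheapest to handle with a blacklist
    if digest in BLACKLISTED_HASHES:
        return False

    # program types the whitelist engine understands must be on the whitelist
    if path.lower().endswith(COVERED_TYPES):
        return digest in WHITELISTED_HASHES

    # everything else falls outside the whitelist's coverage and runs by default
    return True
```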

similar scenarios can be constructed for any other type of preventative measure... as such there will always be a need for blacklisting regardless of what other defenses are in play because there will always be a need to deal with emergent exceptional cases as fast and cheaply as possible... even malware blacklists (ie. known malware scanners) themselves have exceptional cases that they can't deal with - that being new/unknown malware... however, as i've stated in the past, novelty is an advantage that wears off, and as far as i can tell it's the only one that does...

Wednesday, November 19, 2008

possible downsides to morro

if you haven't heard the news microsoft is killing onecare and replacing it with a free anti-malware tool probably using the same engine as the current product...

i've written about microsoft's entry into the anti-malware space before and i wasn't very positive about its chances... microsoft surprised me though, i have to give them credit, and i think it really came down to wooing some of the brighter minds in the av industry away from their then-current employers to work on the new microsoft offering (of course ms has also wooed some less scrupulous minds as well)...

that being said there are still some issues to consider... both rich mogull and graham cluley feel this is a positive development for a variety of reasons but rich puts forward the possibility of microsoft bundling the anti-malware software into the OS at some point and basically gobbling up the consumer av market... i doubt you need to be a rocket scientist to see the parallels between that scenario and what microsoft did back in the mid-90's with internet explorer, and i don't think i need to remind anyone that that was actually not good for users (it resulted in microsoft winning the first browser war and then, in the absence of credible competition, they literally stopped development/innovation for years)...

what we don't want or need is for microsoft (or anyone else, technically, though microsoft has the most potential due to their position) to win the consumer anti-malware war in any comparable sense... it's bad on a number of different levels - not only is it likely to hurt innovation by taking out the little guys (who tend to be more innovative and less constrained by the this is the way we've always done things mindset), but it also creates another example of a technological monoculture... granted we're only talking about the consumer market, but the consumer market is the low-hanging fruit as far as bot hosts go and while it may sound good to increase the percentage of those machines running av (as graham cluley suggests) if they're all using the same av it makes it much, much easier for the malware author to create malware that can evade it...

i'm really not sure trading technological heterogeneity (and all the benefits thereof) for a somewhat broader coverage (or even complete coverage) of the consumer market would actually be a good thing, but i am sure i don't want to find out... let microsoft give away their technology if they must, but keep it out of the operating system itself... there are other, safer ways to get anti-malware more broadly deployed...

Tuesday, November 18, 2008

whitelist opinion smackdown

i realize i've been rather quiet as of late - not sure why, perhaps i lost my mojo... anyways, you can all thank cdman for rousing this ogre out of slumber...

in a recent post, cdman lays out his response to randy abrams' post on whitelisting... perhaps it was the hint at the possibility of an ad hominem attack against a fairly well known and long-standing member of the av community (randy was, for a long time, the voice of av from within the belly of the beast - aka microsoft) that piqued my interest, but that wasn't cool so let's move on...

cdman's first substantive beef is the suggestion that whitelisting companies can't do their job without anti-virus software... ignoring the fact that in practice this is actually true (whitelisting companies currently depend on anti-virus software to determine if something is safe to add to their whitelist), lets look at the hypothetical alternatives he suggests - specifically that whitelist vendors could rely on reputation or build the generic malware equivalent of marko helenius' automatic and controlled virus code execution system...

relying on reputation offloads the problem of keeping bad software off the whitelist onto the very people providing the bad software... sure people who provide bad software consistently will get a bad reputation and not be trusted, but what about people who only do it once in a blue moon? microsoft releases tons of legitimate and safe software but they have on occasion also distributed virus infected materials... you'd be hard pressed to justify not whitelisting code from microsoft if you were relying on reputation but if you did whitelist all their code you would eventually whitelist something you shouldn't have... furthermore, relying on reputation is precisely the method that customer-generated whitelists are primarily made with, which would make a vendor-generated whitelist using the same technique rather pointless...

next is the idea of building a system to automatically execute samples and perform baseline comparisons to see if the sample compromised the system... and of course this has to be done on a scale sufficient to handle the rate at which sample files are produced (otherwise whitelist vendors wouldn't be able to keep up, much like av vendors supposedly aren't able to)... but have you looked at the figures from bit9 (a whitelist vendor)? av companies already augment their small armies of malware analysts with automated methods of determining what's bad, and old methods like this are almost certainly among them... if the av vendors can't keep up with the malware then what hope do whitelist vendors have in keeping up with the goodware when its production rate is (necessarily) several orders of magnitude greater than that of malware? there are all kinds of capabilities peculiar to traditional av companies that whitelist vendors could try to replicate in-house, but the scale of the samples they have to deal with makes it impractical for them to do anything other than to replicate the blacklisting capabilities in full in-house and that would mean they would still be using what the general population considers av - it would just be their own...

a third option cdman mentions is using technology like that developed by mandiant... whitelist vendors are unlikely to develop such capabilities in-house when it's almost certainly cheaper to buy products/services from others who've already developed those same capabilities, but lets hope in this case they stay away from such ethically questionable companies as mandiant... bad enough that mandiant hires people whose marketability in security is thanks in no small part to their past efforts at making the problem worse, but to then turn around and have some of those same people do essentially the same thing in the company's name at an event like race-to-zero smacks of not just some lapse in HR's judgment but rather of an alignment of moral compasses... perhaps i'm in the minority here, but if a whitelist vendor gets in bed with a company like mandiant i wouldn't touch them with a 10 foot barge pole...

second to the beef about what whitelist vendors would do without av software was cdman's beef with randy's understanding of what actually constitutes a whitelist... i have to admit that my first impression on reading the statement that the TSA implements a whitelist was one of confusion... the most widely known (and reviled) measure the TSA implements is the no-fly list, which is fairly obviously a blacklist... i actually left a comment on randy's original post expressing my confusion but literally as i was writing it, it dawned on me that there were other measures implemented by the TSA such as the newly revised rules for flights which basically require one to be granted permission in a 2-stage process before one can fly... of course, as i write this i'm reminded of the various trusted traveler programs that schneier has written about on occasion - those are also whitelists...

despite all the disagreement, though, in the end cdman and randy are actually in agreement about the role of whitelisting - it's simply another layer... both think it's got its strengths and its weaknesses, areas where it's more applicable than others, etc... however, i think randy has once again distilled a complicated topic to a simple analogy when he compares the folks who say whitelists are the end of av with airbags calling seatbelts obsolete... what a clever way to say they're full of hot air...

Sunday, November 02, 2008

suggested reading

  • ThreatBlog » Blog Archive » Giving (Samples) to Charity
    responsible sample handling is very important and, from what i've seen, very misunderstood... i wrote about it myself quite some time ago but it's something that bears repeating and david harley does a good job of explaining what's accepted/expected in the anti-malware industry/community (as opposed to seeming to put one's foot down, as i did)...
  • ICMPECHO · Malware landscape in 2020?
    interesting question/answer about the future of malware from daniel nystrom... there's just one thing i think he missed - if the past was about fame and the present is about fortune, power/influence seems to be the logical next step... no idea when we'll get there, but as we grow more connected and dependent on technology it will become more and more feasible...
  • hype-free: Popular ideas about AV
    here, cdman reminds me of what i can't stand about slashdot and similar sites - it's a mob of clueless people who somehow manage to influence the thinking of other clueless people... if only there were some way to get them to spread the right idea instead of the wrong one...
  • hype-free: Stepping beyond the vendor-centric security solution
    good post on the importance of understanding the threat and the tools as opposed to listening to marketing (stop listening to marketing!)... the wording reinforces the av = 'blacklist only' impression most people have, but other than that this is a good post with xkcd-style graphics (for people who need diagrams in their explanations - hmmm)...
  • Virus Bulletin : VB2008, Ottawa - conference slides
    no, i'm not going to cherry pick out the best ones... it really doesn't take long to flip through each one... use your best judgment about which are the most interesting to you...
  • Sunbelt Blog: Virus Bulletin 2008 keynote address
    great presentation about the perception of the av industry by both consumers and enterprises... also a great observation on why enterprises are less satisfied - it's scale... everything fails sometimes but when you're dealing with thousands of machines the problem posed by those occasional per-machine failures is magnified... the law of large numbers is not your friend in this context... this is not an easy thing for someone to put into the proper context (unless they've got a really good handle on finite mathematics) so the resulting perceptual bias isn't too surprising...
  • hype-free: Everything is grey
    an unfortunate observation about the virus bulletin conference this year... everything may be shades of gray these days, but i'm still an uncompromising s.o.b. who only sees black and white...

Wednesday, October 22, 2008

adobe clickjacking patch is a red herring

by now i'm sure just about everyone with even the slightest interest in security has heard about clickjacking, and most have probably even heard that adobe issued a patch that addresses clickjacking...

the problem is that clickjacking isn't exclusively a flash problem, it's a browser problem - an attack could simply do some extra things when flash is present...

specifically, without the flash patch a clickjacking attack could interact with a users microphone and/or webcam if either are present, allowing the attacker to spy on the victim...

that's pretty scary from an emotional point of view but not very interesting from a rational point of view... the majority (though not all) of online attacks these days are financially motivated and spying on individuals in the analog world doesn't easily lend itself to traditional models of cybercrime monetization where the victims' information is stolen en masse or their hardware is used to attack others... you might be able to steal information with a webcam or microphone, maybe, but that's something that definitely does not scale so you'd need to either target someone you expect to be able to get a lot of money out of or you won't make enough for it to be worth the trouble or risk...

what an attacker might be able to do is setup some sort of peep show website where the money comes from people paying him/her for access to feeds from compromised machines, but then the attacker would need to publicize his/her service and run an increased risk of capture...

what this ignores, however, is that clickjacking is not just about spying on people (or the other flash-specific things that fall under the clickjacking umbrella), that's just something you can do when flash isn't patched... clickjacking itself is still possible even after flash has been patched and all the attention given to adobe's flash patch may well cloud the issue that there is still a very troubling set of problems with virtually all browsers and, other than using firefox with noscript, very little ordinary people can do about it at the moment that doesn't break the internet for them... so while it is technically true that adobe did release a patch that addresses clickjacking, it only addresses those aspects of clickjacking that specifically affect flash... the rest of the set of attacks collectively known as clickjacking remain a problem for web users, site owners, and browser vendors alike...

Thursday, October 16, 2008

countering malware quality assurance

just a quick post to point out something i just realized - maybe it's obvious to others, maybe not...

i was reading dancho danchev's umpteenth post on malware q/a when it struck me that the recent trend by vendors to put the scanning engine in the cloud effectively kills malware q/a... i suggested before that randomizing heuristic parameters might combat it, but that's probabilistic and comes at the cost of false positives... cloud-based scanning on the other hand ensures that the scanner implementing this new architecture cannot be used effectively (if at all) in traditional malware q/a because the samples will either be given to a server that the av vendor controls (thus destroying the samples' value to an attacker), or if the malware tester manages to sever the ties with the av server then the testing will give an incomplete and misleading result regarding the detectability of the malware in question...

each new scanner that goes this route is another scanner removed from the pool of scanners that malware q/a testers can use and with symantec, mcafee, trend, and panda (and perhaps more that i can't think of at the moment) having already gone this route that's a significant portion of the av user-base which will soon no longer be at the mercy of malware q/a...

i have no idea if this was intended or serendipitous, but either way it's still a good thing - and once again it proves the point that for every measure there exists a countermeasure...

Tuesday, October 14, 2008

is secunia the new consumer reports?

(well, looks like i'm going to add to the noise about the secunia test... it's already been discussed on the security fix blog, eset's threatblog, the register, the sunbelt software blog, the panda security blog, and the zero day blog)

so secunia did a test with exploits they developed in the lab and found that av products sucked...

well gee, doesn't that sound an awful lot like the consumer reports test? if you don't make the distinction that exploits are a special case of malware then there would really be no difference between this and that terrible consumer reports test where they paid to have 5000 new pieces of malware created...

but exploit code is a special case, we need to create benign exploits, we need to be able to use them in order to determine whether our systems are vulnerable, whether the patches that supposedly fix the vulnerability have been applied properly, whether they truly fix the vulnerability, etc...

so then this test was alright then, right? nope, not by a long shot... first and foremost is the idea that anti-virus/anti-malware products should detect these lab-grown exploits in the first place... the issue is not so much that av is only in the business of detecting malicious software, it's that there are very good reasons why av can't and shouldn't be detecting benign exploits... as i just got finished saying, we need those exploits, we need to be able to use them, but how are you supposed to do that if your anti-virus is blocking access to them? it's one thing to use a benign exploit to test the vulnerable surface area of your systems, it's another thing altogether to turn off your security software to do so... there are a variety of technical, logistical, and legal reasons why anti-malware must be constrained to detecting only those things with a proven malicious pedigree, and if people don't like that it's just too bad - get over it, those reasons aren't going away just because they don't mesh with your ideology... either exploits are legitimate and necessary, in which case anti-malware apps shouldn't be alarming on them because it interferes with the proper use of exploits, or they aren't, in which case secunia acted in bad faith by creating new malware - secunia can't have their cake and eat it too...

the next problem was this notion of detecting exploitation... read that carefully - "detecting exploitation"... is exploitation a thing? no, it's a behaviour, and despite certain claims from various companies about dynamic behaviour-based heuristics, known-malware scanners (and by all indications that's the only part of the security suites secunia actually tested, begging the question why they bothered with the suites at all - incompetence maybe?) are built to detect bad actors not bad actions... that's not to say anti-malware companies don't have offerings to detect and even block bad or unauthorized behaviour, they do have HIPS offerings, but it's fundamentally different technology from what people are accustomed to with anti-malware and it's not always simple to setup/maintain properly so they don't necessarily bundle it with their anti-malware products or even in their internet security suites...

speaking of the distinction between actors and actions, that confusion seemed to be rooted in the use of the term "threat"... i have in the past remarked that "threat" is a bit of an ambiguous term where all kinds of things with "threat" in their name get called simply threats... in this case in particular, anti-malware apps use the term "threat" as a short form of "threat agent" (which is actually one of the more common things that "threat" is used to represent)... exploitation isn't an agent by any stretch of the imagination but because everything gets called simply a "threat" those who don't really understand what's going on (which surprisingly seems to include the folks at secunia) will treat all usages of the term the same and not realize that anti-malware scanners are only designed to catch some of the things that get called "threat"...

of course, a post on this site wouldn't be complete without pointing out the conflict of interest that is also present in this test... secunia's business is about vulnerabilities and exploits - they have a paid product for detecting vulnerable software (a different approach to the same ends as trying to catch/block the exploits) so it's in their financial best interests to publish a test that makes the anti-malware industry look bad (aka FUD) and the exploit problem look important (in other words, hyping up the problem)... it's a classic self-serving study and one wonders if the people responsible think the rest of us were born yesterday...

Wednesday, October 08, 2008

what i did on my sector vacation

well, today was the second/last day of sector '08 and now that it's over i figure i might as well write about my experience there... i don't do a lot of these sorts of posts primarily because i don't go to a lot of security conferences (the only other one i've been to was rsa '02) but since i'd heard good things about the last sector and since it is practically in my back yard (well, ok, it's approximately 1.5 hours away on public transit) i didn't feel all that guilty about broaching the subject with the higher-ups at work (the smaller price tag helps too)... it came at a pretty hectic time for me at work, but thankfully i was still able to attend...

the opening keynote of the first day was with the royal canadian mounted police; it was pretty dry - unless you're a fan of alphabet soup, there were a lot of acronyms that i had never heard before and i'm sure i'll never hear again...

the first talk i went to after that was kevvie (kevvie?) fowler's sql rootkits and encryption presentation... this was an excellent talk if for no other reason than it gave me information i can put to direct use when i get to work tomorrow (awesome - instant value for my employers in the first session of the first day)... it was also pretty good at not misusing the term rootkit as so many others are wont to do these days...

the lunch panel was unfortunately not all that memorable for me... maybe i was too busy eating or maybe the people talking just had too short a period in which to make a lasting impression, i dunno... maybe i'm just not a panel person...

the second talk i attended was jay beale's middler presentation... once again, possible value for my employer... it's an unsettling realization that there's now an automated tool that can affect the confidentiality, availability, and integrity of web data (by virtue of allowing an attacker to read, withhold, or even modify your data) basically if any part of your session happens outside of ssl...

next up for me was bruce potter's presentation on novel malware detection... now i admit this one was for me and not my employers - the first two sessions were the only ones i could find that looked like they might touch on anything relating to work, so from here on out it's purely for my own interest... bruce was a very entertaining speaker, however he got on a bit of an anti-av rant that wasn't really part of his presentation (which dealt more with detecting anomalous network activity by analyzing logs)... i just rolled my eyes at the rant - i considered saying something, but since 'the hoff' was just across the aisle and 1 row back i felt certain it would have resulted in a smack upside the head and instructions to stop being such a jerk... ok, not really, but an in-person presentation is a very different forum from the online kind (where perhaps i'm known for being a jerk) in a number of ways, not the least of which being time constraints, and i had no desire to sabotage the presentation (though the length of the rant did that slightly anyway, since bruce wound up running out of time)...

the final talk i attended the first day was matt sergeant's presentation on tracking current and future botnets... there were a fair number of interesting details about current and past botnets, about their sizes and how those metrics were generated, about characteristics unique to the emails sent by each, etc., but matt (like bruce potter before him) got a little anti-av, saying they needed a kick in the butt about detecting those emails... my knee-jerk reaction (all internal because i didn't want to sabotage this talk either) was that av is in the business of detecting malicious code, not emails generated by malicious code, but as i let that stew for a while i realized 2 things... the first was that that was remarkably like something i said back in the mid-to-late 90's about av software not detecting trojans... not that i thought they shouldn't detect trojans, just that it was a defensible position to take - obviously detecting trojans was better than not doing so and i'm glad they started, but there was a time when anti-virus software was literally just anti-virus... now that it's morphed into anti-malware it's once again defensible to say that detecting something that isn't malware (and emails aren't) is outside av's scope but (and this is the second thing i realized) the users would be better served and better protected if av did detect these things - it would serve as a negative control on a botnet's ability to acquire new nodes (at least until the bot designer changes the smtp footprint/fingerprint of the bot)...

so in that respect i think i'll agree with matt sergeant that av could be and perhaps should be doing more... his misapplication of a sophos graph of malware prevalence, however, i won't agree with... he really, really ought to know better than to try to compare a botnet's size with entries on a malware prevalence table... here's why it just doesn't work: a malware prevalence table breaks down malware prevalence on a per variant basis while botnets today are generally heterogeneous from a variant perspective (which is to say there are many times many different variants of a particular family of malware in any given botnet thanks to things like server-side polymorphism) so while a botnet may be huge, the prevalence of any particular variant in that botnet's ecology is still probably pretty low... that being said, something i've been mulling over in my mind for a little while now is whether prevalence tables broken down by family instead of variant are the more interesting metric these days in light of botnets and malware campaigns in general... personally, i'd like to see both types of tables...
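
to see why the comparison just doesn't work, here's a tiny sketch (the family/variant names and counts are hypothetical, purely for illustration) showing how the same detection data ranks very differently depending on whether you count per variant or per family:

    from collections import Counter

    # hypothetical detection telemetry: (family, variant) pairs, one per detection event
    detections = [
        ("somebot", "aa"), ("somebot", "ab"), ("somebot", "ac"), ("somebot", "ad"),
        ("somebot", "ae"), ("oldworm", "p"), ("oldworm", "p"), ("oldworm", "p"),
    ]

    per_variant = Counter(family + "." + variant for family, variant in detections)
    per_family = Counter(family for family, variant in detections)

    # server-side polymorphism spreads a big botnet thinly across many variants,
    # so no single variant ranks high even though the family as a whole is everywhere
    print(per_variant.most_common())
    print(per_family.most_common())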

the opening keynote for day two was with stephen toulouse and had the best opening ever ([looks at giant screen] 'dear lord, is that what i look like' - or something to that effect)... stepto thinks us security folks can bring some valuable insights and thought patterns to fields outside of security - i certainly hope so, i'm in software development and while i'm not high enough up the food chain to make the big decisions (and frankly don't want to be) i have been able to direct some things which i hope have been of benefit...

the first talk i went to on the second day was deviant ollam's presentation on lockpicking... i found the lockpicking at sector absolutely fascinating, both in this talk and also in the lockpick village... perhaps it goes back to me breaking into my own home as a kid when i (frequently) lost/forgot my keys, but i just went into sponge mode and absorbed as much as i possibly could... i imagine there were a lot of questions about dudley combination locks since that seems to be what we have up here in place of master combination locks and since they aren't exactly the same (our dials go up to 60, so there)... one of these days i should really put in some time and try to see if i can brute force a dudley lock combination because i have 4 here but only one with a known combination...

day 2's lunch keynote was johnny long's presentation on no-tech hacking... it was very entertaining to see the scope of the average (and often not-so-average) person's obliviousness to security concepts, but it was also a little disheartening especially when he ended the presentation without offering any hope for change... i think we all know there's a scarcity of security awareness in the general population, that's one of the reasons why i started looking into whether memetic engineering might be able to help things along (re: secmeme.com)... if only i had time to work on all the things i want to do (though i'm sure johnny's talk will provide a wealth of inspiration for the security idiot meme)...

the next talk i attended was james arlen's security heretic presentation... this presentation was in a rather unfortunate time slot, since chris hoff's virtualization presentation was going on at the same time (i thought of going to that one but really, the only thing i use virtualization for is sandboxing)... this was also the presentation that seemed to get the least amount of respect from attendees as people were constantly coming and going (and i picked a seat near the door, uggh!)... unfortunately it was also not the talk i was expecting it to be... while i was expecting to hear about one security pro's journey (as the description suggested) what i got instead was a very large number of calls for a show of hands... i'm sure it all makes sense to people who have been in similar positions but for someone like me who hasn't it just doesn't help me relate...

the last talk i went to was jason wright's presentation on finding cryptography in object code... strangely enough, i went to a talk on the same subject at rsa '02 where they talked about finding magic constants... jason led off with that (which made me a little bit nervous) but that was only for context, as the meat of the presentation was more about the frequency of occurrence of operations usually only seen in crypto, which was interesting... it also wound up being the shortest talk i saw...

and then it ended, and we gathered for one last time in the keynote/lunch hall, i interrupted hoff fulfilling his security rockstar duties to say hi (sorry i didn't see you later when everyone made for the door, chris, i did look though but i'm sure you'd already attracted another crowd), and then they handed out prizes (prizes! i don't remember that at rsa) and it was done... it was a great experience, i enjoyed the talks a lot, i didn't network as much as i probably should have ('cause i generally suck at that) but oh well, at least i can put some more faces to familiar names now - perhaps if the next time is soon enough (ie. not 6 years in the future) i'll be able to put that to good use...

Saturday, October 04, 2008

do we really need anti-virus

thanks to alan shimel for pointing out this post by kai roer asking if we need anti-virus in 2008...

alan is right, of course, that anti-virus does a lot more than just catch viruses these days, and that anti-virus helps control older virus populations (good on ya alan, most people don't consider that)... kai asked a variety of interesting questions, though, which i tried to answer in his comments... like lonervamp mentions at the start of this post, discussions like this are something i'd prefer not to lose to the sands of time so i'm reposting my comments here (and i may start doing this more often, 'cause it seems like a great idea):
"Have the virus authors started to write smaller virus that stays below the radar - and thus are not detected by the AV-products?"

many of the virus authors of old have simply grown up and found more fulfilling things to do with their lives...

"Are they now only targeting special targets - like particular banks, SCADA or singled out corporations? Or countries and causes? Or are they too busy writing malware to care about virus? "

viruses are malware... non-viral malware, however, seems to be what the cyber-crooks prefer these days... self-replication has a way of getting out of hand and calling attention to the malware...

"Do we really need to pay out on gateway and client AV solutions if there are no virus knocking on the door? "

who says there isn't? just because you aren't hearing about new epidemics doesn't mean new viruses aren't getting written or even that the old ones have stopped... some of the most prevalent email-borne malware are mass-mailing worms that are already a few years old (like netsky.p)...

"Do you believe that there are no more virus out there?"

absolutely not... some people are still getting infected by decades-old boot infectors...

"That other threats are taking over and rendering AV-solutions useless?"

other threats are just as detectable with av as viruses are...

"Is this the whole truth? Or have the AV solutions became so good that they catch everything, even without us noticing? That they are an absolute critical part of the solution for any entity connected to the net?"

let's put it this way - old viruses never die, their populations just shrink to a size too small to accurately report/track... av is one of the things that helps keep those populations small...

and when it comes to newer non-viral malware, av is what helps keep its usability limited... without the blacklist, the bad guys would just find something that successfully bypassed other defenses and keep using it over and over, because other defenses cannot be updated as fast as a blacklist...

Thursday, October 02, 2008

symantec's reputation is in the clouds

the folks at symantec posted something interesting today - It's All About Reputation...

well, they're not the first ones to go into the cloud (obviously, see panda, trend, mcafee, etc)... nor are they the first to go with a reputation system (drive sentry, for starters)... are they the first to put a reputation system in the cloud? i don't know, maybe, but at this point it still doesn't seem like such a big deal...

what gets me, though, is the idea that it's no longer using fingerprints... a reputation system that says X is good, Y is bad, and Z is unknown is basically just a combination of a blacklist and a whitelist - and it's not a bad idea, i've been saying they complement each other well for quite a while now so actually putting both paradigms into a single product makes a lot of sense... the blacklist is what says Y is bad, the whitelist is what says X is good, and since Z isn't on either list it gets called unknown... the thing is, blacklists use signatures (fingerprints) and in their own way whitelists do too - they have to in order to make sure the thing you're looking at really is the same thing you saw before and determined to be good/bad... it can't work without a signature/fingerprint/whatever... this new reputation system may use a different form of signatures, but it definitely uses them...

and as for how this protects you from brand new threats as the post suggests, i can only imagine it works like this: things on the blacklist are stopped from executing automatically, things on the whitelist are allowed to execute transparently, and things that aren't on either list will cause the user to be given an "are you sure?" prompt... finally, someone's putting dr. solly's perfect.bat (which asked the user if the file being scanned was a virus or not) to good use...
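
here's a minimal sketch of how i imagine that decision logic fitting together (the fingerprints are placeholders and the lists would really live in the cloud - this is my guess at the shape of it, not symantec's actual implementation):

    # placeholder fingerprint sets - a real reputation system would query a cloud service
    BLACKLIST = {"<fingerprint of a known bad file>"}
    WHITELIST = {"<fingerprint of a known good file>"}

    def reputation_decision(fingerprint):
        if fingerprint in BLACKLIST:
            return "block"        # known bad: stopped automatically
        if fingerprint in WHITELIST:
            return "allow"        # known good: runs transparently
        return "ask the user"     # unknown: the modern perfect.bat prompt

    print(reputation_decision("<fingerprint of something brand new>"))  # -> ask the user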

the other way it might work is that the unknowns get automatically run in a sandbox of some sort... not a sandbox meant for malware classification, mind you (a number of products already do that), but a sandbox intended to separate the handling of untrusted items from the trusted host system... i mean, since they're already adding 2 of the 3 preventative paradigms into a single product (hopefully seamlessly), wouldn't it be cool if they added the 3rd as well? i won't hold my breath for them actually implementing this, though...

suggested reading

Saturday, September 27, 2008

from the 'what were they thinking?' file

using the (incredible?) hulk as an av spokesperson:


i actually saw a very small version of this in a print ad some time ago and i thought it was hilarious at the time but i couldn't find it anywhere online so that i could share it... graham just reminded me of it and better still pointed to a site where one might find it and lo and behold here it is...

and why did i think it was hilarious? well, the hulk is certainly powerful, no one can deny that, but the hulk also has one major weakness - he's a dumbass... and that's who symantec chose to represent their product... well i guess it's better than a picture of peter norton who once famously said that computer viruses were an urban legend like alligators in the sewers of new york... yes, that's right, the same peter norton that norton anti-virus is named after...

av product removal tools

i saw this article on ghacks.net about the McAfee Consumer Product Removal Tool and it reminded me of that xkcd cartoon that i wrote about not too long ago...

on the subject of doing your job horribly wrong, i think we can all agree that both symantec and mcafee are doing their job of providing quality anti-malware software horribly wrong when you consider they make removal tools to get rid of their own products...

it has to be considered an ignominious distinction when you have to do for your own anti-malware product what you've previously done for particularly nasty bits of malware - write dedicated removal tools and manual removal instructions...

worse still when those methods don't work... some of the IT guys at work not too long ago spent nearly a full day all told trying to remove an older version of symantec's product from my work machine (in order to put on the new symantec endpoint protection product) with no luck... the removal tools didn't help - the manual instructions might have if there'd been time left to try them on the first day... thankfully when our head of IT took a look at it on the second day, he had better luck...

Wednesday, September 03, 2008

chrome follow-up

i mentioned in the previous post that there was a big gaping hole in chrome's sandboxing in that it doesn't sandbox plugins and that i was unable to work around this problem by running chrome in a 3rd party sandbox... thanks to user Franklin on the wilders security forums i was pointed towards this sandboxie support forum thread that suggests you can make chrome work in sandboxie if you allow sandboxed apps to load kernel drivers outside of the sandbox... sandboxie itself strongly recommends against doing so, as do a few of the participants in the thread... lowering the security of sandboxie in order to make chrome work sort of defeats the purpose of using sandboxie to shore up the gaping hole in chrome's sandboxing...

in addition to that problem, however, it seems that even after you uninstall chrome it leaves a scheduled task behind to run the googleupdate program and the googleupdate.exe itself is also left behind... i've seen data files left behind after an uninstall before but i don't think i've ever seen binaries left behind (or if i have it's rare enough that i don't recall it) - that's a pretty crappy uninstall...
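
if you want to check for the leftovers yourself, something like the following sketch would do it (the path is my guess and will vary by windows version and user profile; schtasks is the standard windows task scheduler command):

    import os
    import subprocess

    # guess at where the leftover updater binary lives - adjust for your setup
    leftover = os.path.expandvars(r"%LOCALAPPDATA%\Google\Update\GoogleUpdate.exe")
    print(leftover, "still present" if os.path.exists(leftover) else "not found")

    # list scheduled tasks and pick out anything mentioning googleupdate
    output = subprocess.run(["schtasks", "/query", "/fo", "LIST"],
                            capture_output=True, text=True).stdout
    print([line for line in output.splitlines() if "googleupdate" in line.lower()])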

Tuesday, September 02, 2008

chrome plated security

seems like everyone is talking about google's new browser called chrome... google even made a comic book about the product...

the comic describes a lot of what went into chrome and it sounds like google made some very interesting design decisions, like making it a multi-process application instead of simply multi-threaded, or their new javascript engine... it also captures the google aesthetic quite well...

those aren't enough to make me switch browsers however... i enjoy a certain level of security with my current setup and would like at least an equivalent level from any alternative browser but it doesn't seem like i'd be able to get that from chrome...

chrome implements two of the three preventative paradigms i've written about before... specifically it implements blacklisting of sites (both sites known to host malware and sites known to be phishing pages) and it implements sandboxing... what it doesn't seem to implement (at least the comic made no mention of it and i saw no sign when i tested it out) is whitelisting...

there are a couple of reasons why whitelisting may have been left out of the mix... the most simplistic of which being they simply weren't familiar with the concept - indeed, website blacklists are in most modern browsers now so they're well known, and there was a mini boom of browser sandboxing tools (enough that google actually acquired one called greenborder last year) so they should be on google's radar too... as far as whitelisting active web content goes, however, there aren't a lot of players - noscript stands out as the only one i can think of (other than IE's trusted zone which almost no one uses because it's not user friendly)...
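
for what it's worth, the core of noscript-style whitelisting is pretty simple in principle - a default-deny check per site, something like this sketch (the trusted domains are made up):

    from urllib.parse import urlparse

    # hypothetical user-maintained whitelist of sites trusted to run active content
    TRUSTED_SITES = {"mybank.example", "webmail.example"}

    def allow_active_content(url):
        """default-deny: scripts/plugins only run for explicitly trusted hosts"""
        host = urlparse(url).hostname or ""
        return host in TRUSTED_SITES or any(host.endswith("." + site) for site in TRUSTED_SITES)

    print(allow_active_content("https://webmail.example/inbox"))       # True - trusted
    print(allow_active_content("https://ads.example.net/tracker.js"))  # False - blocked by default

the hard part, of course, isn't the check - it's keeping up with how sites actually use active content, which is exactly why the codebase needs updating so often...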

if we assume google was familiar with whitelisting active web content in general and noscript in particular then another possible reason for its exclusion emerges - when you look at the frequency of noscript updates (updates that are more than just a new list as you'd get with a blacklist update, updates to the codebase itself) you come away with the impression that technology like noscript is far better off as a plugin than built into the browser... i don't think anyone wants to update their entire browser that often...

finally, a thought that formed while commenting on michael farnum's blog - there seems to be a fundamental philosophical conflict between the default-deny paradigm that whitelists represent and the organic growth of application ecosystems... the way web content is developed can be considered the very embodiment of organic growth and google makes it pretty clear that they want to help that along, not get in its way, so it could very well be that whitelisting simply doesn't fit in their security vision...

it's a rather important part of mine, however, so at the very least i won't be switching to chrome until someone makes a noscript-like plugin for it... things won't be all sunshine and lollipops when that happens, though... as i mentioned earlier, the browser is supposed to have sandboxing built in and on the surface that sounds great... unfortunately they haven't figured out how to sandbox plugins... this seems like a pretty big deal to me because flash is a plugin and shockwave is a plugin and quicktime is a plugin, etc... the very active content that isn't being controlled by way of a whitelist is apparently not being contained by their sandboxing technique either... this seems a little backwards because i'm fairly sure that back in the days when greenborder was still around if you ran your browser in that sandbox the plugins stayed in the sandbox too... no worries, we'll just run chrome inside another sandbox like sandboxie - right? try it and you too may be greeted with the "sad tab"...

i've no idea why (i'm no sandboxie power-user) but all pages lead to sad inside a second sandbox and the helpful reload advice sometimes leads to the frozen tab (and boy does he look cold)...
from the looks of things in process explorer, each tab runs in a sandboxed process with the unsandboxed browser process as the parent, but when the browser process is itself sandboxed it doesn't appear to be able to create its own sandboxed children (even though a sandboxed firefox can launch vlc in the sandbox without trouble)...

so there's no whitelist and not only is the built in sandboxing insufficient, it appears to kill the option of using a 3rd party sandbox to make up for its deficiencies... it's pretty, don't get me wrong, i like the look of it - i also like the idea of faster javascript, but i get more security with my current browser setup and when legitimate sites like yahoo mail or cnn are sometimes found to be serving malicious content that security becomes pretty important...

Monday, September 01, 2008

what are anti-virus best practices?

i'll be blunt - some of this (maybe even all of it) is going to seem dead obvious... i'm sorry if this is old news to you, however it would appear that quite a number of otherwise smart people (be they security professionals or [ahem] rocket scientists) have decided either that av marketing is gospel and thus been bitten by the ensuing false sense of security, or that av marketing should be trustworthy (even though the marketing for virtually every other product on the planet isn't) and become bitter and jaded because av failed to live up to the expectations that the marketing created...

just to be clear, this is going to be best practices for known-malware scanning (what most people consider to be the entirety of av)...

  1. use it - i don't just mean have it installed, i mean sit down and actually scan things (like files you download or removable media you insert into your computer) from time to time (and scanning the entire drive on an automated schedule doesn't count)... install and forget security is bullshit... you need to interact with the software, to learn what its alerts actually look like so you can distinguish them from fake alerts, and to become skilled in the actual use of the tool...

    some may say that's working for your security software instead of making it work for you and real people have real jobs to do, but it doesn't actually take much time or effort to scan incoming materials and both of those other concepts ('working for the software' and 'making it work for you') are nonsense... it's a tool, and like any tool you can only get out of it what you put into it... if you don't know how to use it properly then you ultimately won't do as good a job at protecting yourself with it as you might have otherwise... it's a poor craftsman who blames his tools...

  2. keep it up to date - known-malware scanners are only as good as the knowledge-base they embody... new malware is being created at a rather incredible rate and the only way to make known-malware scanners effective against that new malware is to update those scanners with 'knowledge' of that new malware...

    sure there are other types of anti-malware software that don't require such updates, but they also don't come with expert knowledge about known-malware built into them and so are of little diagnostic value when prevention inevitably fails... also, it's always easiest to prevent something bad if you 'know' specifically what to look for...

  3. quarantine first - don't trust the scanner to automatically delete things it thinks are bad... scanners make mistakes and you don't want to compound those mistakes by allowing the scanner to automagically delete critical files...

    trust the results enough to consider that the file(s) in question may be bad, but verify those results, and verify that it's safe to get rid of the file(s) before you actually do so... trust but verify...

  4. don't rely on it alone - just as you shouldn't place absolute trust in its results when it detects something, you also shouldn't place absolute trust in it when it doesn't find anything... this is probably the best practice most directly in conflict with av marketing, and there are a number of people i really wish would stop listening to marketing and catch up because i learned of the benefits of using a multi-layered approach (what would be better known now as defense in depth) back in the early 90's thanks to the people who actually made (rather than marketed) this stuff...

    you need to use other types of anti-malware technology in conjunction with scanners (not just additional scanners) if for no other reason than because there will always be a window of time between when a new piece of malware is created and when an update for that malware is made available... in other words: if the malware's too new, a scanner won't do...

  5. scan from a known-clean environment - just as you shouldn't necessarily trust the scanner you also shouldn't trust an infected or even possibly infected machine... this likely won't seem intuitive since the av industry itself has for years been producing features and services that contradict this such as web based scanners or the ubiquitous scheduled system scan... in an effort to be less of an uncompromising s.o.b. let me say that those are features and services that are offered for convenience and shouldn't be solely relied upon as they do not replace outside-the-box scanning...

    you can't trust a compromised environment to accurately report its own integrity... the code that runs first wins, and the only way to make sure malware doesn't run first is to operate in an environment where no code from the suspect system has run; not the operating system, not even the boot sectors...


now, hopefully, most or all those smart people who i know are familiar with the concept of best practices will modify their expectations and stop listening to those marketing departments that are filling their heads with lies... (stop. listening. to marketing!)