Monday, December 31, 2012

formula for the future

while reading over some of the prediction-posts that tend to spring up towards the end of the year it struck me that many of these predictions follow from pretty basic axioms. sometimes they're particular to the malware world but other times they're even more general and are simply combined with a malware concept to make a malware prediction. at any rate, i thought perhaps i should point out some of these things and help prediction-makers in the future, as well as offer a different perspective on predictions of these types.

  • the end of something as we know it - the only constant is change, you can never step into the same river twice (because having stepped in it the first time changed it), etc, etc. things are always different in the future than they were in the past. note that the prediction is rarely about the end of something but rather the end of something as we know it. all that means is that the something in question is going to change in a way that some people will consider significant.
  • the dynamic equilibrium between focusing on software exploits and social engineering will continue to be dynamic - when exploits are hard to come by the bad guys will focus on social engineering to get the job done, because it is a job and they want to get paid. when exploits are easy to come by, focus on using them will increase because it's easier to fool an automaton consistently than it is to fool people consistently.
  • software that has been popular to exploit in the past will continue to be popular to exploit in the future - bad guys will continue to focus on many of the same pieces of software because that's what their victims use and because they've had so much success there in the past. it's where the money is. good guys will continue to focus on many of the same pieces of software because that's what many people use and thus where the greatest impact can be made. in essence, so long as frequently exploited software remains popular among users it will continue to be a valuable target.
  • we will learn more things about more things because of attacks - attacks are generally disruptive in one way or another. even if it's an attack that simply compromises confidentiality rather than availability, it still disrupts the process of using a system or service even if the system or service isn't itself disrupted. disruption has a tendency to highlight things that we would never have known if everything continued to run smoothly and the disruption had never taken place. disruption gives us an indirect view into things that might otherwise remain opaque.
  • trends that are increasing will continue to increase, trends that are decreasing will continue to decrease - inflection points are rare. if they weren't it would be much more difficult to recognize trends in the first place. as a corollary, emerging trends will go mainstream. 
  • the world is increasingly being made out of computers, and as new marriages of old-tech and computers become mainstream the resulting new-tech will become a target for attack - everything new opens up new possibilities for attack, and everything that is made new by sticking a computer in it opens up new possibilities for attacking that computer. furthermore, the more popular something is, the more profit can be had by attacking it, and so the more tempting a target it becomes.
  • people will grow tired of defending themselves the same way they always have and try to find new alternatives - it is a quirk of human nature that we are always looking for new things. it is also true that we are largely unsatisfied with our current defensive capabilities (for whatever reason).
  • new defenses will be developed to ward off new attacks and those defenses will be met with new offensive countermeasures - this is just the same offensive vs. defensive cat and mouse game that it's always been and always will be.
  • new platforms will offer promise and seem secure, until they stop - figuring out how to successfully attack something without a lot of prior knowledge is difficult and time consuming, but it eventually happens, and the more it happens the faster additional attacks can be formulated until eventually we recognize that the platform wasn't the promised land we had hoped for.
  • everything old will become new again - over time, as new people shift into a population and old people shift out, that population will collectively forget the past. at least for a while until someone figures out how to do new things using old concepts and then a renaissance occurs. we've already seen that occur with stealth, as well as boot sector infection/modification.
if you see the shadow of a prediction you've made in the list above, congratulations, you probably made a formulaic prediction. don't feel bad, you get better results using a formula than by trying to pluck the future out of thin air.

Thursday, October 04, 2012

sector 2012

i wasn't sure i was going to post anything about my experience at sector this year. i mean, there comes a point at which you all must get it that it's a good security conference and you should all go, right? well, some thoughts were brought to the fore at the end that pretty much cemented the fact that i was going to post something about it so it might as well be within the larger context of my usual "this is what i saw/heard/thought at sector this year" post.

this year the conference was back in the metro toronto convention center's south building. that's where it was held the first time i went in 2008, back when sector was still small. it's grown considerably since then. before the conference i was thinking that i kinda preferred the old days when it was small. turns out part of that preference was a preference for the south building. sure, they may be sticking us in the deepest, darkest hole in toronto (actually, it's quite well lit) but the space is so much better (and the washrooms so much bigger - no more lines that stretch out past the door, with presenters begging to cut in line so that they can get things done and still make it to their own talk in time).

i pretty much avoided the vendor hall as best i could. unfortunately that meant i didn't spend much time checking out cool things like the lockpick village, but i really didn't feel like i had the tolerance for all that marketing this year. i will say, though, i thought symantec's choice of putting their name on a rubik's cube was interesting. i don't know if it was their marketing department's intention, but associating yourself with something that seems like it should be easy but turns out to be fiendishly complex sends a really interesting message.

speaking of questionable marketing, dave lewis grabbed this shot of a flyer that was placed around all the tables in the lunch/keynote hall. in case you're not aware, in this context "flame" refers to the 'super surveillance software' that was apparently related to stuxnet. they're trying to say that their whitelist would have stopped flame, but since flame is said to have been able to spread through windows update and since people who use whitelists generally whitelist the binaries that come through windows update, i have a hard time buying their claim.


the first keynote of day 1 was an excellent talk about lawful access by law professor and copyfighter michael geist. like others, i found the statistic that ISPs handed over subscriber data voluntarily (without a court order) over 90% of the time to be pretty troubling, and i also think it genuinely calls into question the need for lawful access regulations. is that remaining few percent really worth trampling privacy without judicial oversight? i don't think so.

the first regular talk i attended was jamie gamble's talk about the vulnerabilities that time forgot. i was surprised to learn that this was actually a fairly *nix centric talk, and that while *nix had once earned a reputation of being much more secure than windows the reality now may well be the opposite because of all the advancements microsoft has made.

the next talk i meant to attend was steve werby's talk about QR codes. unfortunately the talk didn't actually happen. however, i could already anticipate what some of the problems with QR codes probably were - maliciously crafted QR codes that could exploit the reader code itself, or QR codes that point to websites that exploit the browser - and charlie miller's lunchtime keynote on attacking NFC had a number of parallels to what i anticipated those QR problems to be.

that charlie miller keynote was quite entertaining, of course, but i can imagine some more creative ways he might have tried to surreptitiously read his friend's hotel key card than just holding his phone up to his friend's arse as they walked around. maybe they wouldn't have gotten the phone to within the 4cm range of the key, but even a low probability is better than the zero percent chance associated with not even making the attempt.

following the lunch keynote i went to gunter ollman's talk on threat attribution via DNS. i did this in part to test out a theory. when he blogs he mentions DGA (or domain generation algorithms) a lot, maybe even too much, and i wondered if that was going to come up in the talk. it turns out not so much. unfortunately he does seem to be somewhat softer spoken than a lot of the other presenters and when you combine that with the open door and people milling about and chatting outside the room it seemed he was unwittingly competing with background noise and not always winning. i may just give the video of the presentation a watch in spite of having attended it live.

after that i attended michael perklin's anti-forensic techniques talk where i got a lot of ideas about what to do to make investigations too long and expensive to be of value if i ever turn to the dark side. also there were countermeasures, but i consider the chances of my ever performing a forensic investigation even less likely than my turning to the dark side. still, it's always interesting to hear about topics outside my usual comfort zone.

finally, for the last talk of day 1 i attended the introduction to web app testing talk by dave miller and assef levy, in part because i thought that it could be relevant to the day job and also because i wanted to get a taste for this new security fundamentals track that sector was offering and that this talk was slotted into.

as an aside, i think the introduction of the security fundamentals track points to the influence of the guys from the liquidmatrix security digest podcast, as they have a similarly named/themed segment on their podcast that i think really stands out compared to the other security podcasts i've sampled. unless, of course, the fundamentals track was introduced last year (when i didn't attend), in which case i suppose the influence traveled in the opposite direction.

day 2 of the conference started out with a keynote by jim reaves about global efforts to secure cloud computing. unfortunately, at that early hour and with that topic, i found my mind wandering to other things more often than not. i'll touch more on that soon enough though.

the first talk i attended on day 2 was jon mccoy's hacking .net applications, which was very interesting and i plan on sharing it with my colleagues at work when it becomes available for viewing. thankfully jon handed out materials at the end of his talk so that i can share stuff before the video even becomes available (probably later today, or earlier today depending on whether one goes by when this is written vs when it's published).

after that i attended ed bellis' talk about the security mendoza line. it didn't really speak to me. oh well, you can't please everyone all the time.

the lunch keynote was kellman meghu's very humorous attempt to turn star wars into an allegory for security efforts within an organization. the empire, as you may recall, encountered some problems on their way to ruling the galaxy and there are a number of things they could have done better.

following lunch i attended steve werby's talk about busting hashes. in fact, i attended it twice - before and after the fire alarm was pulled (which seems like it could have been an excellent diversion for some nefarious activity). it was interesting to learn about how one actually approaches a task like that, as well as how cheaply it could be done using amazon.

finally, i attended chester wisniewski's talk about the blackhole exploit kit. this was the second talk from the fundamentals track that i attended and frankly i'm confused about its classification as a security fundamental. for one thing, a single exploit kit is a very specific and narrowly defined topic for a talk classified as security fundamentals. further, in spite of being a student of the malware field for over 20 years, i still found a few things worth taking notes about. in fact, as i mentioned to chester afterwards, his talk was on par with malware related talks i'd attended in years past, before they even introduced the security fundamentals track.

now, i know some people may not pay much attention to which track a talk may be in, but i actually do and i suspect very strongly i'm not the only one. i pay attention because experience doing otherwise has taught me the value of those classifications. talks in the management track bore me (i'm no manager), and i've found sponsored talks to be rather disappointing in the past. this new fundamentals track i'd interpret as being for when you know you're not strong in a subject that you want or need to know more about, and the earlier fundamentals talk i attended about web app testing certainly bears that interpretation out.

so why did this particular talk (and i suspect the previous fundamentals talk on targeted malware, which i didn't get a chance to see but which at first blush also seems poorly classified) get put in the fundamentals pile? well, there are 2 main possibilities i can see. the first is that the sector folks got some really good non-fundamentals talks that they really wanted to squeeze in somewhere and they just happened to have space where the fundamentals talks were supposed to go. this certainly seems plausible, but in that case there really was no need to present them to attendees as though they were actually fundamentals just because they're taking up spaces that had been reserved for fundamentals talks. that just winds up giving people a false impression of what the experience of attending the talk will be like.

the other possible contributor is something i've actually seen a fair bit of in general security circles. there's an attitude or school of thought that says essentially "malware is old hat, we know this stuff already", and while that may be true for some, the fact that there are attendees, presenters (including this year), and even thought leaders who appear incapable of drawing even the most basic distinction between viral and non-viral malware and instead simply call everything viruses demonstrates pretty clearly that not everyone actually knows this malware stuff already. sure they're familiar with the malware phenomenon (who isn't these days) but there's a world of difference between familiarity with a subject and actually knowing it. i'm familiar with the television show "dancing with the stars", even though i've never watched it and can't possibly know very much at all about it.

and make no mistake, i'm not talking about obscure little details like the difference between keyloggers, screen grabbers, and form grabbers. viral vs. non-viral malware is one of the most basic and fundamental delineations you can make in the malware set. viral and non-viral malware are as different from each other as plants and animals - sure they're both alive and you can kill them both with fire, but the one that runs away is a heck of a lot harder to kill that way.

what i would like to see, what had my mind pre-occupied during the cloud security keynote, and what the introduction of the fundamentals track made me think might actually work, is a true malware fundamentals talk - malware 101 if you will - because from my perspective it's needed. it's painful watching one presenter after another, one thought leader after another, one authority after another, all reinforce in the people trying to learn about security the mental model about malware that your mom and pop had back in the mid-90s. how effective has that mental model really been for your parents? has it empowered them to better control malware-related outcomes? i have a feeling it probably hasn't, so is that really the mental model you want to foster at the "security education conference" of toronto?

unfortunately, confirmation bias and the dunning-kruger effect being what they are, i suspect any such fundamentals talk would fall on a lot of deaf ears (or not even be attended, as seemed to be the trend with fundamentals talks this year - they seemed to have the poorest attendance with the most walk-outs of all the talks i attended).

Monday, September 03, 2012

on exploit detection

data is code and code is data. i know that people like to think of data and code as being inherently different and separate from each other but in the end it's all just symbols in various languages on a real-world analog to the turing machine's infinite tape. fetching the next instruction, decoding it, and performing an operation based on what resulted from that decoding is not intrinsically different from fetching the next chunk of data, parsing it, and performing an operation based on what came out of that parsing. the ability to treat code as data is what allows us to distribute software, and the ability to treat data as code is what allows us to add new functionality to our general purpose computers that they weren't able to do before.
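to make that concrete, here's a trivial python sketch (everything in it is invented purely for illustration) of the very same bytes being treated first as data and then as code:

    # the same bytes, first treated as data...
    blob = b"print('hello from the tape')"
    print(len(blob), blob[:5])    # we measure it, slice it, store it

    # ...and then treated as code: handed to the interpreter, the very same
    # bytes stop merely *being* something and start *doing* something
    exec(compile(blob, "<blob>", "exec"))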

the distinction between programming languages and other input languages is simply that a programming language is intended to be used by programmers to create programs that are used by other people and other input languages are intended to be used by everybody else for (ostensibly) less complex purposes. it's really a matter of complexity, more than anything else, but it turns out that there are many input languages that aren't thought of as programming languages but are turing-complete and so are just as complex as any programming language. further, it's not just that there are two groups of languages (trivial and complex), but rather an entire spectrum.

with that in mind it's little wonder that data in the form of exploits can be just as bad as more traditional malware, but the implications go beyond just that. since data and code are not intrinsically different, since exploits are essentially malicious programs written in the input language for a vulnerable piece of software or hardware, some of the things we know to be true about malware should also hold for exploits.

specifically, the problem of deciding if an arbitrary input exploits a vulnerability seems like it should be reducible to the halting problem. i can certainly imagine trivial input languages where the presence or absence of an exploit is easily decidable, such as one where there are only 2 possible inputs, a legitimate one and one that triggers a vulnerability. however, in general, and certainly in no small part due to the existence of turing-complete languages, i'm fairly confident in saying that this problem is analogous to the virus detection problem. and THAT means that the problem of exploit detection, regardless of how it's approached in practice, is ultimately subject to the same limitations as the problem of virus detection.
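for those who like to see the shape of such a reduction, here's a rough sketch (the functions are hypothetical stand-ins, not any real API): if a perfect exploit decider existed for a turing-complete input language, it could be used to decide whether arbitrary programs halt, which we know is impossible.

    # hypothetical oracle: True iff the input triggers the vulnerability in
    # the consuming application. no real implementation can be perfect, and
    # that's exactly the point of the argument below.
    def detects_exploit(input_text: str) -> bool:
        raise NotImplementedError

    def would_halt(program_source: str, program_input: str) -> bool:
        # craft an input, in a turing-complete input language, that first
        # simulates the given program on the given input and only then
        # (i.e. only if that simulation ever finishes) triggers a known bug
        crafted = (
            f"simulate({program_source!r}, {program_input!r})\n"
            "trigger_known_vulnerability()"
        )
        # the crafted input is an exploit exactly when the program halts, so
        # a perfect exploit decider would also decide the halting problem
        return detects_exploit(crafted)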

to that end, testing exploit detection can run into the same methodological problems that testing virus detection can. for example, creating one's own exploits and testing against those instead of drawing exploit samples from the wild presents the same kind of problem that creating one's own viruses and testing against those would. namely that what is created in the lab does not necessarily correspond to what exists in the wild and so a product's ability to detect what was created in the lab doesn't necessarily correspond to its ability to detect what's in the wild. certainly it's trivial to see how that would be true for products that detect known exploits using a method similar to known-malware scanning. but even for products that attempt to parse suspected exploits exactly the same way a vulnerable application would, this would be the equivalent of emulating suspected viral code, which we already know can't work all the time either (otherwise the virus problem would be decidable and we could then use it to solve the halting problem), so there could be exploits in the wild that elude such detection. as such we really should be regarding exploit detection testing that uses in-house exploits on an equal footing with virus detection testing that uses lab-made viruses.

what is a vulnerability

a vulnerability is generally considered to be a mistake or oversight that allows the vulnerable program or system to behave in an unintended and undesirable way in response to a particular input.

alternatively, in the sense that the inputs a program accepts represent a language, a vulnerability is a condition where unanticipated functionality is exposed and can be called like a programming API by 'programs' written in the input language in question (otherwise known as exploits).
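a contrived python example of that second definition (the parser and its 'input language' are invented for illustration): the author only meant to accept "name = number" lines, but the implementation quietly exposes the whole interpreter to whoever writes the input.

    # a deliberately naive settings parser - its intended input language is
    # nothing more than "name = number" lines
    def load_settings(text):
        settings = {}
        for line in text.splitlines():
            name, _, value = line.partition("=")
            # the mistake: eval() exposes far more functionality than the
            # intended input language was ever supposed to reach
            settings[name.strip()] = eval(value)
        return settings

    print(load_settings("timeout = 30"))                       # intended use
    print(load_settings("owned = __import__('os').getcwd()"))  # the exposed 'API' called by a crafted input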

the occasional exposure of unwanted functionality is unfortunately pretty much an inevitability because the complexity of modern systems makes it next to impossible to anticipate all possible outcomes for all possible inputs. only the most trivial of programs can be made proof against this problem.

although vulnerabilities generally do occur as a result of a mistake or failure to anticipate something, it's also possible for undesirable functionality to be exposed intentionally. these are sometimes considered to be a kind of backdoor. however the vulnerability came about, the exposure of that functionality is unintended or undesirable to someone - be they the software vendor or the software consumer.


Monday, July 30, 2012

the folly of offensive cyberwarfare

i often feel like i can't speak freely about cyberwarfare (due almost entirely to my principles about not helping or giving ideas to those who make things worse, be they criminals or warmongers), but it's hard to deny the importance of the subject, and frankly when i read what others have written i can't help but think they haven't really thought things through very well.

when it comes to the development and use of digital weapons there are a couple of key points whose implications need to be understood and kept in mind. the first of these is the problem of attribution. the difficulty in attributing the source of a computer attack is both tactically advantageous, and strategically constraining. the advantages should be obvious - you can attack an opponent without the opponent knowing who is responsible for the attack (unless you screw up and reveal yourself). the problems begin, however, when you consider that the opposite is also true - one or more of your opponents can attack you without you being able to tell who it was.

consider what that means. if you can't tell who is attacking you, how can you possibly retaliate? imagine you're blindfolded, your ears are plugged, you're handed a gun, and stuck in a room with other people who may or may not also have blindfolds, earplugs, and guns. if someone starts shooting at you, how can you realistically return fire to defend yourself without knowing where to shoot? without the ability to target your opponent you cannot retaliate, you cannot end him before he ends you. further, when the threat of retaliation becomes empty like this, deterrence no longer works. as a result, so-called cyberweapons have no defensive value.

in the absence of attribution, a conflict must consist entirely of first strikes. there is no retaliation, there is no deterrence, there is no scaring an enemy off by showing what you can do, there is no point to visibly stockpiling armaments. that is significantly different from most conventional models of warfare. this is one of the reasons why cyberwarfare must only ever accompany traditional warfare - only then can combatants avoid firing blindly in the dark.

another important aspect of digital weapons to keep in mind is the fact that they're digital. they're code, bits and bytes inside a computer. what is the one thing computers are exceptionally good at doing with those bits and bytes? copying them. imagine a world where it's expensive to develop guns and tanks and bombs from scratch, but it costs virtually nothing to copy them. that is the world of cyberwarfare, and that is a world that actually does not favour the attacker, per se, but rather one that favours the forager (one of the things sun tzu teaches is to forage on the enemy) because s/he gets the most benefit (a sophisticated digital weapon) for the least cost.

when weapons cost a lot to develop from scratch but very little to copy, what conditions do you suppose would make their development and use make sense? if you could eliminate the possibility of copying and re-use, or if the weapon assured you a decisive victory over your opponent, then it wouldn't matter that it would be falling into your opponent's hands simply by you using it. unfortunately in the real world a nation has many opponents. they cannot all be fought at once and so a decisive victory against all whose hands such a weapon may fall into is not possible.

what's more, not all of those opponents are necessarily other nation states. the low cost of copying weapons means that the barrier to entry on this battlefield is lowered and more mundane opponents like terrorists or even sophisticated criminals can join the fray. as you can well imagine, those kinds of opponents are far less disciplined and restrained than a nation state would be.

our best example of a digital weapon thus far is stuxnet. it's believed to have cost millions of dollars and many man-years of effort to develop, and now anyone who wants a copy can download it for free from the internet. i would be remiss if i failed to point out that by now stuxnet is pretty well neutered (since the windows vulnerabilities it exploited have been patched and most anti-malware will detect its presence) and it would actually take a fair bit of time and money to replace the neutered bits so it could be re-used; but there was a time before that was true when stuxnet was still in many people's hands and could have been re-used at a much lower cost. as strange as it may sound, the malware's discovery and subsequent neutering actually served to mitigate the potential for its re-use. its creators are lucky it happened before the malware could be re-used against them, their allies, or other interests they might have. that might not be the case next time.

it's a peculiar irony that the people most capable of developing digital weaponry (the technologically advanced and dependent) are the same people who have the most to lose if such weaponry is used against them. this should make it obvious that defense, not offense, is where one's money and effort would be better spent. just so i'm not that guy who makes overly general, hand-wavy suggestions, here are some ideas that are more specific than just "you should do defense":
  • fault tolerant designs
    • redundancy is already something we know how to do, but we don't always do it well (as the 2003 blackout clearly demonstrated). the internet is said to be so fault tolerant that if part of it goes down the rest will just route around it. there are many paths to the same destination. obviously that's a property we want for power, communications, water, etc. it's something we should be designing for and unfortunately, because it costs, it's something we need to pay for.
    • ease of recovery is something we perhaps don't think quite as much about. how easy is it to replace physical equipment that no longer operates as intended? how easy is it to overwrite logical systems from backups? how many minutes, hours, or days does it take? aiming to minimize that time also serves to minimize the impact of anything unfortunate happening to the system in question.
  • system hardening
    • vulnerability research and patching is something that already enjoys a certain measure of success in consumer and enterprise environments. if a nation wants to protect its critical infrastructure then perhaps more money and energy should be poured into researching vulnerabilities in that critical infrastructure.
    • eliminating or rethinking external connections (including both network connections as well as removable media) basically stands in direct opposition to the trend of hooking more and more of our most important systems up to one of the most dangerous networks on the planet (the internet). as with most things, the business incentives that are driving the current trend need to be accounted for. the cost-saving benefits of remote connections are understood, but there are other ways of achieving that goal without resorting to the internet - that's simply the cheapest/easiest option.
    • whitelisting of code and possibly even data on critical infrastructure systems (a bare-bones sketch of the idea follows this list), because quite frankly why should new unknown material be introduced to these systems? it may make sense to occasionally and in a very controlled way apply fixes or make changes corresponding to changes in the industrial processes those systems are a part of, but in general those machines should be unchanging and that should probably be enforced. as a corollary, eliminating dual use is probably a good idea too. there's no reason you should be writing your TPS report on a machine that can control whether the lights stay on.
  • early warning detection
  • evasion
    • disinformation can be useful in a couple of ways. it can raise the cost of successfully performing an attack by tricking the attacker into doing useless things, and it can also trick the attacker into doing something that sets off an alarm (ie. they walk into a trap).
    • decoy systems that look and act for all intents and purposes just like the real ones can reduce the impact and success of attacks, especially if they have the same warning sensors the production systems do, by turning the problem of attacking the right system into a game of chance for the attacker. holding out baits for the attacker to reveal their presence and/or intentions can certainly confer advantages on a defender.
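as promised above, a bare-bones sketch of what code whitelisting boils down to (the hash value and names are placeholders, and a real implementation would live in the operating system rather than a script):

    import hashlib
    import sys

    # placeholder allowlist: sha256 hashes of the only binaries this system
    # is ever supposed to run, populated when the system is commissioned
    ALLOWED = {
        "0000000000000000000000000000000000000000000000000000000000000000",
    }

    def permitted(path):
        with open(path, "rb") as f:
            return hashlib.sha256(f.read()).hexdigest() in ALLOWED

    if __name__ == "__main__":
        target = sys.argv[1]
        if not permitted(target):
            sys.exit("refusing to run unlisted binary: " + target)
        print(target + " is on the whitelist; launching would proceed here")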

i've made a few veiled (and not so veiled) references to sun tzu. while some people may argue that "the art of war" is over-played and not particularly relevant to information security, when it comes to warfare of any kind i think it's very relevant:
Sun Tzu said: The good fighters of old first put themselves beyond the possibility of defeat, and then waited for an opportunity of defeating the enemy.
that is to say, of course, that we need to take up a defensible position first before we start attacking. by most accounts (including president obama's) we aren't there yet.

Thursday, June 14, 2012

flame's impact on trust

if you haven't watched it yet, i encourage you to check out the video of chris soghoian's talk at personal democracy forum 2012. the TL;DR version is that, because it compromised the microsoft update channel, the flame worm damaged our trust in automatic updates and that's a bad thing because automatic updates have done so much good for consumer security. mikko hypponen is even reported to be planning to write a letter to barack obama to ask him to stop the US government from doing this sort of thing again.

unfortunately, i think this is short-sighted, and not just because you can't put the genie back in the bottle.

inherent in the idea of automatic software updates is this little thing called automatic execution. i've written before about how problematic automatic execution can be. it all comes down to delegating a security decision (to execute or not to execute) to an automaton, and fooling an automaton has never been considered difficult. this particular example might be one of the most difficult cases of pulling the wool over a machine's eyes there is, and yet it was still done and done in a big, headline-making way. an automaton may be more consistent about how it does things than ordinary people are, but that may not necessarily be a good thing for security. being consistent and predictable is, no doubt, part of what makes an automaton easier to fool than a person.

the trust we placed in automatic updates was, if not completely misplaced, then at least partially misplaced. microsoft may have made it harder to fool their code again, but i doubt every other software vendor in the world has put an equivalent amount of time and engineering effort into their own update security - some (many?) are probably within the realm of what more traditional cybercriminals can exploit.

we placed trust in microsoft's code, in the automaton they designed, not because it was trustworthy, but because it was more convenient than being forced to make the equivalent decisions ourselves. furthermore, we relied on it for protecting consumers because it's easier than educating them (in fact many still don't believe this can or should be done). it can certainly be argued that we can't rely on consumers to make good security decisions all the time, but clearly we can't rely on automatons to do it either. a lot of effort has been put into developing controls to prevent bad things from getting through, but what has been done with regards to detecting when those preventative controls fail? not a heck of a lot, and i don't have a lot of confidence in the idea of creating a second automaton to spot the failings of the first.

if the trust we had in automatic updates is fading then let it fade. we never should have been trusting it as much as we were in the first place. maybe, with more reasonable limits on that trust, we can begin to develop more meaningful countermeasures for attacks exploiting this particular brand of automatic execution (and it's important that we do so, because attacks only ever get better).

Sunday, June 03, 2012

correcting a rebuttal

so if you haven't read it yet, mikko hypponen wrote a non-apology for why his company and companies like his failed to catch the flame worm. in response, attrition.org's jericho wrote a rebuttal, taking mikko to task for the perceived bullshit in the aforementioned non-apology. while i think his heart was in the right place, a number of the specific criticisms jericho makes are unfortunately based on an understanding of the AV industry that is too shallow.

When we went digging through our archive for related samples of malware, we were surprised to find that we already had samples of Flame, dating back to 2010 and 2011, that we were unaware we possessed.
In the second paragraph of this bizarre apology letter, Mikko Hypponen clearly states that the antivirus company he works for found or detected Flame, as far back as 2010. In this very same article, he goes on to say that F-Secure and the antivirus industry failed to detect Flame three times, while saying that they simply can't detect malware like Flame. Huh? How did he miss this glaring contradiction? How did Wired editors miss this?
i can tell you right now how this apparent contradiction isn't actually a contradiction at all. AV companies receive many submissions per day, more than can possibly be examined by humans, and a great many of those submissions are not actually malware. AV companies use automated processes* that they develop in-house to determine if a sample is likely to be malware or not and (if possible) what malware family it belongs to. not everything that goes through these automated processes gets flagged as malware and that technical failure to recognize the sample as malware 2 years ago is almost certainly the failure that mikko was trying to explain. they still keep the sample, mind you, even though they don't have reason to believe it's malware, and that's how mikko was able to find it in their archives.

(* those automated processes can't easily be distributed for customer use, by the way, as they require too much expertise to use, not to mention too much data as a comparison against a large corpus of other known malware is part of those processes)
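to give a feel for what those automated processes might look like (this is not anyone's actual pipeline, just a rough illustration with made-up names), triage of a submission amounts to a series of cheap checks, and a sample that doesn't trip any of them gets archived without ever being flagged:

    import hashlib

    # hypothetical stand-ins for the kind of data an AV back-end accumulates
    KNOWN_MALWARE_HASHES = set()   # hashes of samples already classified as bad
    FAMILY_CLASSIFIERS = {}        # family name -> function(sample) -> bool

    def triage(sample_bytes):
        digest = hashlib.sha256(sample_bytes).hexdigest()
        if digest in KNOWN_MALWARE_HASHES:
            return "known malware"
        for family, matches in FAMILY_CLASSIFIERS.items():
            if matches(sample_bytes):
                return "probable " + family + " variant"
        # nothing matched: the sample is kept in the archive anyway, but
        # nobody looks at it again unless something sends them digging
        return "archived, not flagged"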

really, when you think about it, if they'd known it was malware 2 years ago, they'd have added detection for it (even if they didn't look any closer - adding detection is also largely automated, especially for something that doesn't try to obscure itself or otherwise make the AV analyst's job harder) and then would have trumpeted the fact that they've been protecting their customers from this threat for years when it was finally revealed what a big deal it was. i think we've all seen this scenario play out before, and it would certainly serve their business goals better than admitting failure would.

hopefully this explains why this sort of rage...
Rage. Epic levels of rage at how bad this spin doctoring attempt is, and how fucking stupid you sound. YOU DETECTED THE GOD DAMN MALWARE. IT WAS SITTING IN YOUR REPOSITORY.
... is misplaced.

 It wasn't the first time this has happened, either. Stuxnet went undetected for more than a year after it was unleashed in the wild, and was only discovered after an antivirus firm in Belarus was called in to look at machines in Iran that were having problems.
For those of us in the world of security, hearing an antivirus company say "we missed detecting malware" isn't funny, because the joke is so old and so very tragically true. The entire business is based on a reactionary model, where bad guys write malware, and AV companies write signatures sometime after it. For those infected by the malware, it's too little too late. It's like a couple of really inept bodyguards, who stand next to you while you're getting beaten up and say, "I will remember that guy's face next time and ask management to not let him in." Welcome to the world of antivirus.
this frustration is something i see coming from a lot of security professionals. unfortunately the truth is that there are technical and theoretical limitations on what can be done with "detection". it frustrates me that others can't seem to recognize and accept this fact. detection requires knowledge of what one is looking for, whether that be a binary or a behaviour or something else. you can't look for something without knowing something about what you're looking for.

while it is true that AV vendors claim their products have heuristics - technology that can detect as yet unknown malware - that is still based on knowledge gained from past malware. it's reasonably well suited to detecting derivative work, but anything truly novel (and stuxnet certainly was that) is going to get through.

 When researchers dug back through their archives for anything similar to Stuxnet, they found that a zero-day exploit that was used in Stuxnet had been used before with another piece of malware, but had never been noticed at the time.
This statement is a red herring Mikko. Stuxnet used five vulnerabilities, only one of which was 0-day. One local privilege escalation used in Stuxnet was unknown at the time, the rest were documented vulnerabilities. If your excuse for the AV industry is that Stuxnet wasn't detected because it used a 0-day, your argument falls flat in that you should have detected the other three code execution / privilege escalation vulnerabilities (the fifth being default credentials).

a couple things wrong with this. first, what i think jericho meant to say is that AV should have detected the exploits rather than the vulnerabilities, as the vulnerabilities weren't in stuxnet but rather in the software that stuxnet was attacking.

second, as the provided chart clearly indicates, the disclosure date for 4 of the 5 vulnerabilities is after the discovery date for stuxnet (2010-06-17). i'm pretty sure that means the only previously documented vulnerability was the default password. either that or jericho is actually right and simply using poor/contradictory evidence to back up his point.

finally, and probably more to the point, mikko wasn't offering the 0-day exploit as an excuse for why AV failed to detect stuxnet. he was pointing to it as a previous example of something being missed despite part or all of it being in their archives. he was trying to explain how just because something is in their archives that doesn't mean they are aware of its significance. try thinking of AV vendors as being like pack rats - the only reason they'd throw something away is if they've already got a copy of it (and even then, i'm not so sure).

 The fact that the malware evaded detection proves how well the attackers did their job.
Again, it didn't evade detection, according to Mikko Hypponen, Chief Blah Blah at an Antivirus Company. He said so in the second paragraph of an article I read. That said, the fact is antivirus companies miss hundreds of pedestrian malware samples every day. Is this because the authors did so well, or that your business model and detection capabilities are flawed? One could easily argue that they are intentionally so (reference that bit about 4 billion dollars).
and again, jericho misinterpreted that second paragraph. now as for AV missing hundreds of pedestrian malware samples a day, i suspect the number is much higher, but that it doesn't miss them for long. a great deal of malware can't be detected before signatures for it are added, and those signatures can't be added until after the vendors get a sample. what sets flame and stuxnet apart from those cases is the length of time the malware was in the wild before signatures got added to the AV products. is this a flaw? once again, you can't look for something if you don't know anything about what you're looking for. in the sense that such a limitation prevents the system from being perfect i suppose it could be considered a flaw; but show me something that is perfect - you can't, can you, because nothing is perfect.

 And instead of trying to protect their code with custom packers and obfuscation engines - which might have drawn suspicion to them - they hid in plain sight. In the case of Flame, the attackers used SQLite, SSH, SSL and LUA libraries that made the code look more like a business database system than a piece of malware.
Where to begin. First, custom packers and obfuscation engines have worked very well against antivirus software for a long time. I don't think that would have drawn any more suspicion. Second, Flame is 20 megs, around 20x more code than Stuxnet. In the world of antivirus, where you are usually scanning very small bits of obfuscated code, this should seem like a godsend. If it isn't using obfuscation, then what is the excuse for missing it? Are you really telling me that your industry is just now realizing the "hide in plain sight" method, in 2012?
first, while custom packers have worked well against the AV software that's distributed to customers, they don't work nearly as well against the processes, procedures, and techniques used by AV companies when processing sample submissions. as a result such custom packers only prevent detection at the customer's site for a relatively small amount of time (though obviously long enough for some customers to get pwned).

second, being huge and not obfuscated works against being recognized as malware precisely because it is so out of character for malware (and thus it's not just the AV industry that's just now realizing the "hide in plain sight" method). unless you're under the mistaken belief that AV companies still reverse engineer every sample they get each day (numbering in the tens of thousands, last i heard), it should be obvious how the lack of obfuscation provides no help in determining if something is malware.

 The truth is, consumer-grade antivirus products can't protect against targeted malware created by well-resourced nation-states with bulging budgets.
No, the truth is, consumer-grade antivirus can't protect against garden variety malware, but it can apparently detect the well-resourced nation-state malware. Oh, that makes me wonder, what do you offer that is better than consumer-grade? Other than a bigger price tag, does it do a better job detecting malware?  
it's actually the reverse of what jericho says here; AV tends to add detection for garden variety malware quickly (thus closing the window of opportunity for the malware to successfully compromise the AV's customers and consequently protecting most of them from it) while they tend to add detection for state-sponsored malware quite slowly (and thus not doing much to protect anyone from it initially). i specifically express this as tendencies because there are exceptions (unless the energizer RAT, which took 3 years for people to recognize, was actually state-sponsored and nobody bothered to say anything).

as for what else AV offers - there are tools (other than scanners) that typically aren't packaged with their consumer product because they require a higher level of technical expertise to operate than home users tend to have. IT folks in the enterprise are more likely to have the necessary know-how to use these tools. when mikko refers to consumer-grade anti-virus products in this context, he's talking about the technologies that require next to no knowledge to use, ones where the knowledge is baked in in the form of signatures. that kind of product isn't going to stand up well against nation-states. technologies which require more of the user - ones where it's the user him/herself looking for anomalies, or where the user decides which code is safe to run or what programs should be allowed to do - have a better chance of helping you defend yourself against a nation-state (at least when paired with a talented user of that technology).

 And the zero-day exploits used in these attacks are unknown to antivirus companies by definition.
And again, this is bullshit. Stuxnet had 3 known vulnerabilities (CVEs 2010-2568, 2010-3888/2010-3338, 2010-2729), 4 if you count the default credentials in SIMATIC that it could leverage (CVE 2010-2772). Flame apparently has 2 known vulnerabilities (CVEs 2010-2568, 2010-2729). Even worse, if antivirus companies had paid attention to these samples sitting in their archives, they may have ferreted out the vulnerabilities before they were eventually disclosed.
once again, i can't find any details suggesting these vulnerabilities associated with stuxnet were known before june 2010. i'm not doubting that that may be the case for one or more of them - i do seem to recall mention of something like that for a vulnerability exploited by stuxnet - but if it's in the documentation i can't find it.

also, once again, these pieces of malware had exploits for these vulnerabilities, not the vulnerabilities themselves. and even if the vulnerabilities were known, programmatically determining if an arbitrary program exploits a particular vulnerability is as reducible to the halting problem as programmatically determining if an arbitrary program self-replicates. if it's a known exploit it should be findable just as known-malware is findable, but otherwise don't hold your breath (unless you're willing to let it happen and detect it after the fact).

 As far as we can tell, before releasing their malicious codes to attack victims, the attackers tested them against all of the relevant antivirus products on the market to make sure that the malware wouldn't be detected.
Way to spin more Mikko. This is standard operating procedure for many malware writers. I believe VirusTotal is the first stop for many bad guys.
this may have been true once upon a time, but the way virustotal now shares their samples with vendors makes them a poor choice for doing malware q/a. and they've been sharing samples for quite some time now.

When a big malware event makes the news, it only helps you. Antivirus firms are the first to jump on the bandwagon claiming that more people need more antivirus software. They are the first to cry out that if everyone had their software, the computing world would be a much safer place. The reality? Even computers with antivirus software get popped by malware. They don't detect all of those banking trojans and email worms like you claimed in this article. They don't protect against the constant onslaught of new threats. Your industry, quite simply, has no reason to improve their detection routines.
as a matter of fact, the individual member companies in the industry have quite a compelling reason to improve their detection routines. AV companies may not need to be better than the bad guys, but they certainly need to try and be better than each other. to adapt the old adage, they aren't trying to outrun the bear, they're trying to outrun each other. they compete. do you think mcafee simply rolled over and accepted the number 2 market position and let symantec have #1? no of course not. they'll try and take that market share if they can and symantec meanwhile will try to hold on to their lead. part of that is done with marketing, but part of it is also done with technological advancement.

I remember in the early 90's, the next big thing in the land of viruses was polymorphing virus code. Antivirus vendors were developing "heuristics" that would detect this polymorphing code. In almost 20 years, how has the development of that gone? Obviously not stellar. Antivirus companies spend their time cataloging signatures of known malware, because that sells. "We detect 40 squirrelzillion viruses, buy our software!" What happened if you developed reliable heuristics and marketed it? "Buy our software, once, and the advanced heuristics can catch just about any malware!" There goes your business model. So of course you don't want to evolve, you want to wallow in your big pile of shit because it is warm and comfortable. You can ignore that overwhelming smell of shit that comes with your industry, because of the money. Don't believe me? Let's hear it from the pro:
actually, traditional polymorphism was attacked in a generic way through emulation. it worked pretty well, but has no effectiveness against server-side polymorphism due to lack of access to the server-side processes. as for developing reliable heuristics, why do you think malware writers perform malware q/a? their new pieces of malware will already evade known-malware scanners simply by virtue of not being known malware. the only reason to take that extra step of performing malware q/a is because the heuristics actually are fairly effective. what the heuristics are not is perfect. they can be fooled (in some cases quite easily, but fooling an automaton has never been considered difficult). i suggested a heuristic countermeasure to malware q/a but i have no idea if anyone actually tried it.
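for anyone unfamiliar with what attacking polymorphism through emulation means, here's a toy sketch (no resemblance to any particular engine; the body, key, and xor decryptor are all invented): every generation of the sample looks different on disk, but emulating its decryptor always reveals the same constant body, which can then be matched with an ordinary signature.

    BODY = b"constant malicious body"     # invented for the example
    SIGNATURE = BODY

    def make_polymorphic_sample(key):
        # each generation is encoded with a different key, so a plain
        # signature scan of the file never sees the same bytes twice
        return bytes(b ^ key for b in BODY)

    def emulate_and_scan(sample, key):
        # a real emulator would recover the body by executing the sample's
        # own decryptor; the key is passed here only to keep the toy short
        decrypted = bytes(b ^ key for b in sample)
        return SIGNATURE in decrypted

    sample = make_polymorphic_sample(0x2a)
    print(SIGNATURE in sample)             # False: scanning the raw file misses it
    print(emulate_and_scan(sample, 0x2a))  # True: emulation exposes the constant body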

i do think jericho has hit the nail on the head about evolving, though. what i don't think many people appreciate is the path which the industry's evolution needs to take. prevention is always going to have failures like what has happened with stuxnet or the flame worm or even the garden variety malware while it's still new. we as users need to grow up and start accepting that fact. there is no magical prevention fairy - if you're old enough to not believe in santa, the easter bunny, and the tooth fairy, then you're old enough to realize that prevention has limitations and always will have limitations.

the evolution that the industry needs to do is to help users come to terms with this fact and help them deal with it by providing tools for detecting preventative failures in addition to the tools they already provide for prevention. to a certain extent they already used to do this but those tools appear to have disappeared from the mainstream. at least a few vendors had integrity checkers back in the day. i remember one from kaspersky labs (which was painfully slow as i recall), and one from frisk software. i've long wondered (and have my suspicions about) what happened to wolfgang stiller, because his product "integrity master" was excellent but it seems to have vanished. i believe adinf (advanced diskinfoscope) is still around but it's very much not part of the mainstream - most people would have never heard of it, and certainly wouldn't think to use it because it's not part of a larger AV suite (as most people have been trained to look for).
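in the spirit of those old integrity checkers, here's a bare-bones sketch of the idea (the baseline file name and all other details are made up): take a snapshot of file hashes while the system is known-good, then compare against it later - anything added, removed, or changed is a detected preventative failure.

    import hashlib, json, os, sys

    def snapshot(root):
        # record a hash for every file under the monitored directory
        state = {}
        for folder, _, files in os.walk(root):
            for name in files:
                path = os.path.join(folder, name)
                with open(path, "rb") as f:
                    state[path] = hashlib.sha256(f.read()).hexdigest()
        return state

    def compare(baseline, current):
        for path in current.keys() - baseline.keys():
            print("new file:", path)
        for path in baseline.keys() - current.keys():
            print("missing file:", path)
        for path in baseline.keys() & current.keys():
            if baseline[path] != current[path]:
                print("modified:", path)

    if __name__ == "__main__":
        root, store = sys.argv[1], "baseline.json"
        if not os.path.exists(store):
            with open(store, "w") as f:
                json.dump(snapshot(root), f)
        else:
            with open(store) as f:
                compare(json.load(f), snapshot(root))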

the strategy most people use to keep themselves safe is what they learn from the AV industry, but the industry mostly just tells them to use product X and product X is almost invariably prevention only (maybe with a dash of cleanup). it is probably the most simplistic and rudimentary strategy possible, but i don't hold much hope the AV industry will ever change that. in the process of teaching the public a more sophisticated security strategy they would in effect be making users more sophisticated and less susceptible to the marketing manipulations that vendors currently employ in their competition with each other. on top of that they'd have to stop using those manipulations themselves and risk losing their existing users to another vendor who hasn't stopped before they have a chance to make the users proof against those manipulations. the principles behind the PDR triad (prevention, detection, recovery) are foreign to most people, and among the rest many have perverted it into something almost as mindless as "just use X".

AV companies are businesses and like all businesses their primary concern is their own financial interests. those interests don't align with the security interests of their customers or the public at large and no amount of hand wringing or foot stomping is going to change that. i don't see any way that the industry can drive the change that's needed while serving their own interests, and i'm not going to hold my breath waiting for them to sacrifice their own interests for the common good. the AV industry responds to market demand and if you truly believe evolution is needed then it's incumbent upon you to help build demand for tools and techniques that go beyond prevention.


(update 2012/06/03 5:11pm -  it appears that jericho has been remarkably responsive to comments from others on twitter so the rebuttal itself is in a state of flux. i'll have to wait and see how this turns out)

Monday, April 30, 2012

prediction vs. tempting fate

how many of you reading this remember conficker? i certainly remember it, but i have a long memory, especially when it comes to regrets. you may recall an apology i posted some years ago concerning the possibility that i might have made a small contribution to the feature-set of that malware by way of giving the bad guys ideas.

well, from where i sit, that may very well have happened again, only it wasn't me this time, it was an AV vendor. now you might expect that, as a result, said vendor may become much more scrupulous about censoring themselves. it's no easy task, let me tell you, but it's certainly something that some of you (myself included) probably expect from the people who are supposed to be protecting us.

alternatively, you might expect just an apology, under the philosophy that it's better to ask forgiveness than permission. that would certainly be easier, although accepting responsibility for negative outcomes is not generally considered good for the public image of a company. as an individual, owning up to one's mistakes and accepting responsibility is considered a mark of maturity, but the rules for companies are unfortunately very different in this regard.

which brings us to the thing you might not have expected, but probably should have - bragging about it as though it were a "prediction":
that link, by the way, points to this story on informationweek.com which in turn points back to a post on the f-secure blog where it was suggested that if the people behind the flashback malware for the mac upgraded to unpatched java vulnerabilities (it had only been using exploits for old, already patched vulnerabilities before) they might affect a lot more people.

is that a prediction or an instruction? f-secure's blog, as you might be aware, is one of the most (perhaps the most) widely read blogs in the entire anti-malware field. it stands to reason that if the people behind flashback are reading any anti-malware blogs, that one is probably on their list. even if it isn't, that particular post was about their efforts and would most likely have been forwarded by someone who was aware of their work (just as, in a small software development company, every press release, news article, and TV spot that mentions your work gets sent to everyone in the company).

would they have upgraded to unpatched vulnerabilities without that suggestion being made? perhaps, perhaps not. we'll never know. do all malware profiteers who use exploits for patched vulnerabilities inevitably upgrade to ones for unpatched vulnerabilities? that's doubtful - exploits for unpatched vulnerabilities are much harder to come by than ones for vulnerabilities that have already been patched. the transition is anything but inevitable, so there exists the very real possibility that f-secure's "prediction" was more like a self-fulfilling prophecy.

but of course, it sounds better if you call it a prediction. it sounds like something that adds value to their voice (though they have plenty already without that) and so helps to build the brand.

it seems to me that openly predicting what the bad guys are going to do next, or speculating on what they could do better, only invites them to take your advice. you might then capitalize on that with liberal amounts of spin, but at the end of the day is giving them ideas really so much more benign than giving them code? don't you tempt fate either way?

Monday, February 20, 2012

to patch or not to patch: an edge case

i find myself in a rather odd predicament today. i've been using an older computer (we'll call it one of my secondary computers since it gets very little use compared to the one i'm writing this with right now) and i got a pop-up notification that i was running out of space on drive C:.

now i want to put this in context; this computer sees very little use - mostly it gets turned on, has some files transferred to or from it, and then gets switched off. i can't remember the last time i actually installed anything on it (for that matter, since i've switched over to using portable software, i can't recall the last time i installed anything on my primary system either) so let's say it's been a really, really long time since i touched the C: drive at all. mostly it's the larger secondary physical disk that gets used.

so you can imagine my surprise when the notification about running low on space popped up. was there something malicious going on? had the system been compromised? no, it was in the process of applying system updates. patches had actually eaten up the majority of my free space - the WINDOWS directory was taking up over 7 gigs of my 10 gig drive. i'm actually in the position where i have to uninstall software so that the patching will succeed.

now, this is an XP system so one might reasonably suggest that i upgrade to the latest version of windows so that i can avoid having all those patches on my system. unfortunately, this system is so old, i doubt it will meet the system requirements of anything newer than XP.

one might also, entirely reasonably, suggest upgrading the hard disk to something larger. storage is cheap, after all. it's a little difficult to justify upgrading the drive just to accommodate microsoft's attempts to fix their earlier mistakes, though. it's certainly not like i'm going to get any additional benefit from greater space on a drive i never make use of.

one could even go so far as to suggest upgrading all the things so that not only would i be able to move to the latest version of windows, i could have more space and a snappier system that is more amenable to being used day to day. but i already have a computer that's more amenable to being used, so really everything that was wrong with the idea of upgrading the drive is also wrong with this plan, in spades.

it's times like this that make one question things we normally take for granted, like: why does patching take up so much space? is the fixed binary that much larger than the one with the error in it? no, that doesn't appear to be what's going on. it appears that windows keeps a bunch of stuff around so that you can uninstall the patch if you want to. does anyone ever actually do that? there may be a way to reclaim the space those uninstall files take up, but it's not obvious just by looking at the system, and right now simply letting the updates happen the way an ordinary user would is actually reducing the utility of the system.
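(for the curious: here's a rough sketch, in python purely for the sake of illustration, of how one might at least measure what those uninstall backups are costing. it assumes the backups live in the usual hidden $NtUninstall...$ folders under the windows directory, which was the convention on XP - and note that deleting those folders only costs you the ability to uninstall the corresponding patches.)

```python
# rough sketch: tally the space taken by XP's patch-uninstall backup
# folders (the hidden $NtUninstall...$ directories under C:\WINDOWS).
# assumes the default windows directory layout used by XP updates.
import os

WINDIR = os.environ.get("SystemRoot", r"C:\WINDOWS")

def folder_size(path):
    """sum the sizes of all files under path."""
    total = 0
    for root, _dirs, files in os.walk(path):
        for name in files:
            try:
                total += os.path.getsize(os.path.join(root, name))
            except OSError:
                pass  # skip files we can't stat
    return total

# find the patch-backup folders sitting directly under the windows dir
uninstall_dirs = [
    os.path.join(WINDIR, d)
    for d in os.listdir(WINDIR)
    if d.lower().startswith("$ntuninstall")
    and os.path.isdir(os.path.join(WINDIR, d))
]

grand_total = sum(folder_size(d) for d in uninstall_dirs)
print("%d uninstall folders, %.1f MB potentially reclaimable"
      % (len(uninstall_dirs), grand_total / 2.0 ** 20))
```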

thankfully the utility that's been lost wasn't really needed anymore. but what about next time? support for XP is ending, but it's not over yet - there are still more patches coming. i'm going to be facing the prospect of no longer getting patches anyway, so i might as well get used to it early - and since the system is little more than a network attached storage device that spends most of its time powered off, i can't really see the harm.

in security, we normally think of applying patches as a no-brainer. it may present some logistical hurdles in the enterprise, but it still needs to get done. sometimes, though, there are cases where it just doesn't pay off. no practice is so universally beneficial that it should be mindlessly applied 100% of the time.

Sunday, February 12, 2012

is the iphone really malware free?

friday morning mikko hypponen posted a tweet about the folks behind flexispy changing the look of their site, and i took the opportunity to pose a question to him about iphone malware. you see, flexispy is (or was) a piece of mobile malware that f-secure first posted about roughly 6 years ago. not only that, but there's a version of the software for the iphone, so i found mikko's repeated statement that there was no malware for the iphone to be a little strange in light of the fact that both he and his company have been aware of software that seems to contradict that claim for quite some time.

the resulting discussion with both mikko and his colleague sean sullivan led in 2 separate directions, so let's look at them in turn. first mikko responded with the following:
@imaguid No malware for iPhones. If you jailbreak your phone: all bets are off. Flexispy runs on jailbroken only.
now to me, this gets to one of the hearts of the matter. when people say there's no malware for the iphone, they're only talking about non-jailbroken phones. the pertinent difference between a normal iphone and a jailbroken iphone is that normal iphones can only install apps from the app store. the app store is a so-called walled garden where all the apps go through a screening process to keep out undesirable programs.

so what people really mean when they say no malware for the iphone is that there's no malware in the app store. this is an important distinction, because the iphone ecosystem (and by extension, the threat landscape) extends beyond the app store. when chris di bona attempted to downplay the threat malware posed to android devices by pointing to google's efforts to keep their android marketplace clean, a number of folks were quick to point out that the android ecosystem extended beyond google's android marketplace, so it seems strange that people would forget the same line of reasoning applies to the iphone as well.

one other thing (well, the only other thing, really) that mikko said was:
@imaguid ...and to top it all: we couldn't do anything about iPhone malware anyway, as Apple won't allow Antivirus products to iPhone.
and you know what? why should they allow them when there's apparently "No malware for iPhones"? whether or not there is malware for the iphone, apple doesn't want people to think there is. there is this (rather old) idea that computers can be as easy to use as an appliance (like a toaster). this idea is actually very appealing. it promises computers that just work, computers that don't get malware, computers that are easy and safe and worry free. that promise is part of the secret sauce behind apple's marketing, but if they allowed AV products in then it would dispel the illusion of the appliance computer and apple's products would lose their lustre. it's very convenient, then, that AV vendors are willing to be complicit in apple's marketing by repeating the claim that there's "No malware for iPhones".

but such unqualified claims are, as mikko has revealed, not technically true. it's not that there's no malware for iphones, it's that there's no malware in the iphone app store.

but wait, is that really true? is there no malware in the app store at all? i'm not sure that's true when we've recently been made aware of apps in the app store that collect and send personal information to a remote server without the user's knowledge or consent. but it's about time i turned my attention towards the much more verbose and nuanced discussion that sean sullivan and i had on the subject. perhaps he can shed light on why these personal info stealing apps shouldn't be considered malware. while mikko didn't question the classification of flexispy as malware, sean informed me that f-secure no longer calls it malware.
@imaguid @mikko But they then added an installation interface, and we have since categorized it as riskware.
that's right - in spite of the fact that it is designed and marketed as a tool for spying on other people, it is not classified as spyware or malware because it was given an installation interface - meaning that the attacker has to have physical control of the phone for at least as long as it takes to install an app. now, on the desktop this might be a meaningful mitigating factor, but on mobile devices where physical access is so much easier to achieve? come on...

why exactly that stops it from being malware in general or spyware in particular in the context of mobile device security i still can't fathom, but sean offered up two things by way of explanation. one being a concern over being sued... by malware vendors. this rationale is something i heard from dr. solomon years and years ago, but i have to admit i had hoped that the industry had become less spineless in the interim. i guess that was too much to hope for. google may stand up to the government on behalf of its users (perhaps not always, and perhaps it doesn't always succeed, but it has tried), but apparently anti-malware vendors only stand up for their users when there's zero risk they'll be challenged.

the other thing he offered was the following definition of spyware from google:
Software that self-installs on a computer, enabling information to be gathered covertly about a person's Internet use, passwords, etc.
apparently it's not enough that the software spies on you in order for it to be called spyware, it has to "self-install" as well. now i'm sure i must be missing something, because this definition seems to exclude anything where the victim is socially engineered into installing the software (it's hard to call it self-installing if the victim is the one installing it). it also seems to exclude the particular trojan horse case where the software actually does perform the function it claims to, so the payload is additional functionality rather than strictly misrepresented functionality. a game that also steals passwords, a text editor that also sniffs network traffic, webcam software that just happens to send the video stream to a second undisclosed location in addition to the intended recipient - all of these are examples of software that ought to be called spyware but which the victim knowingly installs (because the undesirable functionality goes unreported) and which thus fail to meet the "self-install" criterion. this is precisely the type of situation users of the photo-sharing iphone app called path faced.

now, sean also pointed me towards the anti-spyware coalition's risk model description document. i had hoped it would help me learn more about this "self-install" concept that sean assured me was part of an industry-agreed-upon standard definition. things didn't turn out that way, since the term "self-install" doesn't appear in that document, but the topics of installation and distribution do figure prominently in the contexts of both risk factors and consent factors. unfortunately this document from 2007 appears once again to be geared to desktop computing rather than mobile computing. that's probably not too surprising considering it's 5 years old now, but it does highlight the age-old problem of letting context into the classification process. mobile devices are easier to gain illicit physical access to, as well as being shared more freely (and more frequently) in social circumstances by their owners. the issue of consent at the point of install has far less significance as a risk mitigation for mobile devices. furthermore, the issue of consent at the point of install pretty clearly drops the ball in the case of trojans because it's not necessarily fully informed consent.

as the risk model description document demonstrates, somewhere along the line the industry gave up on basing its classification system on functional definitions. sean insists that this is a "stricter process" but i think it's more correct to say that it utilizes more criteria than a functional definition system would. utilizing more criteria doesn't always lead to a stricter process because not all criteria are created equal and, at least in the case of the risk model description document, some of those criteria are used to create exceptions (which are generally not the hallmark of a strict process).

one of the last things sean wondered was how the AV industry could possibly use my (supposedly) broader definition(s) and not be accused of FUD. now, aside from the fact that the industry is already accused of FUD (and worse) pretty much regardless of what they do, i think it's important to spell out one of the key differences between a functional definition and the kind of definitions that sean sees in use. definitions that include contextual evaluation are judgements; they engender choice and leave room for agendas. a functional definition involves no judgement - it is purely descriptive of the functional capabilities of what is being classified. you can no more be blamed for saying software that spies is spyware than you can for saying water is wet or the sky is blue. there's no silver bullet to make accusations go away, but if you take judgement out of the equation it should render those accusations baseless.
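to make that distinction a little more concrete, here's a toy sketch (the capability names and criteria below are my own invention for illustration, not anyone's actual taxonomy) of how a functional definition maps capabilities straight to a classification, while a contextual scheme lets extra criteria carve out exceptions:

```python
# toy illustration of functional vs. contextual classification.
# the capability and context names here are invented for the example.

def functional_classification(capabilities):
    """purely descriptive: if it spies, it's spyware, full stop."""
    if "covert_info_gathering" in capabilities:
        return "spyware"
    return "not spyware"

def contextual_classification(capabilities, context):
    """same capabilities, but contextual criteria create exceptions."""
    if "covert_info_gathering" in capabilities:
        if not context.get("self_installs", False):
            # judgement call: the victim 'consented' to the install
            return "riskware"
        return "spyware"
    return "not spyware"

caps = {"covert_info_gathering"}
ctx = {"self_installs": False}  # e.g. installed by someone with physical access

print(functional_classification(caps))       # -> spyware
print(contextual_classification(caps, ctx))  # -> riskware
```

the point isn't the code itself; it's that the first function leaves no room for judgement while the second one bakes it right in.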

so why is all of this important? because it appears that we've somehow stumbled upon a way in which malware can be classified as "riskware" instead of malware. nobody hears about the riskware classification, nobody cares. they hear "No malware for iPhones" and they shut the rest out because that's all they needed to know (or at least according to traditional notions of malware that should have been all they needed to know). classifying malware as something other than malware seems to be what's enabling people to make the "No malware for iPhones" claim, like some kind of terminological shell game. "No malware for iPhones" makes people think the devices are safe and worry free, but there are risks, and not just for those who jailbreak. "No malware for iPhones" is creating a false sense of security and with the revelations that have been made about apple's abject failure to lock down a particular type of personal information and the near ubiquitous exploitation of that failure by app developers, it seems like the stuff of snake-oil.

i tend to think that when people face risks they want to know about them rather than be told there's nothing to worry about, and i tend to think that when those risks come in the form of software that acts against the user's interests, informing the user is the AV industry's job. some people don't want that to happen; they want their own interests to take precedence. if the AV industry allows that to happen through inaction (or worse, facilitates it) then they don't deserve the reputation they have for protecting the user. the industry may not be able to put AV software on iphones yet, but they can certainly do a better job of raising awareness of the risks than going around telling people there's "No malware for iPhones". maybe when public awareness is raised apple will change their ways.
image from secmeme.com