Friday, December 31, 2010

expectations for 2011 and beyond

first, this is not a prediction or forecast post. this is only a tribute.

no, seriously, i hate those posts; they're annoying, and i can't imagine being so full of myself as to actually try to prognosticate on what the future might bring.

that doesn't mean i don't have certain expectations for the future, however (though i can't really pin down time frames like those fortune-telling bloggers can).

as far as attackers go i expect that i'm going to disappoint you by not saying what i expect to be the next big thing. long time readers know i can be sensitive about giving the bad guys ideas and i certainly don't want to direct them towards new and annoying avenues of attack.

of course even if i did give them ideas, i'd still expect to mostly see more of what we've already seen - especially more of the things we started seeing this year. attackers seem to change in response to 5 basic influencers:
  1. changes in user behaviour: this can be either changes that are meant to thwart attackers (which happen at a truly glacial pace) or adoption of technologies (like twitter, for example) that provide attackers with new opportunities.
  2. new efforts by the security industry or authorities to thwart attackers: literally anything that disrupts the status quo for attackers fits in here. reputation systems that treat new unknown things as suspicious would be one example. new cooperative efforts to take down malware gangs would be another.
  3. changes to the computing platform itself: this is pretty strongly related to user adoption of new technologies, but i felt that, with the way the dominant computing platform seems to be shifting away from personal computers and towards mobile computing devices, the opportunities this would afford attackers deserved to be highlighted.
  4. changes to the connectivity of devices: there's little doubt about how big an impact the broad adoption of the internet had on self-replicating malware like viruses and worms and later on distributed malicious computing like botnets. as connectivity continues to change and frankly increase between all sorts of devices it stands to reason new opportunities will present themselves to attackers.
  5. motivational evolution: first it was fame, then fortune, and now we are starting to see a shift towards power being the motivating force behind attacks. there may even be something that comes after the fame/fortune/power triad but that would be too much like making a prediction.
all of those things happen at a pretty slow gradual pace, however, which is why i'm not expecting huge upheavals in the modus operandi of attackers. #2 is probably the only one with the potential to really be punctuated.

now while i may not be keen on giving the bad guys ideas, giving the good guys ideas i'm not nearly so shy about.

i expect to see facebook do something about all the scams. the scam pages and apps are turning facebook into an untrustworthy environment, and in an untrustworthy environment people are less apt to share, which means they're less apt to get a real benefit out of facebook, which in turn means they're less apt to use it. i can't imagine how facebook could possibly afford to just sit back and let that happen so i expect them to take some kind of action - i have no clue if it will be effective, however.

now that sandboxing and whitelisting are catching on (and in fact one well known company seems to have implemented all 3 of my preventative paradigms; oh heck, let's not be coy, kudos to kaspersky internet security - i'm not a customer but at least somebody seems to have either been listening to me or thinking along the same lines) i expect that people will gradually start adopting these technologies in larger numbers (the sandboxes will probably have an advantage since they're getting embedded inside client apps) and maybe even start to realize that these technologies are also limited, just like blacklists are. and THEN, maybe i'll have reason to start talking more about strategies for when prevention fails. we can only hope.
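for the curious, here's a rough sketch (python, entirely my own illustration - not a description of any vendor's actual design) of how those 3 preventative paradigms can compose:

def decide(file_hash, blacklist, whitelist):
    # blacklisting: block the known bad
    if file_hash in blacklist:
        return "block"
    # whitelisting: allow the known good
    if file_hash in whitelist:
        return "run"
    # sandboxing: contain the unknown middle ground
    return "sandbox"

each branch inherits the limits of its list: the blacklist misses new malware, the whitelist lags behind new legitimate software, and sandboxes can be escaped - which is exactly why prevention eventually fails and the rest of the strategy matters.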

speaking of hope, now that at least one vendor has covered the 3 preventative paradigms in some fashion, would it be too much to hope that vendors start looking at the other parts of a proper defensive strategy? prevention is only the first part of the PDR (prevent, detect, recover) triad (which itself seems to me to be incomplete).

back to expectations, i expect to continue to see more examples of authority being exercised - both in official and unofficial capacities - in order to thwart and even arrest attackers. i hope (oh, am i diverging again?) to see greater appreciation for the fact that legislation on its own has little value. rules mean little if they aren't enforced, and enforcement requires detection of violations, attribution, and often (where official authorities are concerned at least) cross-jurisdictional cooperation. i expect at least someone will be highlighting the importance these things played in whatever successes we have, and hopefully (there i go again) more attention will be paid to them.

i expect to see some more individual or community-based assistance given to those who exercise authority, probably in the form of detection and/or attribution, much like brian krebs has famously done on more than one occasion.

i also expect, unfortunately, to see people continuing to whine about how AV software isn't effective at anything anymore. i expect i will continue to make jokes about driving screws with hammers in response.

i expect to see the heterogeneous nature of the threat landscape continue to be underestimated by such verbiage as "today's threats" and "yesterday's threats" (as if yesterday's threats weren't threats anymore).

i expect to hear more about stuxnet. maybe even something that doesn't stretch the limits of credulity (a worm, spreading stealthily for over a year, only managed to hit its target after its notoriety reached its peak???).

i expect i'm going to be holding more people's feet to the fire over marketing bullshit and snake oil peddling.

finally, because these aren't predictions, i expect at least some of these expectations will not be met - at least not in the short term of the upcoming year.

Friday, December 24, 2010

getting the wrong message across

it's that time of year again, jack frost nipping at your nose and chestnuts roasting on an open fire. and while we have that fire handy, let's hold some feet to it, shall we?

see there was a post about our favourite type of malware (the virus) published on the panda security support blog by javier guerrero díaz that seems to have a number of issues that need addressing. let's jump right in.

to start with there's the issue of terminology misuse:
In fact, we still use today the term “virus” to refer to any type of malware in general, when reality shows that, except for the occasional surge, the number of viruses in circulation is much lower than that of Trojans, for example.
the public has already started to pick up the use of the term malware as an umbrella term, replacing its previous misuse of the term virus. while javier did hint at the inaccuracy of calling all malware viruses, it would have been better not to suggest that "we" (meaning the folks at panda, including himself) still misuse terminology that way. it makes it seem ok to be sloppy with the terms (something which ultimately leads to confusion amongst those who don't know better). i would hope that technically oriented folks would be more precise in their word choice.

next was some over-generalization about worms:
Computer viruses differ from other malware specimens like Trojans or worms in that the latter do not need a host to spread.
not all worms are free from the requirement of a host. win32/ska (also known as the happy99 worm) for example must infect the wsock32.dll in order to send itself over email.

there was also some over-generalization about the complexity of viruses:
Also, this characteristic makes them more complex to develop as a computer virus must know the internal structure of the file it tries to infect in order to be able to install on it.
not all viruses need to know the internal structure of the file they're infecting. overwriting infectors (which destroy the original file rather than trying to preserve it) and companion viruses (which don't actually alter the original file at all) have no such need, nor i think do macro viruses.

on top of complexity, there was also some over-generalization about the scope of virus infection:
Finally, given that viruses affect all executable files on the system...
not all viruses affect all executable files on the system. some (perhaps many) are much more selective. lehigh, for example, only infected command.com. quite a few affect files that most people would not consider executable (macro viruses for example go after documents instead of executables).

i understand that the post was intended for those less familiar with the subject of viruses and malware, but the problem with oversimplification is that there's no agreed upon degree to which things should be simplified. the consequence of this is that everyone presents different 'facts', and that confuses the people you're trying to explain things to. i genuinely believe it's possible to explain things to people in such a way that they can understand you without sacrificing technical accuracy. it takes effort, and i'm certainly not going to suggest that i succeed in reaching this goal in all circumstances, but at least i don't give up trying. if we accept the sacrifice then we have to accept that people will never really understand what we're talking about, because we don't give them the power to do so.

finally there is the market-speak that makes me cringe every time i see it:
Any Panda Security solution will keep your computer free from viruses and other malware.
panda's *tools* (if it's really a solution, what problem does it solve?) will not keep users' systems virus free. they may keep them mostly virus/malware free, but there will always be exceptions capable of slipping through.

i've long despised the use of the term "solution" to describe things that are better presented as tools. it's a trick used by marketing to make people believe they're getting the impossible dream - perfect protection. to see these words written by someone in R&D makes me think somebody's been drinking the marketing koolaid.

worse than that, however, is the reference to keeping systems virus/malware free, without qualification or caveat. this is one of the hallmarks of snake-oil in the anti-malware industry; and guess what, when i went searching through my archives looking for examples of this i found one - involving panda! is there something in the water? is it a language thing? do i have to go looking through my archives for the intersection of panda and snake-oil to see if there's a pattern emerging?

Thursday, December 23, 2010

short thought on sandboxing

jeremiah grossman recently penned a guest post for zdnet extolling the virtues of sandboxing. i've made no secret about the fact that i'm also a fan of sandboxing (though i'm not entirely on board with jeremiah's depiction of it with regard to restricting things - that verges too close to behaviour blocking for my liking), but the sandboxing jeremiah was referring to was the kind that is built into applications as a feature.

not too long ago i posted about sandboxes being added to all sorts of apps and wondered (well, suggested) whether such sandbox sprawl might not be the best way to go about things. jeremiah's observation that adding sandboxes to apps changes the game from a one exploit show to a two exploit show made me realize another reason why relying on the application's own sandbox is less than ideal - the attacker knows exactly which sandbox they have to escape from.

by contrast, with a separate stand alone sandbox, an attacker wouldn't necessarily know which sandbox is involved and would then need to develop escape exploits for multiple sandboxes and try the shotgun approach, firing them all at once and hoping for the best.

i do believe i'll be sticking with the stand alone sandbox. it seems to have the tactical advantage.

Tuesday, December 21, 2010

who knows what the future may bring?

who knows what the future may bring? well, lots of people seem to think they do, and bruce schneier even goes so far as to predict what security will look like 10 years from now. much like a long term weather forecast, his prediction is almost certainly wrong - at least i hope it is, because the picture he paints is distinctly dystopian.

no, that's not just an interpretation - a future where we the users are viewed as parasites living off the life-blood of corporations is not a happy shiny place to live. i can certainly see where he's coming from, though, as the beginnings of that are already visible with such schools of thought as the one that refers to users as product (i.e. we aren't facebook's customers, we're their product). we are being increasingly objectified and devalued by corporate interests. the entertainment industry (and let's not forget their associated lobby, as the group is now as much a political force as it is a corporate one) is certainly leading the anti-consumer charge in the quest to justify their sense of corporate entitlement - but unlike bruce (who is himself part of the corporate machine) i have faith that society will eventually tip the scales back in our favour.

we've already seen a time when businesses had all the power and the little guy was at their mercy. it happened during the industrial revolution. we fought back. we won. we outnumber them and they can't exist without us (while human history proves we can exist without them). to call us, rather than blood-sucking corporations, the parasites is to ignore nature in favour of business. that kind of backwards world view was not then and is not now a natural one and nature is something you cannot beat.

but beyond my faith in humanity, i also think schneier is wrong because he's misunderstanding the signs he's reading. for example, referring to iphones as special purpose computers instead of general purpose ones and citing them as evidence of the demise of the general purpose computer demonstrates that bruce hasn't the foggiest notion of what the distinction between a special purpose and general purpose computer really is. what we may well be witnessing is the end of the personal computer in favour of the mobile computing device, but that is an entirely different matter with entirely different repercussions. for one thing, a world without general purpose computers is a world without the world wide web. it is a world without iphone apps, a world without game consoles, a world without software. the iphone may exist in apple's walled garden, but i can (and do) get the same limitations on my PC using application whitelisting. that doesn't turn my PC into a special purpose computer any more than it does the iphone - it just makes it locked down.

so long as the computer is technically capable of running arbitrary code (which is exactly what happens when you install an iphone app or visit a website that has javascript or flash or any of the other wonderful interactive technologies out there) it is a general purpose computer (it satisfies what fred cohen referred to as the generality of interpretation). a world without general purpose computers is very, very hard to imagine. bruce, thinking the difference between special and general purpose computing can be illustrated as the difference between an iphone and a PC, sees a world that technically isn't much different from our own. but the difference between special and general purpose computers is more accurately illustrated as the difference between a cheap simple hand-held calculator and the fancier more expensive programmable variety. a world where computing devices are as inflexible as cheap hand-held calculators is a strange world indeed. you might think that there must be some sort of middle ground between the two that would allow for something more (and then surely the iphone inhabits that middle ground) but ed felten covered the fallacy of the almost general purpose computer a long time ago.
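if the generality of interpretation still seems abstract, here's a toy sketch (python, entirely my own illustration) of the idea - the "program" is nothing but a string of data until something interprets it:

def interpret(program):
    # a toy stack machine: digits push values, '+' and '*' combine them
    stack = []
    for token in program.split():
        if token.isdigit():
            stack.append(int(token))
        elif token == '+':
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif token == '*':
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
    return stack.pop()

# this string could have arrived over the network, the way javascript does;
# the machine has just run code it had never seen before
print(interpret("2 3 + 4 *"))  # prints 20

any device that can do that - phone, console, or PC - is interpreting data as instructions, no matter how locked down its app store is.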

without the elimination of general purpose computing you cannot eliminate user choice. you cannot eliminate the emergence of technologies that empower us to throw off the yoke of corporate interests. the linuxes and firefoxes of the world will continue into the future, and the more anti-consumer that corporations become, the more consumers will choose those alternatives. we are not and never will be the parasites in the relationship with business. we are not facebook's product, we are their patrons. the advertisers are not their customers, they're more like the hotdog vendors at a stadium; they only make money so long as we show up and buy something, and eventually we will stop showing up at the facebook stadium (just as we stopped showing up to friendster and myspace) and they'll have to chase us to our new favourite spot like the parasites they are.

Thursday, December 16, 2010

the transparency delusion

prompted by lenny zeltser's recent post on usability (which itself may be a response to my previous post) and with an actual usability study on 2 pieces of security software [PDF] (specifically 2 password managers) still fresh in my mind, i've decided to take another look at the issue of usability and, more importantly, transparency.

the usability study i referred to makes an excellent point about security only paying lip-service to usability, and i don't think they mean because the security software they studied had too many clicks to get through each function or because the menus were non-intuitive. the study was a wonderful object lesson in just how badly things can go wrong when transparency is taken too far - and why. in the case of the software in the study, transparency didn't just make the software harder to use, it actually led to compromised security.

the key problem of transparency is that it robs the user of important information necessary for the formulation and maintenance of a mental model of what's going on. as a result, the user invariably forms an incomplete/inaccurate mental model, which then leads them to make the wrong decisions when user decision-making is required (at some point a user decision is always required - you can minimize them but you can never eliminate them); not to mention making it more difficult to realize when and how the security software has failed to operate as expected (they all fail occasionally) and so robbing them of the opportunity to react accordingly.

the usability study in question serves as an adequate example of how transparency can go wrong for password managers, but what about more conventional security software like firewalls or scanners? mr. zeltser used the example of a firewall that alerts the user whenever an application tries to connect to the internet. let's turn that around - what if the firewall was 'intelligent' in the way mr. zeltser is suggesting? what if it never alerted the user because all of the user's applications happened to be in some profile the firewall vendor cooked up to prevent the user from facing so-called unnecessary prompts? and what if one day that firewall fails to load properly (i.e. windows thinks it's loaded but the process isn't really doing anything)? will the user know? will s/he be able to tell something is wrong? it seems pretty obvious that when something that never gave feedback on its operation all of a sudden stops operating, there will be no difference in what the user sees, and so s/he will think nothing is wrong.
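to make that failure mode concrete, here's a hypothetical self-test sketch (python; the address and the 'blocked' rule are made up for illustration) showing that 'the process is loaded' and 'the firewall is filtering' are two different questions:

import socket

# probe a destination the firewall is supposedly configured to block.
# 192.0.2.1 is a TEST-NET address; the blocking rule itself is hypothetical.
BLOCKED_HOST, BLOCKED_PORT = "192.0.2.1", 31337

def firewall_seems_to_be_filtering(timeout=2.0):
    try:
        with socket.create_connection((BLOCKED_HOST, BLOCKED_PORT), timeout=timeout):
            return False  # the connection went through: the rule isn't being enforced
    except OSError:
        return True  # blocked or unreachable - consistent with the rule working

if not firewall_seems_to_be_filtering():
    print("warning: firewall process may be loaded but not actually filtering")

without some feedback channel like this, the silent failure and the silent success look identical to the user.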

how about a scanner? let's consider a transparent scanner that makes decisions for you. you never see any alerts from it because it's supposedly 'intelligent' and doesn't need input from you. what happens then is that you formulate an incorrect model, not just of what the scanner is doing (because you have no feedback from the scanner to tell you what it's doing), but also of how risky the internet is (because your scanner makes the decisions for you). you come to believe the internet is safe; you know it's safe because you have AV, but any specifics beyond that are a mystery to you because you're just an average user. one day you download something and attempt to run it but nothing happens. you try again and again and nothing happens. then you realize that your AV may be interfering with the process, and since you've come to believe the internet is safe instead of risky you decide that your AV must be wrong to interfere, so you disable it and try again. congratulations, your incorrect mental model (fostered by lack of feedback in the name of transparency) has resulted in your computer becoming infected.

we shouldn't beat too hard on the average users here, though. i have to confess that even i have been a victim of the effects of transparency. a few years ago, when i was starting to experiment with application sandboxing for the first time, i tried a product called bufferzone. in fact, i tried it twice, and both times i failed to formulate an accurate mental model of how it was operating. bufferzone tried to meld the sandbox and the host system together so that the only clue you had that something was sandboxed was the red border around it. not just running processes either, files on your desktop could have red added to their icons to indicate they were sandboxed. but since i was new to sandboxing at the time i didn't appreciate what that really meant; and as a result, each time i removed bufferzone i was left with a broken firefox installation and had to reinstall.

when we talk about transparency in government, we're talking about being able to see what's going on. for some reason, however, when we talk about transparency in security software we're talking about not seeing anything at all - we're talking about invisibility. invisible operation can only be supported if we can make the software intelligent enough to make good security decisions on our behalf. lenny zeltser offers the church-turing thesis in support of this possibility, but i'd like to quote turing here:
"It was stated ... that 'a function is effectively calculable if its values can be found by some purely mechanical process.' We may take this literally, understanding that by a purely mechanical process one which could be carried out by a machine. The development ... leads to ... an identification of computability † with effective calculability" († is the footnote above, ibid).
security decisions necessarily involve a user's intent and expectations - neither of which can be found by 'purely mechanical processes', and therefore neither of which can be used by security software making decisions on our behalf. the decisions made by software must necessarily ignore what you and i were trying to do or expected to happen. that kind of decision-making isn't even sophisticated enough to handle pop-up blocking very well (sometimes i'm expecting/wanting to see the pop-up) so i fail to see how we can reasonably expect to abdicate our decision-making responsibilities to an automaton of that calibre.

transparency in security software is not a pro-usability goal, it is an agenda put forward by the lazy who feel our usability needs would be better addressed if we all could be magically transported back to a world where we didn't have to use security software any more. designing things so that you don't actually have to use them doesn't make them more usable, it's just chasing after a pipe-dream. true usability would be better served by facilitating the harmonization of mental models with actual function, and that requires (among other things) visibility not transparency/invisibility.

Friday, November 19, 2010

security: it's almost like it isn't there

one of the ideas i continue to encounter over and over again throughout the years is the idea of liking a particular security product because it seems like it's not even there. it's amazing where one can find that idea being expressed. panda security's own luis corrons said the following about his wife's impression of panda's product:
My wife’s computer also have it, and she loves it, mainly because she doesn’t realize that it is installed :)

liking a security product because it seems like it's not even there strikes me as suggestive that the person in question likes to ignore security or not be bothered by security concerns. for most people this is going to be a recipe for eventual disaster. luis' wife, however, has luis on hand to take care of any malware incidents, so i guess for her it's ok. it's an interesting and probably effective strategy - well played, mrs. corrons, well played.

most people can't marry an anti-malware expert, however, so placing value in a product's ability to shut up is the wrong way to think about things for most of us. don't get me wrong, if a security tool is too 'chatty' then certainly that poses a usability problem, but the quest for complete transparency is a symptom of mismatched expectations.

the predominant expectation among consumers is that if they install the 'right' product or combination of products then they can forget about all those nasty threats because they'll be protected. that's just not true, though, and it's never, ever going to be true.

people will actively defend this line of thinking, however; often they say they just want to do X and don't want security getting in the way. imagine if i said i just wanted to get to mcdonald's and didn't want traffic safety to get in the way - would that sound reasonable? not so much, i imagine. part of the reason for that is that most of us realize that following certain procedures on the road actually does keep us safer than we would otherwise be; but another part is that we also recognize that when others don't follow those procedures they put us and everyone else at risk, not just themselves.

what if i were to tell you the same principles apply in computer security? there are procedures you can follow that allow you to reach your goal in a reasonably secure way (whether that goal is getting work done or enjoying online entertainment or whatever else you use your computer for). not only that, but by not following those procedures, by ignoring security, one actually does put other computer users at risk as well. i'm not just talking about other people who use the same computer, either. back in the days of viruses, when a virus infected a computer that computer joined the set of computers from which that virus could further its spread. essentially it enlarged the platform from which the virus could attack still other systems. today, in the age of the botnet, the same principle applies. when a machine becomes compromised it gets added to the attack platform and assists in attacks on other systems, whether those attacks are simply sending out spam or sending out more malware or performing distributed denial of service attacks. by pretending like security isn't a concern a user puts not only themselves but all other computer users at risk as well.

now, likening secure computing practices to safe driving does not mean i'm trying to argue in favour of requiring users to have a license to operate a computer (though there are those who suggest that). the fact is that day to day life is full of situations where you have to take precautions to increase your safety. just crossing the street calls for the precaution of looking both ways first. even toasters (which i bring up because some people literally think computers should be as simple to use as toasters) have safety precautions you need to follow - unplug the thing before you try to retrieve that piece of toast or bagel that's stuck inside.

i often criticize the security industry for perpetuating the myth of install-and-forget security, but the consumer shouldn't be thought of as blameless. people need to wake up and take responsibility for their own safety and security online, as well as being good online citizens and not putting others at undue risk. seriously, folks, computing without the need for taking active precautions is pure fantasy and it's time you started living in the real world. if you don't take responsibility for keeping yourself safe and secure, you won't be safe and secure - period.

Monday, November 15, 2010

another #sectorca has come and gone

[this is 2 weeks late now but better late than never]

i attended the sector conference for the 3rd year in a row this year and as with previous years i've chosen to write about the experience for the benefit of those who couldn't make it, or maybe have never gone and are looking for a reason to go in the future (by which i mean more of a reason than just because i said so). it just so happens i took quite a few notes this year (pen&paper style - i'm still not taking a computing device to a hacker conference - come on) so i've got plenty of material (perhaps too much) to draw from for this post.

so jumping right into things - before the first talk even started i took a look at the materials that attendees were given. i have to agree with tyler reguly's twitter comment that there were an awful lot of pamphlets, and by extension an awful lot of dead trees. i'll tell you right now where my copies are almost certainly going to wind up: the garbage (or perhaps the blue box). no offense intended, but a big stack of papers i never asked for and have no interest in feels a lot like junk mail (the physical kind, not the electronic variety). i also noticed that the event programme seemed to be about twice as thick as last year's and the entire back half of it seemed to be advertisements. also there was apparently no puzzle/challenge/whatever like there has been in previous years' programmes.

the first keynote was "The Problem with Privacy is Security", given by Tracy Ann Kosa. in spite of my general disagreement with framing things as though there were a conflict between privacy and security - i stick by my previous position that the real conflict is between competing interests/concerns of disparate parties, usually the individual and the organization - i found the exploration of what organizations' interests are (personal information has become a commodity) and how they use security to protect them (at least so far as the integrity, usability, and reliability of the data is concerned) to be quite interesting. there were a number of other interesting things brought up in the walk-talk (for lack of a better term to describe a talk where the speaker wanders off the stage and amongst the room soon after starting) as well. one of the most salient, i think, is that we protect personal information (or try to) after we collect it rather than proactively designing our systems to collect as little personal information as possible in the first place.

i also liked the fact that the topic of privacy is being taken more seriously, at least as a topic for discussion, by the sector conference organizers. if this trend continues then perhaps one day i'll be able to engage the vendors on the expo floor without the need to give away my personal information, including the non-obvious way whereby they get my name and contact info when they scan my RFID badge. just because i'm mildly curious in the here and now doesn't mean i care to hear anything more from a company in the future.

following the keynote i attended the "Malware Freakshow 2010" talk given by Nicholas Percoco and Jibran Ilyas. from my perspective it wasn't quite as good as their previous malware freakshow last year. that's not to say that it wasn't a great talk, mind you. it's just that last year's talk had a concept that was truly new to me and that doesn't happen all that often, so it's kind of hard to top that - for me. for other people i imagine the memory parsing malware might be a new idea, though i suspect most of us have heard of keyloggers and packet sniffers. there were still some great stories, though, which is primarily what this sort of talk is about. it's the most interesting cases they've found in the field while investigating malware incidents. the data center that was literally in a barn is a good lesson to always visit your hosting company's data center in person to see what you're really getting for your money.

next up was Eldon Sprickerhoff's "By The Time You've Finished Reading This Sentence, 'You're Infected'". at the moment i don't know if this has been done in previous years but i came to realize this was a sponsored talk. it was only half as long as the normal technical talks - just 30 minutes. i found myself agreeing with some of the things he said, such as whitelisting being complementary to other techniques. other things not so much. for example i think he underestimates the value of botnets designed for sending spam (that spam can direct people to drive-by download pages that install more malware, effectively growing the size of the attacker's launchpad - that's always valuable). i also don't agree with the notion that whitelists don't have false positives. i suppose it depends on how you define false positive in that context - false acceptance and false rejection probably are more intuitive terms when it comes to whitelists - but whitelists can certainly falsely accept programs (such as malware that has been erroneously whitelisted) and falsely reject programs (such as self-altering programs), as sketched below.
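to illustrate (assuming a hash-based whitelist - Sprickerhoff didn't specify an implementation, so this is purely my own sketch in python):

import hashlib

def fingerprint(file_bytes):
    # identify programs by their sha-256 hash, as hash-based whitelists do
    return hashlib.sha256(file_bytes).hexdigest()

# false acceptance: malware that was erroneously enrolled gets trusted
whitelist = {fingerprint(b"legit app v1"), fingerprint(b"malware enrolled by mistake")}

# false rejection: a self-altering program no longer matches its enrolled hash
updated_app = b"legit app v2 (after self-update)"
print("allowed" if fingerprint(updated_app) in whitelist else "rejected")  # prints "rejected"

the whitelist hasn't malfunctioned in either case - it's doing exactly what it was built to do, which is precisely why its limits deserve the same scrutiny as a blacklist's.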

following that was the lunch keynote given by FBI agent Steve Kelly about "Today's Face of Organized Cyber Crime: A Paradigm for Evaluating Threat". first of all i liked hearing about how an FBI agent prefers when he can bypass airport security - i hope that sort of thing filters up to the people who can actually effect change, but i won't hold my breath. one of the take-aways from this talk was that cybercrime organizations aren't necessarily organized, at least not in the sense one thinks of when one thinks of organized crime. i wonder, though, if using the mob as your model of traditional organized crime doesn't bias your thinking in a certain direction - but more to the point, perhaps cybercrime doesn't really deserve to be called organized. the non-hierarchical network of people, each with their own criminal specialization, could simply be considered an example of criminal collaboration. even in traditional non-organized crime there are plenty of examples of collaboration between criminals with different areas of expertise. for example a thief specializing in high-end merchandise isn't necessarily going to move that merchandise himself after he's stolen it - instead he might well use the services of a fence, and perhaps he even goes back to the same fence over and over again. that kind of continuous criminal collaboration would look very much like the model the speaker presented for an online criminal enterprise but it certainly wouldn't fit what we would normally think of as organized crime.

another point of interest to me was that the FBI didn't want to treat cybercrime as just 'computer facilitated crime', even though as near as i can tell that's exactly what it is. i understand the reasons, though. when you've broken up your law enforcement efforts by type of crime (ie. one department for theft, another for fraud, etc) treating cybercrime as just computer facilitated crime puts the onus on each enforcement department to develop its own expertise in dealing with the computer facilitated variant of the crime it focuses on. computer facilitation, however, changes the nature of how the crime operates and how to combat it so profoundly that spreading any expertise your organization can acquire so thinly across multiple departments just doesn't make practical sense. it's more logical to centralize that expertise into its own department, as the FBI has apparently done.

oh, and if you haven't guessed already, this speaker was much more interesting than the last law enforcement representative i recall giving a keynote.

after lunch (and the keynote that went with it) nature had to take its course, and this is one place where i really wish things could be better. there are something like a thousand attendees at sector, most of them male, and we all get to share 4 urinals and 4 stalls in the washroom. no surprise, then, that the lineup stretched right out the washroom doors. really, though, when there's a line that long for the men's washroom, and when your security rockstar speaker has to beg people to let him jump to the head of the line so he can get to the talk he has to give in 4 minutes - that's when you know the washroom facilities are just not good enough. also, some of you security guys really need more fibre in your diet. there weren't any lineups for the sinks, now that i think of it, but i don't want to think about why.

there were some interesting things going on in the schedule for the talks that followed the lunch keynote on the first day. the talk that i really wanted to see, "Language Theoretic Security: An Introduction" by Len Sassaman and Meredith Patterson, was removed from the schedule entirely. on top of that, my second choice, Deviant Ollam's "Four Types of Locks", swapped times with Chris Hoff's "Cloudinomicon" - which is the talk i ultimately wound up seeing and had actually planned on seeing in its original time-slot. much to everyone's surprise i'm sure, Hoff says you really can have security in the cloud, you just have to be prepared to make profound fundamental changes to pretty much everything or start over from scratch. there were some other really good observations that Hoff shared too, like the fact that companies don't make money by doing the right thing - they don't solve long term problems. while looking over my notes i also see something about survivability being about resist, recognize, and recover - perhaps i was paraphrasing him in my notes but in hindsight that sounds an awful lot like the old PDR triad (prevent, detect, recover) from malware defense.

the next talk i attended was Mohammad Akif's, titled "Microsoft's Cloud Security Strategy". like Eldon Sprickerhoff's talk before lunch, this was one of the new half-hour sponsor talks and it was during this one that it really struck home that these half-hour sponsor talks kinda suck. i don't know if it's because of the time constraint (you can't get that much detail into 30 minutes) or if it's because of the fact that they were sponsored talks, but i really didn't get much out of this talk. then again it might have nothing to do with any of that, but rather more to do with an apparent disconnect between my concept of strategy and what most of the rest of the security community seems to think strategy means.

after that last talk i was feeling pretty unfocused and i knew i wasn't going to get much out of the next talk unless it really, really interested me. that meant i had to give up on Marisa Fagan's SDL talk in favour of Brian Contos' "Dissecting the Modern Threatscape: Malicious Insiders, Industrialized Hacking, and Advanced Persistent Threats". the trick worked, it woke my brain right up. it was a very interesting talk and brian gave a very eloquent explanation of how industrial espionage causes harm (in case anyone thought copying was a victimless crime).

the final talk of the day, after being switched around with Hoff's talk, was Deviant Ollam's "The Four Types of Lock". i've seen one of his talks previously and i enjoyed learning about locks and lock picking, but this talk was even better because instead of just focusing on the weaknesses (and sometimes strengths) of various locks, this one was geared towards aiding the decision making process when it comes to the procurement and usage of locks. i knew beforehand that the talk would be interesting simply because it was obviously presenting a classification system for locks, and this classification system was focused on strength against lock picking techniques - probably the most useful basis for a classification system for both attackers and defenders. Deviant (what a great name) is also a great educator, as someone else at the conference mentioned.

after all the first day talks were done it was time for the reception at Joe Badali's. there i happened to meet David, Max, David (yes, 2 Davids at the table) and Kevin. Max was happy to see a developer (myself) and a business analyst (David #2) taking an interest in security. Kevin had a decidedly different take on things, though he seemed to agree with Max that developers taking an interest in security was a good thing. Kevin had some very ... strongly held opinions about what developers need to start doing and how we need to work differently because the amount of time that others get to test things is so limited. of course the reality is that the time developers get to do their part is pretty limited too, and we're subjected to many of the same (or at least analogous) kinds of interference from business units that i often see IT folks complaining about, even when direct access between the two groups is severed. i would love to have the luxury to be able to do everything the right way the first time, but one of the things i've learned that sticks out most prominently in my mind is that even my job as a developer can involve a significant amount of compromise - and that's a hard lesson to learn when you're an uncompromising s.o.b. like me.

after the reception was the speakers' dinner, which is now open to non-speakers. this was my first time at the speakers' dinner, but it wasn't really all that eventful - other than the look on the waitress' face when i said i'd like my steak blue. i had to settle for rare (or at least what passed for rare there).

the morning keynote of day two was Greg Hoglund's talk titled "Attribution for Intrusion Detection". now i've had some choice words about Greg Hoglund before, but not wanting to make a scene i decided this was one of those people i don't want to introduce myself to (he's not unique in that regard, of course - last year i was careful to keep my distance from rsnake). anyways, he had some interesting things to say during his talk. things like reverse engineering is dead (i'm sure that would go over real well at RECon) and malware analysis needs to be easy ("just show me the strings"). he also talked about making things harder for the bad guys, which is pretty rich considering he runs a site that helps the bad guys, and not only does he know it, he's commented on the fact.

that being said, there was some legitimately interesting stuff too, such as the not particularly controversial idea that an organization is in a better position to know about the targeted threats it faces than security vendors. the consequence of which being the need for organizations to develop their own (rudimentary) malware analysis capabilities. that's where "show me the strings" comes in.

finally, one thing that really struck me came during the Q&A at the end. someone asked him if there was anything that could be done in the hardware architecture to make things more secure and he said that if we could solve the problem of how to prevent data from being turned into code we'd be more secure against malware (or something to that effect). it's not often that security folks approach what cohen describes as the generality of interpretation (the ability to interpret anything as an instruction) - now if he can just realize that the ability to turn data into code is a requirement of general purpose computing he can shorten future answers to such questions to a succinct "no".

following day 2's morning keynote i attended the presentation "What's old is new again: An overview of mobile application security" by Mike Zusman and Zack Lanier. in recent years mobile security has received a bit of attention but most of the attention i see focuses on mobile malware. this talk was more about finding vulnerabilities in legitimate mobile applications, including those required for operating the devices. there was some great information about the various platforms and one of the things i came away with (and perhaps what the presenters were trying to impress upon the audience) was that this new wave of mobile developers appear to have not learned the lessons from the mistakes previous waves of developers (such as web developers or more conventional application developers) have made in the past. it's as if with each new category of computing platform comes a new set of developers rather than a re-allocation of existing developers, and as such they miss out on the benefits of maturation in existing development communities.

following the mobile app security talk and just before the lunch keynote was another half-hour period for sponsor talks. since i had soured on the notion of such talks i took the opportunity to wander around the expo floor. unfortunately, due to my apparent increasing intolerance for marketing bullshit (if such an increase is even possible - i have long said i don't trust marketing people as far as i can throw them and i look forward to one day finding out how far that is) i found i couldn't bring myself to stop at any of the booths manned by suits. that just left the hardware hacking village which had a home-brew 3D printer and some amazing products of that printer, and the lockpick village where i learned how incredibly weak car locks are. do not leave valuables in your car - we're talking about locks that are easily opened with blank keys and may potentially be pick-able with popsicle sticks. doesn't that fill you with confidence?

the second day's lunch keynote was given by none other than Mike Rothman of Security Incite fame and now a part of Securosis. his talk "Involuntary Case Studies in Data Security" was about data breaches, most if not all being well publicized ones. Mike's talk didn't draw me in quite as much as Greg Hoglund's keynote earlier that day. perhaps it was because it had a business rather than technical focus. that said, there were still a number of interesting points discussed during the talk. the two that got my attention the most were a) that there is still no known case of lost media resulting in fraud (even when encryption was absent), and b) companies don't notice their own breaches - they're always found by 3rd parties. the first one is actually a good thing, but it seems like a matter of luck to me; continued loss of media should eventually result in that media falling into the wrong hands and being used for fraud. the second one, where companies don't notice their own breaches, is obviously not good at all but i don't know that there's any way to improve the situation since, as Mike also said, even if you know the past you're still doomed to repeat it because other people who you work with don't understand it and will drag you down with them.

the talk i went to following lunch was Garry Pejski's "Inside the Malware Industry". what i'd missed when reading the description for this talk was that it was going to be a first-hand account from someone who was actually in that industry. that's right, Garry admitted that he was a malware writer! i took more notes at this talk than any other - obviously "know your enemy" had a lot to do with that, but also there were a lot of details about the malware, along with Garry's observations about the security countermeasures, the trustworthiness of employers in that industry, and the legality of the business. unlike Hoglund, Garry was repentant about the role he played in cybercrime, although he isn't convinced that the software was illegal because it had an EULA. my suggestion to him is to consider carefully how completely the EULA disclosed the actions of the malware and also to consider sections 342 and 430 of the Criminal Code of Canada (since that does appear to have been the proper jurisdiction).

after the malware industry talk was another half-hour period for sponsor talks. also being held during these periods were so-called turbo talks - so called because 2 were held back-to-back in the half-hour slot, so each person only got 15 minutes. Nick Owen did a short and sweet presentation with a long name - "Securing Your Network With Open Source Technologies And Standard Protocols: Tips And Tricks". it came along with the premise (supported by a study, apparently) that making better provisions for legitimate remote access results in fewer breaches. i think you can see where that's coming from - people are going to do something in order to work remotely, so it's better if you give them a secure way to do it. following that, and without any sort of scheduled break for setting things up (thus causing the second turbo talk to run late) was Julia Wolf's presentation "OMG-WTF-PDF" which literally could not have been given a better name. i have a new appreciation for PDF, although i think appreciation is the wrong word. the format is jaw-droppingly bad and the fact that acrobat has 15 million lines of code (compared to NT4's 11 million) is amazing to me. it's amazing because i can't imagine how the programmers managed to put the drugs they were obviously taking down long enough to write so much code.

the final talk i attended (which, since the previous one ran late, i missed the beginning of) was Mike Kemp's "Into the Black: Explorations in DPRK". the title probably doesn't give away the fact that the talk was about the cyberwarfare capabilities of North Korea, and even that description doesn't really give away the fact that the talk was a debunking of the notion that North Korea has any cyberwarfare capabilities. Mike Kemp presented a hilarious juxtaposition between the purported North Korean superhackers and the internet black hole they live in. i don't want to give too much away, and i couldn't do it justice if i tried.

this has been quite a long post (and it's taken me quite a while to finish it) but suffice it to say i feel i learned a great deal from sector this year (far more than i've mentioned here) and would recommend it to anyone interested in security. i'm not sure if i'm going to go again next year, however. i've gone 3 years in a row now, and as security still isn't technically my job it kind of feels like self-indulgence to go to these things - and my tolerance for self-indulgence has limits. my employers have been quite generous in affording me this indulgence (and paying for it, no less), but i can't keep taking advantage of that generosity indefinitely.

Monday, October 25, 2010

pity the anti-virus naysayer

pity the anti-virus naysayer, for when one decries the failure of anti-virus one reveals the failure in oneself
i don't think it's necessarily all that interesting to talk about the AV is dead movement anymore - saying anti-virus (or anything else security-wise) is dead is a pretty obvious cry for attention. instead in this post i want to look at the popular notion of "the failure of AV".

when one talks about the failure of anti-virus, what has anti-virus failed to do in the most general sense? failed to stop malware XYZ? failed to protect the endpoint from a specific attack? no, those are all reasonable failures, not really worthy of being harped on if you accept that no preventative measure is perfect. in the most general sense, when one talks about the failure of anti-virus one is talking about the failure of anti-virus to live up to one's own expectations.

but are those expectations reasonable? in all likelihood they aren't. they are expectations born not out of an understanding of AV, but rather out of listening to marketing (stop listening to marketing!). if you truly understood AV then your expectations would be a pretty close match to reality, so incidental failures wouldn't surprise you or be a cause for concern. if you really understand AV then those incidental failures should be anticipated and planned for.

therefore, when one decries the failure of AV, it is because one doesn't actually understand it, one hasn't anticipated the incidental failures and made plans for them. it is a failure of understanding that happens all too often, where one tries to use marketing bullshit as a substitute for actual knowledge but only winds up with mismatched expectations. actual knowledge has no substitute and can often be hard to come by. "the failure of AV" may get you brownie points in populist crowds, but it's too facile a conclusion to be useful in the larger scheme of things.

Wednesday, October 20, 2010

social networking vs. privacy

the privacy issues surrounding social networking sites are nothing new by any stretch of the imagination, but it seems to me that many people have mismatched expectations when it comes to privacy and social networks - and i'm not just talking about the people who are not yet aware of the issues. even those people who are actively criticizing the privacy implications of the technologies and policies in play at social networking sites seem to be experiencing a fundamental disconnect from the reality of social networking.

the fact of the matter is, no social networking site can be both socially useful and promote privacy in a meaningful way at the same time. if we ignore the practical concerns of how to get funding or similar topics that lead us to call social networking users products rather than customers - even an ideal social networking site must necessarily be a privacy failure.

before i explain why, i think it's important to understand what social networking sites are for and by extension what successful ones (including our ideal one) must do in order to be compelling. the core goal of a social networking site is to enrich our social experiences, either by allowing us to have rewarding social experiences with more convenience (like keeping up to date when you've got a spare moment, even if it's in the dead of night) and less expenditure of resources (time, energy, money, or some combination of the three) than we would otherwise be able to have, or by allowing us social experiences that wouldn't otherwise be possible at all (such as reconnecting with long lost friends).

to that end it should come as no surprise that social networking sites have to focus on facilitating the establishment, maintenance, and strengthening of social connections. it should also come as no surprise that social connections flounder in the absence of openness. that is a social network's undoing from a privacy perspective, because openness is incompatible with the guardedness engendered by the strategies we use to protect our privacy.

now there are a couple of specific complaints that i'm sure come to the reader's mind at this point, chief among them being that sites like facebook should still be able to use an opt-in model for information sharing instead of an opt-out one. you have to understand, however, that the opt-in model is essentially equivalent to being guarded-by-default (you could also liken it to default-deny or even whitelisting). no one can dispute that this would be a superior model from a privacy perspective, but as someone who is guarded-by-default in real life i can assure you that it is not a winning social strategy. by going with an opt-in model you put people in the position of having to make conscious decisions about what they need to be open about in order to get the most rewarding experience for themselves (where such calculating behaviour might be familiar only to a select few) as well as figuring out precisely how to go about being open about those things. in other words the opt-in model forces the user into a kind of simulated social awkwardness, which would not be a compelling user experience at all.

you could be thinking right now that even if a 100% opt-in model would scare users away, a more balanced model than 100% opt-out should be possible - and yes, it certainly is. privacy lobbyists (for lack of a better term) have certainly managed to pressure facebook (and i assume others) to change various features to be more privacy-friendly. that being said, without such pressures (representing a broadly held preference to the contrary), social networking sites should be expected to go with the opt-out model and let those who feel they need to protect the information in question actually make the conscious effort to opt out. the reason for this is purely practical (and i don't mean in the making-things-easier-for-lazy-programmers sort of way). there is no single sharing strategy that optimally meets everyone's social needs and their privacy needs as well. that means any attempt at making more balanced sharing defaults amounts to trying to second-guess what's going to work best for users, at the risk of making it more difficult to be open in a way they may have found rewarding. defaulting to opt-out is essentially erring on the side of caution with respect to not compromising the primary goal of an ideal social networking site.

all this being said, when it comes to sharing data with advertising partners or other third party organizations, that has nothing to do with enriching the social experiences of the user. those are entirely business-driven decisions, and while they make sense for the business, they provide no direct benefit to the user and so there is no reason to believe the user would appreciate that sort of openness being facilitated (or rather foisted on them) by default. those sharing practices rightly deserve to be made opt-in rather than opt-out, but i don't expect the business people running the social networking sites to draw this distinction between sharing that facilitates social connection and sharing that facilitates advertising revenue. at least not without a good swift kick in the arse on a regular basis.

(2010/10/21: edited to correct typo spotted by @ChetWisniewski)

Thursday, October 14, 2010

i am a hacker...

... but i am not a crook.

i have previously touched on the fact that the word hacker gets used inappropriately to mean criminal, and i've objected to it on semantic (pedantic?) 'that is not what it originally meant' grounds. that's only one dimension, however.

while i do object on semantic grounds and take that objection seriously in its own right, the fact is that because i self-identify as a hacker (among a variety of other things) i also happen to find the characterization of hackers as criminals rather insulting. not that i expect the people using that characterization to bend over backwards for little old me, of course, but guess what - i'm not the only non-criminal who self-identifies as a hacker.

in fact there are so many of them, especially in the security domain, that a conference that (among other things) fosters the spirit of hacking in children was held for the first time this year (called hackid). i can't help but think that a lot of those parents/infosec professionals would be far less enthusiastic about imparting the spirit of hacking to their little tykes if they accepted a world where the term carries such a pejorative meaning. it's not just me who is insulted by the 'criminal' insinuation, it's all these people and their kids too.

really, there are only two reasons to misuse the term hacker this way: stupidity or laziness. stupidity requires no explanation, but by laziness i mean too lazy to find a better term - and there are better terms, such as criminal, computer criminal, online criminal, or even (gasp!) cybercriminal. those are what the term hacker was being used to imply anyways, so why not cut out the middleman?

well, because apparently the unwashed masses are more familiar with the term hacker ({ahem} whose fault is that, exactly?) and it's too much work (laziness rears its head again) to dispel that misconception and actually educate them properly. and this at a time when we're actually winning the war against the misuse of the term virus as an umbrella term (the media is increasingly and correctly using the term malware as the catch-all for bad software, and as a result malware is becoming the term the public uses as well). would you believe some of the same lazy bums who use that 'too much work' line of reasoning actually fancy themselves educators? i'm sorry, but if you can't be arsed to dispel misconceptions and educate about the social dimension of the security space, why should anyone believe you'll do your due diligence with respect to dispelling misconceptions and educating about the technical dimensions? uh huh, yeah, i thought so - there is no good reason for anyone to believe that.

stupid media uses the term everybody else uses because they don't know any better. lazy media uses the term everybody else uses because it's easier to just go with what the experts say, and lazy experts use the term everybody else uses because it's too much work to change the tide. but the tide changed with virus - no doubt in part because viruses stopped being the primary issue and experts for the most part couldn't bring themselves to call a non-viral piece of malware a virus, so the proper umbrella term for malicious software started to trickle down.

that same trickle-down effect could work for the term hacker too, but only if the lazy bums out there (and you know who you are) actually start taking their supposed roles as educators seriously and start doing their job properly instead of half-assed.

Wednesday, September 29, 2010

whole product testing / whole attack testing: two sides of the same coin

(this has actually been sitting in the drafts pile for a while)

several weeks ago brian krebs asked me for my thoughts about a new NSS Labs test which i was happy to provide. aside from the fairly predictable spike in traffic that resulted from brian's subsequent article, i also found an unexpected treat in my inbox - NSS' rick moy reached out to me so that we could discuss a few things. now, this post isn't intended to bring up anything anyone did or said in private email, but i do want to thank rick because if he hadn't prompted me to engage on this topic further than i had already done with brian i might not have gotten to this point in understanding the duality of whole product testing vs. whole attack testing.

the idea of whole attack testing came to me as i was contemplating what little information i could find freely available about NSS' most recent test of how well anti-malware products prevented drive-by downloads. the name was a play on the term "whole product testing" that has become so popular in anti-malware testing circles, and which NSS themselves are big proponents of (after all, they try to use their own attempts at whole product testing as a differentiating factor to set themselves apart from other testing organizations). i thought it was a natural extension of the line of reasoning that brought us whole product testing, maybe even the logical conclusion of that line of reasoning. after all, if you're only testing against part of a multi-stage attack it seems like you encounter similar biases to those you get when you only test part of a multi-layer product.

but now that i've thought about it some more i realize what that really means: you literally can't have whole product testing without whole attack testing. if you only test one part of a multi-stage attack then you're only testing the parts of a product that are designed to deal with that particular stage of attack. if you're testing exclusively with neutered or otherwise benign exploits, for example, then it doesn't matter if you're testing entire products against those exploits - only the parts of the products designed to deal with exploits will be capable of raising an alert. as a result, the biases you encounter aren't just similar to the ones you encounter in testing individual parts of a product, they're identical - because you will effectively still only be testing individual parts of the product.

in order to get a true measure of how well a product prevents compromise in the face of real attacks it is necessary to test the whole product against real, whole attacks. as difficult, expensive, and painful as that may be, if we really want to produce tests that tell laypeople what they expect tests to tell them, this is what has to be done.
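
(a little toy model in python may help make that identity concrete - the stages and layers below are entirely hypothetical examples, real products and attacks are messier:)

    # toy model: each layer of a product can only react to the attack stage
    # it was designed to watch (the stages and layers are invented examples)
    LAYERS = {"url": "reputation filter",
              "exploit": "exploit shield",
              "payload": "behaviour blocker"}

    def layers_exercised(attack_stages):
        # a test can only exercise the layers whose stage actually occurs in it
        return [LAYERS[stage] for stage in attack_stages if stage in LAYERS]

    print(layers_exercised(["exploit"]))
    # neutered-exploit test: only the exploit shield is really being tested
    print(layers_exercised(["url", "exploit", "payload"]))
    # whole attack: every layer gets its chance - actual whole product testing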

what is whole product testing?

whole product testing is a form of anti-malware testing that aims to measure the effectiveness of entire anti-malware products rather than just testing the known malware scanner or the heuristic engine within the product.

whole product testing came about in answer to the problem that testing individual parts of an anti-malware product in isolation didn't give an accurate view of how well the product as a whole could perform (for example a threat might slip by the known malware scanner but be picked up by a behavioural technique that wouldn't show on a scanner test), and there was no way to combine the results of tests of the various parts to represent the effectiveness of the whole product. only by giving every part of a product the opportunity to stop a threat can we have an idea of whether that threat would have been stopped on an end user's machine.

because of the wide array of passive and active defenses anti-malware products provide, whole product testing requires each malware sample in the test set to be launched and then the system checked for indications of how well or poorly the anti-malware product stopped the malware sample from compromising the system. after this the system has to be returned to a known-clean state (generally by restoring an image of the drive). this is quite a bit more time and labour intensive than simply running a scanner against a directory full of malware, and as a result it often requires the size of the test bed to be more modest due to practical considerations (not enough hardware, manpower, etc). while a smaller test bed may potentially raise questions about statistical significance (depending on how small it is), the ability of the results to map more directly to what an end user can expect makes this type of testing much closer to ideal than earlier testing of a product's individual parts.
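
(the general shape of such a test harness looks something like the python sketch below - the helper functions are stand-ins for real lab infrastructure like VM snapshot restores, sample execution, and forensic system inspection:)

    # a simplified whole product test loop (a sketch only - the helpers
    # are hypothetical placeholders for actual lab infrastructure)

    def restore_clean_image():
        print("restoring known-clean snapshot (product installed, up to date)")

    def launch_sample(sample):
        print("launching " + sample + " so EVERY layer gets a chance to react")

    def assess_compromise(sample):
        # check the whole system, not just scanner logs: dropped files, new
        # processes, registry autostarts, outbound traffic, etc.
        return "blocked"  # placeholder verdict

    test_set = ["sample_001.exe", "sample_002.exe"]  # hypothetical samples
    results = {}
    for sample in test_set:
        restore_clean_image()   # this per-sample reset is the expensive part
        launch_sample(sample)
        results[sample] = assess_compromise(sample)
    print(results)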


what is anti-malware testing?

anti-malware testing is a means by which a qualified organization measures various properties of anti-malware software, such as speed, memory footprint, malware prevention effectiveness, or even malware removal effectiveness.

in theory, anti-malware testing should be straightforward. we want the test results to tell us what we would experience if we used the anti-malware ourselves in the real world, so that we can make better decisions about what product to use; it stands to reason, then, that a test should simulate real world usage. in practice such simulation can actually be very difficult, and a variety of shortcuts have been introduced over the years to make anti-malware testing more practical.

unfortunately, as we have found out, even small deviations from the real world can often have a big impact on the actual meaning of the test results such that they can't actually be interpreted the way we intended. one of the challenges that the community faces is understanding how these shortcuts affect the meaning of the results, determining if the new meaning is still useful in some way, and developing new testing methodologies that have fewer and/or less impactful shortcuts so that the tests can come ever closer to approaching the ideal state where their results will actually have the meaning we intend for them to have.


Monday, September 27, 2010

stuxnet revisited

(some of you may have seen a very early draft of this in your RSS feeds - a slip of the finger caused a publishing mishap)

even though it wasn't that long ago that i posted a number of scathing criticisms of the stuxnet worm, new revelations about the worm and also some of the discussion in this computer world article that asks "is stuxnet the best malware ever?" (and many others i've seen since starting this post) have prompted me to re-examine my opinion on stuxnet.

there have actually been a number of really good technical analyses of stuxnet, but things seem to fall down when people try to turn their technical analysis into a tactical analysis.

what does stuxnet have?
  1. 4 0-day exploits
  2. additional non-0-day exploits
  3. the ability to determine if it's running on a plant floor vs. a corporate network so that it can avoid using some of those exploits in environments where the 'noise' they produce would be noticed by IPS/IDS
  4. a windows stealthkit (also erroneously known as a rootkit)
  5. a SCADA PLC stealthkit
  6. digital signatures on 2 versions of its code using private keys stolen from 2 different sources
  7. a centralized command and control communications channel (now controlled by Symantec)
  8. a P2P update communications channel
  9. the ability to alter the way the SCADA system controls a very particular (and as yet unknown) process
  10. the ability to spread itself over the internal network of an organization via network shares and vulnerabilities
  11. the ability to spread itself beyond the confines of a particular organization's network using removable media and the 0-day exploit for the LNK vulnerability (and an unorthodox implementation of autorun before the LNK exploit was added)
  12. at least 3 distinct versions (the one prior to the inclusion of the LNK 0-day, the first version containing the LNK 0-day compiled in march, and a second containing the LNK 0-day compiled in june and using a different digital signature)
  13. an infection counter to (in theory) limit the spread of the worm
4 0-days is impressive, no doubt about that. the SCADA specific payload obviously required an engineer with knowledge and experience and (in all likelihood) access to a SCADA system that matched the intended target. many of stuxnet's properties are impressive, but some of them have additional significance.

the stealthkits are intended to provide stealth (obviously) so as to keep the window of opportunity for the attack to succeed open longer than it might otherwise be. this implies a persistent presence will be required for the attack to succeed.

the digital signatures on the code also provided some stealth from the heuristic engines of anti-malware products.

the IPS/IDS avoidance also qualifies as a kind of stealth.

the C&C channel (aside from making stuxnet a botnet on top of everything else) implies that the attack is not 100% autonomous. certain actions only happen when stuxnet receives commands to do them. as such, stuxnet will be waiting when it isn't being given commands and this will require a persistent presence.

the update functionality also implies an intent to maintain a persistent presence; and not just persistent over a short term, persistence over a long enough time frame that some part of the attack code becomes no longer fit for use and needs to be updated.

the release of the version with the second digital signature extended the useful lifetime of the signed binaries by several years, as the first was set to expire in june of this year.

as you can see, a considerable number of stuxnet's properties point towards a protracted operation. the payload shows a number of indications that a persistent presence on affected systems would be required for the intended attack scenario to play out as planned.

at the same time, however, the delivery mechanism thoroughly compromises that objective by being noisy, and ultimately it is the reason the worm and its significance were uncovered. with each new system a virus tries to infect, the probability that the infection will fail catastrophically (and thereby draw attention to the virus' presence) goes up. while there was a mechanism in place meant to limit the self-replication (and therefore the probability of that catastrophic failure occurring), a simplistic infection counter was obviously not enough to keep the worm from spreading far and wide and drawing attention to itself. you don't take this risk unless you can't be more targeted (or unless you don't know what you're doing).
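
(some back-of-the-envelope arithmetic shows why a simple counter can't do the job. the quota of 3 below is an assumption for illustration - the exact number doesn't change the conclusion:)

    # why a per-host infection counter doesn't stop a worm's spread: even if
    # each infected machine only ever infects LIMIT new machines, growth is
    # still exponential in the number of generations
    LIMIT = 3  # assumed quota, purely for illustration

    total = 1  # patient zero
    new = 1
    for generation in range(1, 7):
        new = new * LIMIT          # each newly infected host gets its own quota
        total = total + new
        print("generation", generation, "-", total, "infected")
    # by generation 6 a single patient zero has become over a thousand infections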

once the worm was found out, the fact that it was a technical marvel worked entirely against it. if stuxnet had simply been just another dumb autorun worm it probably would have remained in obscurity (and indeed the earlier version that was an autorun worm did remain in obscurity despite having been discovered previously), but because of the novelty of its execution technique (the LNK 0-day) additional attention was paid to it, the SCADA-targeting payload was discovered, and everything snowballed from there.

i have stated previously that i consider stuxnet a failure, and that much hasn't changed. the fact that it's a technical marvel doesn't mean it can't also be a tactical failure. the history of viruses is littered with examples of technically sophisticated viruses that never even made it into the wild while buggy, braindead viruses somehow proliferated.

stuxnet at least made it into the wild, but the conflicting objectives between its payload and its distribution mechanism (one was targeted, the other was not; one was silent and patient, the other was more like a smash and grab) mean that if the people behind it haven't already accomplished their objective, it's unlikely they will now. the C&C channel is already lost to them, and the P2P channel will almost certainly be monitored for new versions of the worm with new instructions and/or a new C&C channel. the entire population of infected machines they built up is now a complete write-off because they didn't know how to maintain harmony between the distribution mechanism and the payload.

furthermore, since they were still releasing new versions as late as june 14, it stands to reason they had not yet achieved their objective at that point.

to date, siemens has only found 14-15 infected SCADA systems, and as i understand it none have had their PLCs altered. there really doesn't seem to be much evidence to suggest stuxnet's creators achieved their goals.

while there's a lot of speculation floating around that i don't agree with, i am willing to speculate that the people behind stuxnet are relatively new to the world of doing bad things with computers - i don't mean vulnerability research, by the way, since they clearly have some talented people in that arena - i'm talking about being new at mounting actual attacks. cybercriminals have adopted a proven strategy of 'keep it simple, stupid' (KISS) and it has served them well. the stuxnet creators, on the other hand, tried too hard, made their attack too complex, and generally didn't show the same kind of polish or experience at launching a successful targeted attack that cybercriminals have shown.

i think being relatively new at this is actually compatible with the possibility of them being state-sponsored. while we often like to attribute supernatural powers to government efforts in the technical arena (ex. the NSA's cryptographic capabilities are often believed to be light years beyond what the private sector can do), the US government has made it abundantly clear that sometimes (especially when it comes to attacks in cyberspace) that faith is not well founded. i don't expect nation states to have the experience that cybercriminals do because they aren't out there mounting attacks as frequently as cybercriminals are (if they were, the 'pain' suffered as a result of all those attacks would have triggered a war by now).

after being reminded of the US military's incompetence in 2008, i'm now more willing to believe that this failure was the work of a nation state. however, i'm still not completely ruling out other possibilities. while the industrial process altering payload does indeed change this from an issue of espionage to an issue of sabotage, that doesn't (in my mind) rule out rivalry between businesses. certainly legitimate businesses are not generally known for attempting to sabotage their competitors or others, but less legitimate businesses (say those with ties to traditional organized crime) certainly are.

the one piece of speculation i absolutely cannot abide, however, is the one about the target of the stuxnet worm. the idea that a nation was the target is ridiculous - do you know how easy it would have been to limit the worm to spreading only on computers running inside that nation? surely the geniuses behind it could have made the distribution mechanism much more targeted than it was had a nation been the target (or had a nation contained the entire target population). the more recent theory that it was targeted at a particular iranian nuclear facility means that whoever was behind it was willing to risk causing an environmental disaster, and you'd tend to think those who'd have something to lose by being nearby would know better than to try such a thing. one of the most ridiculous ideas is that stuxnet was targeted at a single system, unique in all the world, and that it's got a fingerprint of that system that it's looking for. in order to generate such a fingerprint in the first place the attackers would need unprecedented access to such a target; the kind of access that would completely obviate the need for an untargeted distribution mechanism.

but people persist in thinking that iran was in some way the target, and i think i know why. it's because people are thinking of stuxnet like it's some sort of military-grade cyber missile. they see a pocket of high infection density and think they're looking at the electronic fallout of a cyber bomb. under these conventional kinetic warfare sorts of analogies you expect the target to be somewhere around the epicenter. but, and i cannot stress this enough, this is the WRONG mindset to use when you're talking about a virus - and we are talking about a virus! if you're thinking about this in those sorts of kinetic warfare terms then your head is in entirely the wrong place (interpret that as you will). computer viruses behave like a disease - stop thinking about ground zero and start thinking about patient zero - stop thinking of blast radius and start thinking about epidemiology. think about how difficult it is to control or even predict the movements of a biological vector in a biological attack. without an agent friendly to the cause doing the dispersal you can't know where it's going to go first or most often - and even if you do have a friendly agent doing the dispersal, you can't know where the disease will spread to afterward or where it will thrive best.
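
(if you want to see that intuition in action, here's a crude python simulation - every number in it is invented purely for illustration:)

    # seed patient zero in region A, but give region B a reproductive
    # advantage (weaker defenses, more usb sharing, whatever); all the
    # numbers here are invented purely for illustration
    import random
    random.seed(42)

    spread_chance = {"A": 0.05, "B": 0.30}  # per-contact infection probability
    population = {"A": 1000, "B": 1000}
    infected = {"A": 1, "B": 0}             # patient zero lives in region A

    for step in range(25):
        for region in ("A", "B"):
            contacts = infected[region] * 3     # crude mixing within a region
            for _ in range(contacts):
                if random.random() < spread_chance[region]:
                    infected[region] = min(infected[region] + 1,
                                           population[region])
        if infected["B"] < population["B"]:
            infected["B"] += 1              # a trickle of cross-region travel

    print(infected)  # region B shows the higher infection density anyway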

you cannot tell who or what the target was by looking at where the most infected machines were. that only tells you where the worm enjoyed the most reproductive advantage - and most importantly (as kaspersky's alexander gostev rightly points out) the infection populations change over time. the only way you're going to find out what the target was forensically is by finding the PLC(s) it was designed to alter. then, and only then, will you actually know what the target is - and without knowing who/what the actual target was, you cannot make reasonable guesses about the specific motivations behind the attack, and by extension you cannot infer attribution based on who had the most to gain.

but maybe these questions should be reversed. instead of trying to figure out the likely culprit based on who the target was, perhaps it would be better to track down the culprit and ask them who the target is. the two private keys stolen from two different companies in the same area in taiwan seem unlikely to be a coincidence. someone there is involved - even if their sole involvement was selling keys they stole to a 3rd party, that gets you a lot closer to the people responsible than searching for a PLC needle in a haystack.

Monday, September 20, 2010

anatomy of a snake oil campaign

a certain piece of snake oil was brought to my attention over the weekend and i thought it might prove useful to highlight some of the questionable things i saw.


(screenshot of the zonealarm promotional page in question - originally found here)
  1. trusted by 100% of fortune 100 companies? what does that even mean? do you think all 100 of those companies use zone alarm? really? not a single one uses norton? that would be pretty amazing considering norton is in the #1 position in this industry. obviously trusting and using must be two very different things. it seems to me like this is just a clever way to put 100% on the page without claiming something false like 100% detection or 100% protection. instead they say something completely meaningless but still get the benefits of having 100% prominently displayed on the page. how many people do you think will come away from this page and think that 100% was actually in reference to something meaningful like detection even though such a claim, had they actually made it, would have been false? yeah. very sneaky.
  2. a financial trojan virus? really? financial trojan, sure. virus? maybe, i don't know for sure that it doesn't self-replicate. but to put those two terms together like that seems like the work of someone who didn't know what they were talking about. a common ploy is to throw out technical sounding jargon in order to add the air of credibility - but when you don't know what you're talking about you have a tendency to combine terms inappropriately. there's a fine line to walk when it comes to jargon. obviously there are times when a vendor needs to use these in order to convey particular information. but what also happens sometimes is that vendors will use jargon unnecessarily to confuse the audience and make themselves look smarter and more important. some vendors are good at this - checkpoint? not so much. at least not here.
  3. comparing products on the basis of virustotal results. some time ago i wrote about using virustotal for comparing anti-malware products. i wrote that those of us who know better will laugh at you when you do it. i'm laughing at you right now, checkpoint, and i don't think i'm the only one. the rule of thumb is this: virustotal is for testing malware, not anti-malware. vendors who want to be taken seriously should try to remember that. consumers should probably try keeping it in mind too. virustotal is a great tool, but it's a quick and dirty tool, there's a lot of functionality in modern anti-malware software that it doesn't (and probably can't) leverage.
  4. only zonealarm can protect you? the whole page hypes up the threat of this one variant of zeus. in one breath they tell you that zeus changes often (it does this by way of many, many variants) and then make a big deal out of protecting against this one variant. imagine advertising a bulletproof vest on the basis that it's the only thing that can protect against bullets with a particular striation pattern. all things considered, do you really think you're likely to be fired at with those particular bullets? i didn't think so.
  5. complete protection against new threats is almost textbook snake oil. nothing can protect the user completely, much less protect them completely against new threats. why are vendors still trying to pull crap like this? how have practitioners of this kind of snake oil salesmanship not gone under yet?
<sarcasm>really folks, just get zonealarm, it'll cure what ails you. or your computer. or your dog. </sarcasm>

folks, you need to learn to be more discerning consumers so that the pool of money that supports this sort of intellectual dishonesty dries up. vote with your wallet - steer clear of manipulative marketing and the companies that engage in it.

Wednesday, September 15, 2010

are companion infectors contrived?

while reading a recent threatpost article i was rather taken aback by the following quote:

However, one security researcher said that the vectors for using EXE files in this kind of attack are unlikely to be seen in the real world. HD Moore, CSO of Rapid7 and founder of the Metasploit Project, said that he'd seen some cases of other file types being vulnerable to this kind of attack, but didn't think widespread exploitation was likely.
"Most of the EXE cases are contrived vectors, not realistic for exploits," he said.
i suppose path precedence companion viruses must be contrived then. but if that's so then mr. moore must be using a meaning of contrived that i'm not familiar with, because not only did they work reasonably well in their day, but they still operate quite well even now.

to be clear, and to avoid hyping the issue, i should point out that they aren't much of an issue for users of windows explorer. the way explorer works and the way it's used doesn't necessarily lend itself to this attack. but if you use the command line or happen to write and/or use scripts then planting *.EXE binaries can most definitely still pose a security problem - and there are still users in that group, many of them IT or infosec professionals. i would hope that such people would have an awareness of such a threat, but i've seen increasing evidence that people (even security folks) just don't get viruses in general (even after over 1/4 of a century) much less an obscure, ancient kind like this.
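
(a minimal sketch in python of why planting works - simplified, of course; real command resolution on windows involves PATHEXT and other wrinkles, and the file names here are invented:)

    # toy filesystem: an attacker has planted backup.com in the current
    # directory alongside the user's legitimate c:\tools\backup.exe
    fake_fs = {".\\backup.com", "c:\\tools\\backup.exe"}

    def resolve(command, search_path, extensions=(".com", ".exe", ".bat")):
        # shells classically walk the search path in order and try each
        # extension in order, running the FIRST match they find
        for directory in search_path:
            for ext in extensions:
                candidate = directory + "\\" + command + ext
                if candidate in fake_fs:
                    return candidate
        return None

    # typing "backup" runs the planted companion, not the real tool
    print(resolve("backup", [".", "c:\\tools"]))  # -> .\backup.com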

Tuesday, September 14, 2010

buckshot yankee: cowboys and indians in cyberspace

if you've been following security news in the past month or so you've probably heard about the DoD revealing that an autorun worm managed to get onto classified systems. maybe you were even curious when they attributed it to an unspecified (and possibly unknown) foreign intelligence agency. maybe you were even surprised to learn that this was the genesis of the US cyber command.
 
my first reaction to hearing these things was something along the lines of:
holy crap, the origins of the US cyber command are a farce!
now, don't get me wrong, i think the idea of the US cyber command is probably a good one. but the idea that it was formed because of a run of the mill autorun worm, a profound skills/knowledge deficit (disabling autorun was a security best practice even then and there was a similar incident with NASA earlier that same year so what were their infosec people thinking?), and a hammer&nail mentality (when all you have is a hammer everything looks like a nail, when all you have is military training everything looks like the work of enemy agents) is actually kind of scary.

not only is it scary because of how badly they can blow banal malware incidents out of proportion, but also because in all the investigation and subsequent reorganization to form the cyber command they never seem to have overcome that skills deficit enough to realize their error and get that realization to the top level decision makers. so we're going to have a military body enforcing its will in cyberspace, developing new and interesting ways of exercising its authority, but still unable to distinguish an attack with direct human intent from the actions of an autonomous software agent.

they don't call them viruses just because it sounds cool, folks. these things spread by themselves like a disease. they don't need to be aimed, they don't need someone intentionally helping them along by setting up websites or sending commands or any of that junk. heck, earlier that same year another autorun worm managed to spread to computers on the international space station. you think viruses in space was an intended goal? if it were then the worm wouldn't have had code to steal online gaming passwords. it boggles my mind how after more than a quarter century people (even security folks) don't get that computer viruses spread by themselves like a disease without the need for intentional assistance. that's why it's called self-replication.
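
(for those who've forgotten how little 'aiming' there is in an autorun worm: the entire mechanism amounted to a tiny text file at the root of every writable drive, looking something like the defanged example below, where payload.exe is a stand-in name for the worm's dropped executable:)

    [autorun]
    open=payload.exe
    action=Open folder to view files
    icon=payload.exe,0

on windows versions of that era the open= line could get executed automatically on insertion (or dressed up as the default choice in the autoplay prompt, with action= and icon= making it look like a harmless folder view). no human intent required beyond plugging in the drive.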

as such, without clear evidence of intent (and i've yet to hear about any such evidence nearly a month later), occam's razor dictates that we have to assume it wasn't an intentional act by a foreign intelligence agent ($deity help us if the military of the most powerful country in the world sees fit to ignore occam's razor). the supposed foreign agent is most likely imaginary and the military has spent the past 2 years engaging in make-believe. all that time, effort, and money that went into buckshot yankee and the development of the cyber command would have been better spent on overcoming their skills deficit and the institutional issues that allowed that deficit to persist.

that is of course unless the department of defense is actually telling us a partial fiction and the cyber command arose out of early speculation that a foreign power might be involved. i imagine, however, that they'd have a much harder time selling the new budgetary requirements for such a development on speculation alone, so an imaginary foe would have been required, and a virus infecting classified systems would have provided excellent context for selling that story.