Friday, September 02, 2016

the anti-virus harm balance

anti-virus software, like all software, has defects. sometimes those defects are functional and manifest in a failure to do something the software was supposed to do. other times the defects manifest in the software doing something it was never supposed to do, which can have security implications, so we classify them as software vulnerabilities. over the years the software vulnerabilities in anti-virus software have been gaining an increasing amount of attention from the security community and industry - so much so that these days there are people in those groups expressing the opinion that, due to the presence of those vulnerabilities, anti-virus software does more harm than good.

the reasoning behind that opinion goes something like this: if anti-virus software has vulnerabilities then it can be attacked, so having anti-virus software installed increases the attack surface of the system and makes it more vulnerable. worse still, anti-virus software is everywhere, in part because of well-funded marketing campaigns but also because in some situations it's mandated by law. add to that the old but still very popular opinion that anti-virus software isn't effective anymore and it starts looking like a perfect storm of badness waiting to rain on everyone's parade.

there's a certain delicious irony in the idea that software intended to close avenues of attack actually opens them instead, but as appealing as that irony is, is it really true? certainly each vulnerability does open an avenue of attack, but is the software really opening avenues instead of closing them, or is it opening avenues as well as closing them?

if an anti-virus program stops a particular piece of malware, it's hard to argue that it hasn't closed the avenue of attack that piece of malware represented. it's also hard to argue that anti-virus software doesn't stop any malware - i don't think anyone in the anti-AV camp would try to argue that because it's so demonstrably false (anyone with a malware collection can demonstrate anti-virus software stopping at least one piece of malware). indeed, the people who criticize anti-virus software usually complain not about the set of malware stopped by AV being too small but rather that the set of malware stopped by AV doesn't include the malware that matters most (the new stuff).

so, since anti-virus does in fact close avenues of attack, that irony about opening avenues of attack instead of closing them isn't strictly true. but what about the idea that anti-virus software does more harm than good? well, for that to be true anti-virus software would have to open more avenues of attack than it closes. i don't know how many vulnerabilities any given anti-virus product has so i can't give an exact figure of how many avenues of attack are opened. i doubt anyone else can do so either (though i imagine there are some who could give statistical estimates based on the size of the code base). the other side of the coin, however, is one we have much better figures for. the number of pieces of malware that better known anti-virus programs stop (and therefore the number of avenues of attack closed) is in the millions if not tens of millions, and that number increases by thousands each day. can the number of vulnerabilities in anti-virus software really compare with that?

it's said that windows has 50 million lines of code. if an anti-virus product were comparable (i suspect in reality it would have fewer lines of code) and if that anti-virus product only stops 5 million pieces of malware (i suspect the real number would be higher) then in order for that anti-virus product to do more harm than good it would need to have at least one vulnerability for every 10 lines of code. that would be ridiculously bad software considering such metrics are usually estimated per 1000 lines of code.
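that back-of-the-envelope arithmetic can be sketched in a few lines of python (the figures are the illustrative assumptions from this post, not measured values):

```python
# back-of-the-envelope harm balance, using the illustrative figures
# from this post (assumptions, not measured values)

lines_of_code = 50_000_000     # generous size estimate for an AV product
malware_stopped = 5_000_000    # conservative count of malware stopped

# break-even defect density: one vulnerability per this many lines of
# code would make avenues opened equal avenues closed
breakeven_lines_per_vuln = lines_of_code / malware_stopped
print(breakeven_lines_per_vuln)   # 10.0

# defect densities are normally quoted per 1000 lines of code; even a
# terrible density of 1 vulnerability per KLOC falls far short
vulns_at_1_per_kloc = lines_of_code / 1000
print(vulns_at_1_per_kloc)        # 50000.0
```

even granting the anti-AV camp a defect density that would embarrass any vendor, the avenues opened come up two orders of magnitude short of the avenues closed.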

now one might argue (in fact i'm sure many will) that those millions of pieces of malware that anti-virus software stops don't really represent actual avenues of attack because for the most part they aren't actually being used anymore. they've been abandoned. counting them as closed avenues of attack isn't realistic. the counter-argument to that, however, is to examine why they were abandoned in the first place. the reason is obvious, they were abandoned because anti-virus software was updated to stop them. the only reason why malware writers continue making new malware instead of resting on their laurels and using existing malware in perpetuity is because once anti-virus software can detect that existing malware it generally stops being a viable avenue of attack. so rather than the abandonment of that malware counting against anti-virus software's record of closing avenues of attack it's actually closer to being AV's figurative body count.

there is still malware out there that anti-virus software hasn't yet stopped, and as that set is continually replenished it's unlikely that anti-virus software will stop all the malware. it has stopped an awful lot so far, however, so the next time someone says anti-virus software does more harm than good (due to its vulnerabilities) ask them for their figures on the number of vulnerabilities in anti-virus products and see how it compares with the number of things anti-virus software stops. i have a feeling you'll find those people are full of it.

Tuesday, September 01, 2015

there's a quality problem in the anti-malware industry

if you follow infosec news sources at all, by now you've probably heard about the claim made by an anonymous pair of ex-kaspersky employees that kaspersky labs weaponized false positives.

more specifically, the claim is that engineers at kaspersky labs were directed to reverse engineer competing products and use that knowledge to alter legitimate system files, inserting malicious-looking code into them so that they would both seem like files that should be detected and remain similar enough to the originals that the competing product would also act on the legitimate files, thereby causing problems for users of those competing products.

i've heard this described as fake malware, but for the life of me i can't see why it should be called fake. the altered files may not do anything malicious when executed, but they're clearly designed to exploit those competing products. furthermore, there is clearly a damaging payload. this isn't fake malware, it's real malware. it may launch its malicious payload in an unorthodox and admittedly indirect manner, but this is essentially an exploit.

some consider the detection of these altered files to be false positives because the files don't actually do anything themselves, but since they have malicious intent and indirectly harmful consequences, i think the only real false positives in play here are the original system files that are being mistaken for these modified files.

by all accounts, this type of attack on anti-malware products actually happened. what's new here is the claim that kaspersky labs was responsible at the direction of eugene kaspersky himself. there's a lot of room for doubt. the only data we have to go by so far, besides the historical fact of the attack's existence, is the word of anonymous sources (who potentially have an ax to grind) and some emails that, quite frankly, are easily forged. circumstantially there's also an experiment kaspersky participated in around the same time frame that has similar earmarks to what is being claimed except for the part about tricking competing products into detecting legitimate files as malware.

i don't expect we'll ever know for sure if kaspersky was behind the attacks. doubts have been expressed by members of the industry, but frankly i've seen too many things whitewashed or completely ignored (like partnerships with government malware writers) to take their publicly expressed statements at face value. there are certainly vendors i'd have a harder time believing capable of this but there just doesn't seem to be sufficient evidence that the claims are true. the problem is that i can't imagine any kind of evidence the anonymous sources are likely to have that isn't easy to repudiate. had they taken a stand at the time (like someone with actual scruples would have done) they would have been able to put their names behind their claims - they may have lost their jobs but they surely would have been able to find employment with a different vendor because hiring a whistle-blower would have been good PR.

however, as it stands now, the anonymous sources have to remain anonymous. if they're telling the truth then they are complicit in kaspersky's wrong-doing, and if they're lying they are throwing the entire industry under the bus for no good reason (because this claim fans the fires of that old conspiracy theory about AV vendors being the ones who write the viruses). either way, to have this claim linked to their real identities now would make them radioactive in the industry. no one would touch them, and for good reason.

long ago it used to be that the industry only employed the highest calibre of researchers. people who were beyond reproach. naturally, in order to grow, the industry has had to hire ever increasing numbers of people and old safeguards against undesirable individuals joining the ranks don't scale very well. increasingly people who aren't beyond reproach are being found amongst the industry's ranks and there appears to be no scenario where these two anonymous sources don't fall into that category. the inclusivity that the general security community embraces (and that the anti-malware industry is increasingly mimicking) has the consequence that blackhats are included. the anti-malware industry is going to have to either figure out if they're ok with that or figure out a way to avoid what the general security community could not.

Tuesday, December 16, 2014

no malware defeats 90% of defenses

yesterday, 'security expert' robert graham penned a blog post claiming that all malware defeats 90% of defenses - a claim made in answer to the FBI's claim that the attack on sony would have been just as successful against 90% of other companies. as you might well imagine, however, robert graham was in error.

the error isn't a straight-forward one, but it is one that most of the security industry makes. it's an error in framing.

the security industry likes to frame the problem as automaton vs. automaton because that facilitates the comforting lie they tell their customers. businesses see security (not incorrectly) as something that costs them time and money and so they search for ways to cut those costs. the security industry, flush with skillful sales people, tells businesses what they want to hear: that they can cut costs and automate much of security, leaving only a handful of personnel left to operate a little like janitorial staff - cleaning up messes and keeping the automaton running smoothly. likewise, the security industry tells consumers what they want to hear as well: that they just need to install a product and that product will take care of security for them automatically.

in security, however, your adversary isn't a thing, it's a person. malware doesn't defeat defenses any more than a pick and tension wrench defeats the tumblers in a lock. malware is an object, not a subject. it may have some small measure of autonomy (some more so than others), but it doesn't defeat anything - it's not the agent in that kind of scenario, it's simply a proxy for an intelligent adversary.

intelligent adversaries are notoriously good at outsmarting automatons. robert graham provided a wonderful example of that in his own post when he described creating brand new malware that went undetected by the anti-malware software being run by his targets. what he failed to do was take appropriate credit. it wasn't the malware that defeated those defenses, it was a person or persons with APT level skill (even if it didn't require quite that much skill to pull it off - he described it as easy, but easy is a relative term). the targets were compromised, not because they were using substandard defensive technology per se, but because they were relying on automatons to protect them against people.

in a battle of wits between an automaton and an intelligent adversary, the intelligent adversary has the advantage by definition.

so long as the security industry continues to tell their customers what they want to hear instead of what they need to know, those customers are going to continue relying on a stupid box to fend off smart people. that is a recipe for failure no matter what technology is involved.

Monday, November 24, 2014

stealing master passwords is just not that big a deal

by now a lot of people have heard the news that a new version of the citadel trojan steals the master password for password management software. a LOT of electrons have gone into reporting this new development over and over and over again, but it really doesn't seem like it's a big enough deal to warrant all this attention.

while it may be novel to use keylogging to steal specific passwords for password management software, password stealers and keyloggers are anything but new, and the biggest difference between having this new version of citadel on your system or a traditional keylogger is basically that citadel will be able to collect (and therefore compromise) all your passwords faster than a normal keylogger (all at once vs. piecemeal).

that's really all there is to it. from the perspective of a potential victim, citadel isn't doing anything really new, it's just doing it more efficiently. password stealing malware has been out there for a long time and password managers were never meant to combat that particular threat to password security. password managers are meant to facilitate the use of strong, unique passwords which in turn serves to mitigate the risk of compromises to remote systems - compromising the local system is an entirely different problem.

at the end of the day, you can't operate securely with a compromised computer. even if you were to use 2 factor authentication (which could conceivably render password stealing moot) everything else you enter or access would be exposed to potential theft or manipulation if you're using a compromised computer.

i realize it may seem awkward that a class of software that security pros have been promoting for years in order to improve security is now being targeted by malware, but it's only awkward because such a needlessly big deal is being made out of it. password management software still mitigates the same risks associated with remote compromises that it always did, and you're as hosed as you ever were in the event of a local compromise. nothing has actually changed for the people trying to keep their things secure so stop acting like this development changes anything - it doesn't.

Wednesday, September 17, 2014

the PayPal pot calling the Apple Pay kettle black

so if you haven't heard yet, PayPal took out a full page ad in the New York Times trying to drag Apple Pay's name through the mud based on Apple's unfortunate celebrity nude selfie leak. this despite the fact that PayPal happily hands out your email address to anyone you have a transaction with. in essence, PayPal has been leaking email addresses for years and not doing anything about it, so they shouldn't get to criticize others for leaking personal information.

what's the big deal about email addresses? while it's true that we often have to give every site we do a transaction on an email address, we don't have to give them all the same address. in fact, giving each site a different email address happens to be a pretty good way to avoid spam, but more importantly it's a good way to avoid phishing emails, and that's important where PayPal is concerned because PayPal is one of the most phished brands in existence.

unfortunately, because PayPal wants all parties in a transaction to be able to communicate with each other, they do the laziest, most brain-dead thing one can imagine to accomplish this: they hand out your PayPal email address to others, which is pretty much the worst email address to do that with. i have actually had to change the disposable email address i use with PayPal because they are apparently incapable of keeping that address out of the hands of spammers, phishers, and other email-based miscreants. furthermore, i also use their service less because i don't want to have to clean up after their mess.

at some point i may have to start creating disposable PayPal accounts and use prepaid debit cards with them. certainly if i were trying to hide from massive spy agencies then that would be the way to go, but if i'm only concerned with mitigating email-borne threats i really shouldn't have to go to that much trouble. there are other, more intelligent things that PayPal could, even should be doing.

  • they could share the email address of your choosing, rather than unconditionally sharing the one you registered with their service. that way you could provide the same address you probably already provided that other party when you created an account on their site. it shouldn't be too difficult for them to verify that address before sharing it with the other party since they already verify the one you register with.
  • they could offer their own private messaging service so that communication could be done through their servers (which would no doubt aid in conflict resolution).
  • they could provide a disposable email forwarding service such that the party you're interacting with gets a unique {something} address that forwards the mail on to the email address you registered on PayPal with, and once the transaction is completed to everyone's satisfaction the address is deactivated.
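that last proposal, the disposable forwarding service, could be sketched roughly like this. everything here (the class name, the alias format, the domain) is hypothetical, just to illustrate the idea:

```python
# hypothetical sketch of a disposable email-forwarding service: each
# transaction gets a unique alias that forwards to the user's real
# address until the transaction is closed, then goes dead

import secrets

class AliasService:
    def __init__(self, domain):
        self.domain = domain
        self.aliases = {}  # alias -> real address (None once deactivated)

    def create_alias(self, real_address):
        """mint a unique, unguessable alias for one transaction."""
        alias = f"{secrets.token_hex(8)}@{self.domain}"
        self.aliases[alias] = real_address
        return alias

    def resolve(self, alias):
        """where mail to this alias should be forwarded; None if dead."""
        return self.aliases.get(alias)

    def deactivate(self, alias):
        """called once the transaction completes to everyone's satisfaction."""
        self.aliases[alias] = None

service = AliasService("pay.example")
alias = service.create_alias("me@example.com")
assert service.resolve(alias) == "me@example.com"  # forwards while live
service.deactivate(alias)
assert service.resolve(alias) is None              # alias is now dead
```

the other party never learns the real address, and once the alias is deactivated any spam sent to it simply goes nowhere.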
they don't do anything like that, however. here's what you can do right now with the facilities PayPal makes available. it's a more painful and less intuitive process than anything proposed above, but it does work.
  1. before you choose to pay for something with PayPal, log into PayPal and add an email address (the one you want shared with the party you're doing a transaction with) to your profile. PayPal limits you to 8 addresses.
  2. confirm the address by opening the confirmation link that was sent to that address
  3. make that address the primary email address for your account
  4. confirm the change in primary email address (if you have a card associated with your PayPal account, PayPal may ask you to enter the full card number)
  5. at this point you can use PayPal to pay for something and the email address that will be shared with the other party is the one you just added to your PayPal account
  6. once you've paid with PayPal you will probably want to log back into PayPal, change the primary email address back to what it originally was (and confirm the change once again) and then remove the address you added for the purposes of your purchase. the reason you'll likely want to do this is because PayPal sends emails to every address it has on record for you, and those duplicate emails will get old fast.
most people aren't even going to be aware that they can do this to keep their real PayPal email address a secret from 3rd parties. as a result all manner of email-borne threats can and eventually will wind up in what would otherwise have been a trusted email inbox. make no mistake, this isn't PayPal providing a way to keep that email address private, this is a way of manipulating PayPal's features to achieve that effect. there are too many unnecessary steps involved for this to be the intended use scenario.

as such, PayPal is leaking a valuable email address by default every time you pay for something. yes Apple's selfie SNAFU was embarrassing to people, and yes if Apple doesn't do something about that now that they're becoming a payment platform it could be not just embarrassing but financially costly for victims, but PayPal is already assisting in similarly costly outcomes right now (not to mention potential malware outcomes) so they really have no right to be criticizing Apple. Apple, at least, is taking steps to correct their problems - what is PayPal doing?

Monday, September 08, 2014

on the strength of authentication factors

i ran across a story on friday about barclays bank rolling out biometric authentication for online banking and wound up starting a debate on twitter that i didn't have time for and couldn't easily fit into a series of tweets even if i did have time. essentially what it came down to was that i don't believe all authentication factors are equally strong and the statement that the barclays system was a "password replacement" raised a red flag for me.

the reason it raised a red flag for me is because single factor biometric authentication is something i've come across before, and not just in an article on the web or even as a user but as a builder. my first job out of university was with a biometric security company, and one of the biggest projects i had while working there was developing an authentication enhancement for windows logon. one of the requests made (and the one i fought the hardest against) was to allow logon with just the biometric. 

here's the problem with this idea - since windows didn't have biometric capabilities built in, the only way to add single factor biometric authentication in was to store more traditional authentication data that windows could accept (such as a password) and then pass that along to windows when the subject's biometric sample matched the registered biometric template. i should note that the article about barclays makes it clear they'll be doing the same thing since they say that barclays won't be storing customer biometric data on their servers. there will have to be a local biometric client that stores more traditional authentication data and passes it on when a biometric match is achieved. storing credentials is not exactly the safest thing in the world. it's not like you can just store a hash of the authentication data in this scenario because you have to be able to present the original, unmodified credentials to the authentication system.
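the design described above can be sketched as follows. the class, the matcher, and the threshold are all hypothetical stand-ins; the point is that the stored password has to be recoverable, not hashed, because it must be replayed verbatim to the OS:

```python
# sketch of the single-factor design described above: because the OS only
# accepts a password, the biometric client keeps a *recoverable* copy of
# that password and releases it on a template match. at best that copy is
# encrypted with a key that also lives on the same machine.

def similarity(sample, template):
    """toy matcher (hypothetical): fraction of positions that agree.
    real matchers score sample-vs-template similarity against a threshold."""
    matches = sum(x == y for x, y in zip(sample, template))
    return matches / max(len(sample), len(template))

class BiometricLogon:
    def __init__(self, template, stored_password):
        self._template = template
        # cannot be a hash: the original credential must be presented
        # to the authentication system when a match occurs
        self._stored_password = stored_password

    def logon(self, sample, threshold=0.95):
        if similarity(sample, self._template) >= threshold:
            return self._stored_password  # handed to the OS logon call
        return None

client = BiometricLogon(template=[1, 0, 1, 1, 0], stored_password="hunter2")
assert client.logon([1, 0, 1, 1, 0]) == "hunter2"  # good-enough sample
assert client.logon([0, 1, 0, 0, 1]) is None       # rejected sample
```

whoever can read that stored credential (or trick the matcher) gets the password itself, which is exactly the weakness being described.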

i balked at the idea of making windows less instead of more secure, but i acquiesced when the decision was made to keep the more secure 2 factor mode of operation (without traditional credential storage) in there as well, along with informing the users that biometric-only logon was less secure. 

it's not just less secure because of the credential storage, though, and this is where the twitter debate on friday ventured into. in the course of that job i had the opportunity to examine multiple biometric systems, such as face, voice, iris, etc. and i came away with 2 realizations: 1) the only biometrics that users will ever accept are non-invasive ones (no one wants sensors stuck into them), and 2) that lack of invasiveness makes it relatively easy to steal biometric samples from users, often without them even knowing. fingerprints can be lifted from anything you touch. recordings of your voice can be made without your knowledge. photographs of faces are ubiquitous and a high enough resolution image will capture your iris pattern. 

other authentication factors like passwords and tokens generally rely on restricting access to the authentication data, often through secrecy. when that secrecy is lost, such as when someone takes a photograph of a door key (which is a kind of token) it becomes relatively easy to reproduce the authenticator and gain access to what was being protected. biometrics, especially non-invasive ones, forgo this secrecy under the mistaken belief that reproducing the authenticator is difficult for biometrics. the reality, though, is that you don't have to reproduce a biometric sample, you only have to create an approximation that is good enough to fool the biometric sensor, which often isn't particularly difficult. optical sensors can be fooled with images, audio sensors can be fooled with recordings, the mythbusters once fooled a capacitance sensor by licking a photocopy of a fingerprint.

now hold on, i hear you say, isn't it also really easy to steal passwords? and isn't reproducing that authenticator the easiest of all? it's certainly true that in practice all kinds of things can affect how easy it is for an attacker to become illegitimately authenticated. for that reason i try to look at the upper bound of the strength of the various authentication factors. how strong is a system under ideal conditions, that is where everything goes right and legitimate parties don't make any mistakes.

for passwords, that ideal situation means that the user doesn't accidentally click on anything that would steal his/her password, doesn't get fooled by phishing sites, etc. in short, the attacker can't get the password from the user. it also means the attacker can't get passwords in transit (because that's been properly secured) or a password database from the service provider because no vulnerability is found in their system and their employees are likewise careful to avoid making mistakes. under this ideal situation the attacker's only way to succeed in gaining illegitimate entry is to perform an online brute force attack (no, not a dictionary attack, because the user didn't make the mistake of using something from a dictionary) and they'd have to go slow because the ideal provider would have rate-limited failed logon attempts. now you might say this is unrealistic, people make mistakes, and that's true in practice in the aggregate, but it is possible for an individual to do everything right, and it is also possible for attackers to not be able to find any way to attack the provider in order to get the password database. this isn't how strong password protection always is, but rather the ideal we hope to achieve by making our systems secure and avoiding making mistakes, and sometimes in limited cases this is achieved.
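to give a feel for how strong that ideal case is, here's some rough arithmetic for a rate-limited online brute force attack. the password length, alphabet, and rate limit are assumptions picked for illustration:

```python
# rough arithmetic for the ideal password case: an online brute-force
# attack against a truly random password, throttled by the provider's
# rate limiting of failed logon attempts. all figures are assumptions.

alphabet = 94                      # printable ASCII characters
length = 10                        # a random 10-character password
guesses = alphabet ** length       # total search space

rate_limit = 10                    # allowed attempts per hour
hours = guesses / 2 / rate_limit   # on average, half the space is searched
years = hours / (24 * 365)
print(f"{years:.2e} years on average")
```

the answer comes out to hundreds of trillions of years; under ideal conditions the online brute force is the only attack left, and it's hopeless.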

for tokens, let's consider the ideal situation to be comparable to that for passwords but on top of that let's consider the strongest token possible (ie. not a door key). let's consider a token that produces one-time-passwords (without any vulnerabilities that would make those passwords easy to predict) so that even brute force attacks become much harder. on the surface this seems even stronger than passwords, but there's a chink in the armour and apple's recent icloud problems are a good example. tokens can be lost or stolen so there needs to be a way to recover from that problem. while our "ideal situation" precludes our user from losing their token, it does not preclude our service provider from providing users with a way to cope with the loss of their tokens. the strongest way to do this is to provide the user with pre-generated one-time-passwords ahead of time. this can work for an individual user who is careful and doesn't make any mistakes but as we've previously seen our "ideal situation" does not extend to the point of saying all users make no mistakes, so the pre-generated one-time-passwords are going to fail for reasons such as never being printed out and put in a safe place, or not being able to get to that safe place because the user is traveling, etc. what's a service provider to do then? so far, their best option might be to use traditional passwords as a fall back, and if they do then the token system becomes only as strong as passwords, because although our ideal user didn't lose their token, the provider can't really know that the user didn't lose it (or worse that it was stolen) and has to accept attempts to use the password fall back. while there is room for tokens to be stronger than passwords, the price is that only ideal users will be able to recover in the event of a lost token, and that price may be more than service providers are willing to accept.
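the pre-generated recovery scheme described above is simple enough to sketch. the code format and batch size are assumptions, but the essential property is shown: each code works exactly once, so a lost token never forces a fallback to a static password:

```python
# sketch of token-loss recovery via pre-generated one-time-passwords:
# the provider mints a batch of single-use codes up front for the user
# to print out and put somewhere safe. format and count are assumptions.

import secrets

def generate_recovery_codes(count=10):
    """mint a batch of unguessable single-use recovery codes."""
    return {secrets.token_hex(5) for _ in range(count)}

def redeem(codes, attempt):
    """consume a code if it's valid; each code works exactly once."""
    if attempt in codes:
        codes.remove(attempt)
        return True
    return False

codes = generate_recovery_codes()
one = next(iter(codes))
assert redeem(codes, one) is True        # first use succeeds
assert redeem(codes, one) is False       # replay fails: the code is spent
assert redeem(codes, "bogus") is False   # unknown codes are rejected
```

the catch is exactly the one described above: this only helps the user who actually printed the codes and can get to them, which is why providers so often fall back to a static password instead.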

for biometrics, we once again consider an ideal user who does nothing wrong, and an ideal service provider who likewise makes no mistakes. in spite of doing nothing wrong the user's voice can still be recorded, their face can still be photographed (in most cultures since facial covering is relatively rare), etc. simply interacting with the world cannot qualify as doing something wrong or making a mistake. acquiring the information necessary to construct a counterfeit authenticator is easy compared to passwords and tokens because no effort is taken to conceal that information and the cultural adjustments needed to change that are beyond what i think would be reasonable to expect. the difficulty in attacking a biometric authentication system boils down to the difficulty in fooling a sensor (or sometimes 2 sensors as people have tried to strengthen fingerprint biometrics with so-called "liveness tests"), and that difficulty has been consistently overestimated in the past.

this is why i consider biometrics weaker than passwords - because even when everyone does everything right it's still fairly easy to fool the system. as such, when someone (especially a bank) provides people with an authentication system that replaces passwords with biometrics, i think that should raise an alarm. even at that prior job of mine it was conceded that that mode of operation was more about convenience than it was about security. convenience is a double-edged sword, it can make things easier for legitimate users and attackers alike if you aren't careful. using biometrics in a 2 factor authentication system may provide more security than any single factor authentication system can, but biometrics on its own? there's a reason some people have started saying that your biometric is your username, not your password. don't replace passwords with it (at least not without having someone present to guard against funny business - which isn't an option for online banking).

Monday, June 30, 2014

i wouldn't bet on it

last year cryptography professor matthew green made a bet with mikko hypponen that by the 15th of this month there would be a snowden doc released that showed that US AV companies collaborated with the NSA. he has since accepted that he lost the bet to mikko, but should he have?

i mentioned to matthew the case of mcafee being in bed with government malware writing firm hbgary and mikko chimed in that hbgary wasn't an AV company and being partners with them wasn't enough to win the bet. aside from the fact that this is the first time after all these years that i've seen a member of the AV industry publicly comment on the relationship between mcafee and hbgary (i guess managing matthew's perception of AV is more important than managing mine), something about mikko's response rang hollow.

one way to interpret the situation with hbgary is to view them as government contractors whom mcafee endorsed, advertised, and helped get their code onto the systems of mcafee's customers (hbgary makes a technology that integrates with mcafee's endpoint security product). that certainly would have given hbgary access to systems and organizations they might have had difficulty getting otherwise. i have no idea if that access was ever used in an offensive way, though, so this line of thought is a little iffy.

another way to interpret the situation is to directly contradict mikko and admit that hbgary is a member of the AV industry. after all, they make and sell technology that integrates into an endpoint security product. they may only be on the fringe of the industry, but what more do you have to do to be a member of the industry than make and sell technology for fighting malware? the fact that they also made malware for the government makes them essentially a US AV company that collaborated with the government in one of the worst ways possible.

i feel like this should be enough to have won matthew green the bet, at least in spirit, but the letter of the bet was apparently that a snowden doc would reveal it and the revelation about mcafee and hbgary actually predates snowden's leaks by a number of years. 

so, the question becomes are there any companies that happen to be members of the AV industry and also happen to have been fingered by a snowden leak? it turns out there was (at least) one. they were probably forgotten because they're not just an AV vendor, but AV vendor does happen to be one of the many hats that microsoft wears (plenty of security experts were even advising people to drop their paid-for AV in favour of microsoft's offering at one point in time), and microsoft was most certainly fingered by snowden docs. the instances where microsoft helped the government may not have involved their anti-malware department, but the fact remains that a company that is a member of the AV industry was revealed by snowden documents to have collaborated with the government.

i imagine mikko could find a way to argue this doesn't count either - i admit it's not air-tight - but given how close it meets both the spirit and (as i understand it) the letter of the bet, i think mikko should match the sum he had matthew pay to the EFF and pay it to an organization of matthew's choosing. i won't bet on that happening, though.